The world's first cross-state AI gigafactory is launched, and the computing power war has entered a new era!
Time: 2025-11-21


Recently, Microsoft officially launched its first "AI Gigafactory" - not a single building, but a system that links multiple data centers in different states into one "virtual supercomputer".


Through this distributed architecture, Microsoft can integrate originally scattered computing power resources and train AI models on an unprecedented scale. Complex training tasks that used to take months can now be done in weeks.


This marks a new stage in the development of AI infrastructure: from standalone data centers to cross-regional collaboration, forming a single large, efficient AI computing system that spans sites like a planet-scale network.


01


| The world's first cross-state "AI Gigafactory"

In October, Microsoft's next-generation AI data center in Atlanta, named "Fairwater 2", was put into operation and connected to Wisconsin's first Fairwater site in real time via a dedicated high-speed network. The entire system integrates hundreds of thousands of the latest NVIDIA Blackwell GPUs, becoming the world's first cross-state collaborative AI computing cluster.



Why do this?


In the past, each data center operated independently; now, AI models keep growing larger and training tasks more complex, and the computing power of any single site is no longer enough.


Alistair Speirs, head of infrastructure at Microsoft Azure, explained: "Traditional data centers are designed for millions of different customers running various small tasks; our 'AI Gigafactory' runs one huge AI job across millions of pieces of hardware at the same time."


The system is also equipped with exabytes of storage (one exabyte is roughly a billion GB) and millions of CPU cores, with the goal of supporting future ultra-large AI models with trillions of parameters - covering the whole pipeline of pre-training, fine-tuning, reinforcement learning, and evaluation.


At present, technology giants are fiercely competing for AI computing power advantages. According to the Wall Street Journal, Microsoft plans to double the total area of its data centers in the next two years. In the last fiscal quarter, Microsoft's capital expenditure exceeded $34 billion and continues to increase. The industry expects global technology companies to invest as much as $400 billion in AI this year.


This "AI Gigafactory" is not only a technological breakthrough, but also a key step for Microsoft to respond to the explosive demand for computing power and consolidate its leading position in the field of AI infrastructure.


Its customers include: OpenAI, Microsoft's own Copilot, France's Mistral AI, and even Elon Musk's xAI - almost all of the top AI players today.


02


| AI WAN and computing power allocation strategy

Core technologies optimized for AI:

  • High-density dual-layer structure: Through the innovative two-layer layout, more GPUs can be installed in a smaller space, shortening the communication distance between chips and significantly reducing latency.

  • Top Computing Power Configuration: Powered by NVIDIA's latest GB200 NVL72 rack system, scalable to hundreds of thousands of Blackwell-based GPUs.

  • High-efficiency liquid cooling system: To cope with the huge amount of heat generated by high-density GPUs, Microsoft has developed closed-loop liquid cooling technology that consumes almost no water - the initial water injection of the entire system is equivalent to the water consumption of 20 American households in a year.

  • Internal high-speed interconnection: All GPUs are tightly connected through ultra-high-speed networks, ensuring fast and seamless data flow between chips.
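The rack-level numbers above can be sanity-checked with simple arithmetic. A GB200 NVL72 rack pairs 72 Blackwell GPUs into one NVLink domain, so scaling to "hundreds of thousands" of GPUs implies thousands of racks. A minimal sketch, using 300,000 GPUs as an illustrative target (the article does not give an exact count):

```python
import math

GPUS_PER_RACK = 72      # one GB200 NVL72 rack links 72 Blackwell GPUs
TARGET_GPUS = 300_000   # illustrative stand-in for "hundreds of thousands"

# Racks needed to reach the target GPU count
racks_needed = math.ceil(TARGET_GPUS / GPUS_PER_RACK)
print(racks_needed)  # → 4167
```

At this scale the dual-layer, high-density layout matters: thousands of racks must sit close enough together that chip-to-chip communication stays fast.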


Scott Guthrie, head of Microsoft's Cloud & AI division, said: "AI leadership is not just about stacking more GPUs; the key is to make them work together like a whole."


Fairwater is the culmination of Microsoft's years of engineering experience, aiming to meet the explosive future demand for AI with real, delivered performance.


A single data center can no longer meet the training needs of trillion-parameter models. Microsoft has therefore built a dedicated "AI highway": an AI wide area network (AI WAN) with 120,000 miles of dedicated optical fiber connecting data centers across states, enabling near-light-speed, congestion-free data transmission.


Azure CTO Mark Russinovich noted: "Any lag in the network stalls the entire training run. Our goal is to keep every GPU running at full capacity."
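The stall problem can be made concrete with a back-of-envelope calculation. In synchronous data-parallel training, every step ends with GPUs exchanging gradients, so the step takes as long as the slower of compute and communication; any network lag lengthens every step for every GPU. A rough sketch with entirely assumed numbers (bf16 gradients, a notional 400 Gbit/s effective cross-site link):

```python
# Back-of-envelope for synchronous training across sites (all numbers illustrative).
PARAMS = 1e12              # trillion-parameter model
BYTES_PER_GRAD = 2         # bf16 gradient per parameter
CROSS_SITE_GBPS = 400      # assumed effective cross-site bandwidth, Gbit/s

grad_bytes = PARAMS * BYTES_PER_GRAD          # 2 TB of gradients per step
link_bytes_per_s = CROSS_SITE_GBPS * 1e9 / 8  # Gbit/s -> bytes/s

comm_seconds = grad_bytes / link_bytes_per_s  # time to move one gradient copy
compute_seconds = 5.0                         # assumed per-step compute time

# A synchronous step waits for the slowest component:
step_seconds = max(compute_seconds, comm_seconds)
print(comm_seconds, step_seconds)  # → 40.0 40.0
```

Here communication, not compute, dominates the step, which is exactly why Microsoft lays dedicated high-bandwidth fiber: until the network is fast enough, extra GPUs just spend more time waiting.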


The choice of a multi-site layout over centralized construction comes down to practical limits: no single region can provide enough land and stable electricity, and spreading the load across multiple power grids avoids putting excessive stress on any one local community.


03


| The "computing power arms race" under the surge in demand

Microsoft built the "AI Gigafactory" to cope with the explosive growth in AI computing demand and stay ahead in fierce industry competition. Although the company has previously adjusted some data center leasing plans, infrastructure head Alistair Speirs emphasized that this was merely "capacity-planning optimization" - customer demand has long outstripped supply capacity.


Microsoft is not the only player in this race:

  • Amazon is building a hyperscale data center cluster called "Project Rainier" in Indiana, covering 1,200 acres and expected to consume up to 2.2 GW of electricity;

  • Meta and Oracle also announced huge new construction plans;

  • AI startup Anthropic also announced that it will invest $50 billion in computing infrastructure in the United States.


Faced with this situation, Microsoft chose a differentiated path: not just building more server halls, but connecting multiple data centers into a unified distributed system. This improves overall efficiency and better serves the ultra-large model training needs of top customers such as OpenAI, Mistral, and xAI.


As Microsoft executive Scott Guthrie said: "We make these AI sites work together as a whole to help customers turn breakthrough models into reality."


In this global "arms race" over AI infrastructure, whoever commands more powerful, more efficient, and better-coordinated computing power may hold the upper hand.


04


How to participate in this "wave of computing infrastructure"?

1. Direct target: core players in AI infrastructure

Microsoft (MSFT): Not only a software giant, but one of the world's few operators of AI computing power at scale;

Nvidia (NVDA): Blackwell chips are the "heart" of gigafactories, and demand continues to explode;

Taiwan Semiconductor Manufacturing Co., Ltd. (TSM): Its near-monopoly on advanced process nodes makes it irreplaceable as the foundry for AI chips.


2. Indirect benefits: the computing power supply chain

Liquid cooling technology: e.g. CoolIT, and China-listed Gaolan and Invic;

High-speed optical modules: 800G/1.6T demand surged, focusing on Zhongji Innolight and Xinyisheng;

Data center REITs, such as Digital Realty (DLR), stand to benefit long-term from AI-driven data center expansion.


Risk warning: high investment ≠ high return

Microsoft admits that current demand for computing power far exceeds supply capacity, but several risks remain:

  • Whether customers can keep paying at this scale is uncertain;

  • If AI training efficiency improves (e.g. as MoE architectures spread), demand per unit of computing power may fall;

  • Policy risk: disputes over data centers "grabbing" grid power have flared in several US regions and may bring regulatory restrictions.


TEL: 18117862238
Email: yumiao@jt-capital.com.cn
Address: 20th floor, Taihe International Financial Center, High-tech Zone, Chengdu

Copyright © 2021 jt-capital.com.cn All Rights Reserved 

Copyright: JamThame capital 粤ICP备2022003949号-1  
