
Google is spending real money to build its own AI "circle of friends": it is willing to fund data-center operators to build out infrastructure, but with one condition - they must use Google's own TPU chips.
At the same time, AMD is pushing its own AI chips by guaranteeing customer loans, lowering the financial bar for those customers to buy AMD hardware.
The market widely sees both moves as copies of Nvidia's earlier "CoreWeave model": bypass Amazon, Microsoft, and Google - giants that either build their own chips or are already deeply tied to Nvidia - and instead back a batch of emerging AI cloud providers (so-called "Neoclouds"), turning them into direct sales channels to grab a larger share of the AI chip market.
Winning over mining companies just to push its own AI chips
According to people familiar with the matter, Google plans to invest about $100 million in AI cloud computing startup Fluidstack, at a valuation of up to $7.5 billion.
This is not just about investing money for returns - it is also a strategic move to lock in position.
"Emerging cloud" companies such as Fluidstack specialize in providing computing power to AI companies and are an important new force in the market. Nvidia previously relied heavily on backing CoreWeave (letting it buy GPUs in bulk), successfully bypassing traditional cloud giants such as Amazon and Microsoft to open up a new market.
Now Google wants to follow Nvidia's path: use investment to help Fluidstack expand rapidly, and in doing so push more computing-power providers onto Google's own TPU chips.
Beyond that, Google is also targeting cryptocurrency miners in transition.
Companies like Hut 8, Cipher Mining, and TeraWulf already own large-scale data centers and are rushing to shift from "mining" to "running AI". Google has backstopped financing for their conversion projects - with one clear term: use my TPUs.
AMD's aggressive bet: can't sell them? I'll rent them!
Compared to Google's direct investment, AMD's approach is bolder and more risky.
AMD reportedly guaranteed a $300 million loan to AI data center startup Crusoe - money from Goldman Sachs, earmarked specifically for buying AMD's AI chips.
The boldest part is a backstop clause: if Crusoe buys the chips but cannot find customers for them, AMD promises to rent the chips back itself.
In effect: just buy them; whether they get used or not, I will cover it. This kind of deal can quickly boost sales in the short term and help AMD grab market share.
But the risk is also extremely high: if the AI boom cools and demand declines, these rented-back chips become AMD's own burden, directly dragging down the company's finances.
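The asymmetry of such a backstop can be sketched with a toy calculation. All numbers below are invented for illustration - the article discloses none of the deal's actual chip counts, prices, or rental rates:

```python
# Hypothetical sketch of a chip-purchase backstop (all numbers invented).
# The customer borrows to buy chips; if utilization falls short, the
# chipmaker rents back the idle capacity, absorbing the shortfall itself.

def backstop_exposure(chips_bought: int, utilization: float,
                      rent_per_chip: float) -> float:
    """Chipmaker's annual rent-back cost for idle chips under the guarantee."""
    idle_chips = chips_bought * (1.0 - utilization)
    return idle_chips * rent_per_chip

# If demand holds (90% utilization), the guarantor's cost stays small...
boom = backstop_exposure(10_000, 0.90, 12_000)
# ...but if the AI boom cools (30% utilization), the burden balloons.
bust = backstop_exposure(10_000, 0.30, 12_000)

print(f"boom exposure: ${boom:,.0f}")
print(f"bust exposure: ${bust:,.0f}")
```

The point of the sketch: the guarantor's downside scales linearly with idle capacity, so a demand slump multiplies its liability even though the upfront "sale" was booked long ago.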
Why are Google and AMD taking a detour? Because the main road is blocked
The big cloud vendors (such as Amazon AWS and Microsoft Azure) are the largest buyers of AI chips, but they have little interest in Google's TPUs: on the one hand, they see Google as a competitor; on the other, Amazon is pushing its own self-developed AI chips and has no desire to use someone else's.
The result: Google's TPUs struggle to break into the mainstream cloud market.
So, like AMD, Google can only find another way - back neutral "emerging cloud" companies (such as Fluidstack and Crusoe) and open sales channels through these new players, bypassing the giants' blockade.
There is also an internal fight: should the TPU team be spun off?
To accelerate TPU commercialization, some people inside Google have even proposed spinning off the TPU team into a separate company, so it could attract outside investment and expand faster. Google quickly denied the idea, emphasizing that TPU must remain deeply integrated with core businesses such as the Gemini large models, and that a spin-off would weaken that advantage.
Don't just watch the chips - watch who wins the computing-power ecosystem
For U.S. stock investors:
Nvidia (NVDA): still the first choice; the deepest ecosystem moat, hard to shake in the short term.
AMD (AMD): high risk, high upside; a bet on rising AI-chip penetration, but watch closely the actual utilization at customers such as Crusoe.
Google (GOOGL): TPU progress is a long-term indicator to track, but its short-term contribution to profits is limited.
Mapping of A-shares/Hong Kong stocks:
Watch domestic AI chip and server makers (such as Cambricon, Hygon, and Sugon): if the supply of American chips becomes unstable, the domestic-substitution logic strengthens. Data center operators able to host multiple kinds of compute (including TPU and MI300) may become new partnership targets.
Risk warning:
If AI capital expenditure cools in the second half of 2026, AMD's backstop model could backfire on its cash flow. If Google cannot break through its capacity bottleneck, TPU market share may stay in the single digits for a long time.
