Broadcom's "10 billion big order" was exposed, and the mysterious customer was actually OpenAI?
Time: 2025-09-13


OpenAI is going to make its own chip!


According to the Financial Times, OpenAI has joined forces with Broadcom, the major American chip maker, to co-design its own chip dedicated to artificial intelligence, with mass production planned to begin next year.


This means OpenAI no longer wants to rely solely on other companies' chips (such as Nvidia's), but intends to build its own "AI engine". The move is a bit like Tesla developing its own autonomous-driving chip: better matched to its own technology, and less dependent on external suppliers. For OpenAI, mastering the core hardware could be a key step toward more powerful and efficient AI systems.


01


| The AI "arms race" has entered the 2.0 era

Broadcom CEO Hock Tan announced on the latest earnings call that the company had won a fourth major customer for custom AI chips, with orders worth $10 billion. That customer, it turns out, is OpenAI. The market boiled over as soon as the news broke: lifted by both this blockbuster order and a strong earnings report, Broadcom's stock rose 4.5% in after-hours trading.


Investors are voting with real money: this is not just an order, it is another reshaping of the AI power landscape.


Interestingly, OpenAI is not building this chip to sell; it is for internal use, dedicated to running its own AI models, such as the next generation of GPT.


The move itself is not new. Tech giants such as Google, Amazon, and Meta began designing their own AI chips long ago, both to reduce dependence on Nvidia and to tailor the hardware to their own AI workloads. Now OpenAI has officially joined the "self-developed chip club", determined to control the entire AI chain from software to hardware.


In the past, the rule was "whoever has the GPUs has the computing power"; now it is "whoever has self-developed chips has the future".


02


| On the battlefield of AI chips, Broadcom's "XPU" is coming

When it comes to AI chips, the first things that come to mind are probably Nvidia's GPUs or AMD's Instinct series. But in 2025 a new term is quietly emerging: XPU, the code name for Broadcom's custom AI chips. This is not just a rebranding; it is a "dimensionality-reduction strike", an asymmetric challenge to the AI computing power landscape.


What is an XPU? Not "general-purpose", but "specialized".



Put simply, a GPU is like a universal wrench: it can turn any screw, but not always quickly. An XPU is like a custom wrench, made only for your screws, so it tightens in one turn. Broadcom's XPUs are AI accelerators tailored for "super users" like OpenAI and Google.


Broadcom's stock has risen more than 30% this year. Many attribute this to riding the AI wave, but the real driving force is the market's aggressive bet on the future of custom chips. HSBC predicts that by 2026, Broadcom's custom chip business will grow far faster than Nvidia's GPU business.


Broadcom has provided design support for Google's TPU, Amazon's Trainium, Meta's MTIA, and now OpenAI's XPU. As more and more large companies want their own AI chips, Broadcom's custom-silicon business keeps getting busier.


For now, Nvidia is still the "king" of AI hardware: the H100 and B200 are hard to get, and the CUDA ecosystem remains a formidable moat. But the trend is shifting.




In other words, if Nvidia is "paving the roads", Broadcom is "building the high-speed rail": the former popularizes AI, the latter accelerates it. Nvidia is still strong, but its rivals are closing the gap, and players like Broadcom, which help large companies build bespoke silicon, are quietly taking more and more market share.


03


From "computing power consumer" to "computing power definer"

OpenAI CEO Sam Altman has never hidden one thing: when it comes to computing power, the more the better.


Whether it is serving more ChatGPT users or training more powerful AI models (such as the much-rumored GPT-5), massive computing power is required, and OpenAI has always been one of the most voracious consumers of it.


In fact, OpenAI is one of the earliest major customers for Nvidia's AI chips and has bought huge numbers of Nvidia GPUs over the years, a true "computing power maniac". Just last month, Altman revealed that, with demand for new models skyrocketing, the company is making computing power its top priority and plans to "double the size of the computing cluster in the next five months".


But here is the problem: buying only other companies' chips will sooner or later run into bottlenecks, such as high prices, tight supply, and hardware that is not necessarily fully tuned to its own models.


So OpenAI decided to do it itself and develop its own chips. The core purpose is clear: to address its "infinite thirst" for computing power at the root. That is the deeper logic of its alliance with Broadcom to build the XPU: it is not that OpenAI no longer wants to buy from Nvidia, but that it must control the lifeblood of its own computing power, no longer just a "consumer" but also a "definer".


04


Jingtai Observation | The three major opportunities behind "computing power sovereignty"

1. Keep an eye on Nvidia, but understand its "ceiling"

Nvidia remains the "water and power utility" of AI computing and cannot be replaced in the short term. In the long run, however, major customers' self-developed chips will gradually divert demand for its high-end GPUs. Investment strategy: hold core positions, avoid heavy concentration, and be wary of valuations that have already priced in too much future growth.


2. Position in the "computing power infrastructure chain", especially the custom-silicon track

OpenAI's cluster expansion and move into chip making will pull along the entire industry chain:
Custom chip design: Broadcom (AVGO), the direct beneficiary;
Advanced processes: Taiwan Semiconductor Manufacturing Company (TSM), the "manufacturing base" of the XPU;
High-speed interconnect: Marvell (MRVL), Coherent (COHR), for fast chip-to-chip connection and data transfer;
Liquid cooling and power supply: the "air conditioning and heart" of AI data centers, with demand surging in parallel.


3. Pay attention to the new "Compute-as-a-Service" model

In the future, OpenAI may no longer just "sell APIs", but sell integrated "computing power + models + tools" services.


