
Nvidia recently made a big move in Silicon Valley: spending about $20 billion to acquire the core technology and team of AI chip startup Groq.
The deal, however, is not a traditional acquisition but a more creative arrangement: Nvidia takes a non-exclusive license to Groq's technology (meaning Groq can still license it to others) while hiring away Groq's founders and executive team wholesale.
Why structure it this way? Two main reasons:
Strengthening its technical edge: Groq designs highly efficient AI inference chips that run AI applications faster and more cheaply, and Nvidia wants to cement its lead in AI computing.
Sidestepping antitrust scrutiny: an outright acquisition of Groq might draw regulators' attention, whereas the combination of technology licensing plus talent hiring lets the integration proceed more smoothly.
Notably, the $20 billion price is nearly three times the $6.9 billion valuation Groq carried just a few months earlier, a measure of how highly Nvidia values the technology.
Filling a gap while blocking rivals
Nvidia CEO Jensen Huang made it clear in an internal email: the hefty price paid for Groq is meant to shore up a key weakness in AI "inference".
What is "inference", and why does it matter?
Training: teaching an AI model to "learn", for example by training a large model on massive amounts of data. Nvidia is already the undisputed leader here.
Inference: putting the AI to "work", for example answering a user's chatbot question quickly. This calls for chips that are cheap, power-efficient, and fast to respond.
But here is the problem: Nvidia's current GPUs, however powerful, are too expensive, too bulky, and too power-hungry to deploy at scale for inference. The market badly needs cheaper, more efficient alternatives.
Groq's technology is exactly the remedy: the company specializes in low-latency, high-efficiency AI inference chips, and on some tasks they process even faster than Nvidia's.
While Groq's first generation has not seriously threatened Nvidia, its second and third generations are on the way, and that is what Nvidia cannot afford to sit out. Chip analyst Dylan Patel suggests Nvidia saw the potential of Groq's next-generation technology and simply "folded it in" preemptively, taking both the technology and the talent.
Why license and poach instead of buying outright?
This has become a standard playbook for tech giants in recent years; Microsoft, Amazon, and Google have all done it. The advantages: it avoids strict antitrust review (you buy the technology and the team, not the company), and it lets you absorb core capabilities quickly without carrying the whole company's baggage.
Groq's dilemma: good technology, hard to commercialize
Even though AI chip companies like Groq have raised billions of dollars in investment, genuinely shaking Nvidia's dominance of the high-end AI chip market remains very difficult.
Why? The key is that Nvidia does not just sell chips; it has built a powerful "ecosystem", above all its CUDA programming platform.
The vast majority of AI developers worldwide write their code against CUDA, and switching to another vendor's chip means rewriting it, which is costly and painful. That makes customers reluctant to leave Nvidia, creating powerful "stickiness".
Groq, for all its technical bright spots, has had a rough stretch recently: it cut its 2025 revenue forecast by three-quarters. The reason: a shortage of the data-center capacity where its chips were to be deployed pushed orders into next year.
Bear in mind that as recently as July, Groq was confidently projecting cloud revenue above $40 million and total annual sales above $500 million.
The sharply lowered target shows that challenging a giant is far harder than it looks.
Competition is heating up, along different paths
Nvidia may be dominant, but its rivals are not idle:
Google: its in-house TPU chips keep getting stronger; even Apple and Anthropic use them to train large models;
Meta and OpenAI: building their own inference chips to reduce dependence on Nvidia;
Other chip companies: also consolidating at speed; Intel is in talks to acquire SambaNova, Meta has bought Rivos, and AMD has absorbed the Untether AI team.
Jingtai view | Short-term positive, long-term caution
For Nvidia (NVDA):
Short term: the inference push strengthens the "full-stack AI" narrative and may lift the stock;
Medium term: the $20 billion outlay is manageable (some $60 billion in cash plus roughly $20 billion in quarterly profit), but it signals a more defensive posture;
Long term: if a closed ecosystem pushes customers into a collective "de-Nvidia" movement, a growth ceiling will appear.
Competitive landscape for AI chips:
AMD and Intel: accelerating in-house development and M&A, and may emerge as alternative beneficiaries;
Domestic compute (Huawei Ascend, Cambricon, etc.): global customers' appetite for a "second supplier" is growing, opening a window for domestic substitution.
For the primary market:
Pure-hardware AI chip startups will find fundraising ever harder: they either get "licensed and absorbed" by giants or get stuck in the quagmire of commercialization. Future opportunities lie in hardware-software co-design, vertical applications, and open-source ecosystems.





