"Lobster" eats memory like crazy, and DRAM ushers in an unexpected "blood recovery"?
Time: 2026-04-05


Morgan Stanley argued in a March report that once AI enters the "executive era", demand for ordinary memory (DRAM) will far exceed demand for high-end HBM, and DRAM may even become the most sought-after chip in all of AI infrastructure. That would make the memory industry's boom cycle longer and stronger than anyone expected.


Agentic AI, represented by OpenClaw, is enabling AI to actually "perform tasks": controlling robots, operating software, and automating workflows. This shift is reshaping the chip demand landscape.


01


| AI that "does things" eats more memory than AI that "thinks about things"

The core of the Morgan Stanley report fits in one sentence: "doing things" requires more memory (DRAM) than "thinking about things". It sounds simple, but behind it lies a fundamental shift in how AI is used.


In the past, large models (chatbots such as ChatGPT) mainly "thought about things": you asked a question, the GPU quickly computed an answer, and the whole process was GPU-dominated. Memory (DRAM) played only a supporting role, temporarily holding data; the real bottleneck was GPU compute.


But now a new generation of AI agents (such as OpenClaw) is starting to "do things". It doesn't just answer questions; it actually completes tasks: logging in to more than 50 platforms such as WhatsApp, Telegram, and Slack simultaneously; automatically searching the web, reading files, running code, calling APIs, and operating the computer; and finally integrating the multi-step operations into one complete result for you.


This "executive AI" works in a completely different way: it is no longer a one-time reasoning, but a series of steps coordinated and scheduled; Many tasks actually run on the CPU, and the CPU often takes longer than the GPU. More importantly, it constantly remembers intermediate results, shares context, and accesses the cache—all supported by memory.


The result is that memory has gone from a backstage role to the new bottleneck of the entire system. The more work AI can do, the greater the demand for DRAM. This also explains why memory prices are soaring and why the shortage is likely to last longer.


02


| OpenClaw: the harder the AI works, the harsher its memory demands

Morgan Stanley analyzed the popular agent tool OpenClaw in detail, and the conclusion is clear: memory (DRAM) is the number-one bottleneck for this type of AI; every other piece of hardware is secondary.


OpenClaw has two modes of use, and both are extremely memory-dependent:

1. Lightweight mode (calling external large models such as GPT-4 or Claude)

Even when it runs no local model and merely acts as an "intelligent relay station", it still eats memory, because the underlying runtime is Node.js, which itself has a large memory footprint.


Minimum requirement: 2GB of RAM; for stable operation, at least 4GB is recommended. Here the bottleneck is no longer the GPU or CPU, but whether system memory is sufficient.
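You can see a Node.js process's own footprint directly with the standard `process.memoryUsage()` API; a minimal sketch (the printed numbers will vary by machine and workload):

```typescript
// Inspect this Node.js process's memory footprint.
// process.memoryUsage() is a standard Node.js API.
const mu = process.memoryUsage();
const toMB = (bytes: number): number => Math.round(bytes / 1024 / 1024);

console.log(`rss       : ${toMB(mu.rss)} MB`); // total resident memory
console.log(`heapTotal : ${toMB(mu.heapTotal)} MB`); // V8 heap reserved
console.log(`heapUsed  : ${toMB(mu.heapUsed)} MB`); // V8 heap in use
console.log(`external  : ${toMB(mu.external)} MB`); // buffers outside the V8 heap
```

Even an idle Node.js process reserves tens of megabytes before any application work begins, which is why a Node.js-based agent has a meaningful RAM floor.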


2. Local mode (running AI models directly on the machine)

This is the "memory killer". Not only system memory, but also video memory (graphics DRAM): system memory (DRAM) starts at 32GB, running 7 billion ~ 8 billion parameters for small models: requires additional 8GB of video memory, running 13 billion ~ 70 billion parameters for medium and large models: requires 16–24GB video memory, running large models such as Llama 370B and Qwen 72B: video memory is required 80GB or more!


More importantly: when memory runs out, the program does not slow down, it crashes outright. It aborts with a "heap out of memory" error and will not install or run at all.
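That failure mode comes from the V8 JavaScript engine underneath Node.js: allocation beyond V8's heap limit aborts the process ("FATAL ERROR: ... JavaScript heap out of memory") rather than degrading gracefully. A small sketch of how to inspect that limit (the `openclaw.js` filename in the comment is a hypothetical placeholder):

```typescript
import * as v8 from "v8";

// V8 enforces a hard heap cap; exceeding it aborts the process with
// "FATAL ERROR: ... JavaScript heap out of memory". The current limit
// can be read at runtime via the standard v8 module:
const limitMB = Math.round(v8.getHeapStatistics().heap_size_limit / 1024 / 1024);
console.log(`V8 heap limit: ${limitMB} MB`);

// The cap can be raised only at launch, e.g.:
//   node --max-old-space-size=8192 openclaw.js
```

This is why "add more RAM" is a hard requirement for such tools rather than a performance tweak.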


03


| Memory prices are accelerating, and the market is only at "halftime"

The explosion of demand from AI agents has genuinely pushed up DRAM and storage-chip prices.


DRAM (system memory):

In the second quarter of 2026, spot prices of server DDR5 memory surged 50% quarter-on-quarter, and large cloud vendors (especially Chinese ones) have been willing to pay a premium to secure supply.

By the end of February, the contract price of a 64GB server memory module (RDIMM) had risen to $910–920, about 20% above the Q1 average ($800), according to the report.


LPDDR memory for phones and consumer electronics is also expected to rise 40%–50% in the second quarter.

Even HBM3E (high-end AI memory), whose price was originally expected to fall, has now turned slightly upward.


NAND (storage chips):

Enterprise SSD prices are expected to rise 40%–50% quarter-on-quarter in the second quarter; consumer products (such as laptop SSDs) may rise more than 60%, and in pockets of shortage enterprise SSD (eSSD) prices may even double.


Morgan Stanley believes this round of price increases has not yet peaked; the market is still in the middle of the upcycle.


With capacity far tighter than expected, many companies' profit forecasts are still based on stale data. Once the market realizes how tight supply really is, the share prices and earnings of related chip companies could rise sharply, creating clear investment opportunities.


04


| Bet on memory leaders and position in the server ecosystem

For semiconductor stock investors:

SK hynix: 2026–2027 EPS forecasts raised by 24%/32%, target price 1.3 million won (43% upside);

Samsung Electronics: target price raised to 251,000 won, "overweight" maintained;

Micron (MU): not highlighted in the report, but the global DRAM giants benefit in tandem;

Domestic substitution plays: if ChangXin Memory, GigaDevice, and peers can break into the server supply chain, the upside is substantial.


Mapping onto the tech supply chain:

High-core-count CPU makers: AMD (EPYC share above 40%), Intel (surging demand for Sapphire Rapids);

Server OEMs: orders at Inspur, Ningchang, and Supermicro are expected to explode;

Cloud vendors under rising cost pressure: Alibaba Cloud, Tencent Cloud, and AWS may accelerate price increases to pass on costs.


Risk warnings:

If AI agents gain traction more slowly than expected, memory demand could fall back in stages.

The memory-chip capacity-expansion cycle runs roughly 12–18 months; if new capacity comes online in 2027, it could trigger a price correction.



Copyright © 2021 jt-capital.com.cn All Rights Reserved 

Copyright: JamThame capital 粤ICP备2022003949号-1  
