01/07 2026

"Chip" Original Content — NO.75
A box of memory is comparable to a house in Shanghai.
Produced by | Chip Trend IC
ID | xinchaoIC
Image | AI-generated
Recently, the most surreal news in the tech world has been this: a single 256GB memory module priced at over 40,000 yuan.
Someone did the math: a box of memory is comparable to a house in Shanghai. If it's a fully configured server packaging box, its total value can easily exceed 2 million or even 4 million yuan. In the current context of plummeting housing prices, this could indeed secure a small, old, and run-down apartment in Shanghai.
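The back-of-the-envelope arithmetic behind that comparison can be sketched as follows. Note that the sticks-per-box counts here are assumptions chosen for illustration, not reported packaging figures:

```python
# Back-of-the-envelope value of a box of 256GB DDR5 RDIMMs.
# Assumption (hypothetical): a server packaging box holds somewhere
# between 50 and 100 modules; the per-stick price is the reported
# ~40,000 yuan from the article.
PRICE_PER_STICK_YUAN = 40_000

for sticks_per_box in (50, 100):
    total = sticks_per_box * PRICE_PER_STICK_YUAN
    print(f"{sticks_per_box} sticks -> {total / 1_000_000:.1f} million yuan")
# 50 sticks lands at 2.0 million yuan, 100 sticks at 4.0 million --
# which is how a single box reaches the price range the article cites.
```

Under these assumed counts, the box value spans exactly the 2 million to 4 million yuan range mentioned above.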
Why would a seemingly simple hardware component become so expensive? Yet if the discussion stops at 'expensive,' the truly important part of this news gets overlooked. The question is not whether a memory module is worth 40,000 yuan, but this: a memory module is now being priced like 'land,' while most people still think of it as a 'computer component.'

When Memory Is No Longer a Component
But the Fuel Tank of Productivity
01
In the era of personal computers, memory modules were among the components least worth discussing seriously. You might agonize over CPU models, compare graphics card computing power, or weigh screen parameters endlessly. But for memory, most people's criterion boiled down to a single phrase: 'good enough will do.' It was a mere accessory in configuration lists, a supporting actor in performance narratives. That understanding was fine in the past, but it is rapidly becoming obsolete in the AI era.
First, it's essential to dispel the illusion: this 40,000+ yuan memory module is not an ordinary DDR5 for your home computer but a 256GB DDR5 RDIMM—a type of server-specific memory. It features ECC error correction, registered buffers, and stricter stability and consistency requirements, designed for long-term operation under high loads. These technical characteristics do raise costs, but they alone cannot explain such a dramatic price surge.
The real premium logic stems from a clear supply-demand mismatch.
Right now, companies like OpenAI, Google, and Meta, which are frantically training large models, along with domestic participants in the 'hundred-model war,' are the most extreme demanders of high-capacity memory. In these scenarios, memory is no longer an 'auxiliary computing resource' but the critical channel determining whether computing power can be unleashed. For computing clusters composed of tens of thousands of H100 or H200 GPUs, memory resembles a 'fuel tank': no matter how powerful the engine, it can only idle if the tank isn't big enough.
From a supply chain perspective, this 40,000-yuan memory module isn't the result of 'the entire industry raising prices together' but reflects highly concentrated pricing power. The manufacturers capable of stably mass-producing high-capacity DDR5 RDIMMs remain few and highly concentrated globally. Server memory isn't a component where suppliers can be switched at will; once integrated into a platform's validation system, switching costs are extremely high, and supply stability and long-term consistency are often prioritized over single-point performance. This structure causes the price elasticity of high-end memory to be extremely amplified during rapid demand surges.
A more subtle factor lies in HBM (High Bandwidth Memory) capacity occupation. Within the industry, a silent resource grab is underway: companies like Samsung and SK Hynix face a painful choice of whether to allocate their limited wafer capacity and advanced packaging capacity (the TSV process) to highly profitable HBM or to high-capacity DDR5. The practical choice is direct: to meet NVIDIA's GPU demands, manufacturers have had to switch significant portions of their DDR5 production lines over to HBM. This has led to a global structural shortage of high-density DDR5 memory chips, which explains why roughly half of the price surge over the past four months has been paid for 'scarcity.'
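The capacity trade-off described above can be illustrated with a toy allocation model. Every number here is hypothetical, chosen only to show the mechanism, not to reflect actual fab economics:

```python
# Toy model of the wafer-allocation trade-off between HBM and DDR5.
# All figures are hypothetical illustrations, not real fab data.
TOTAL_WAFERS = 100_000  # assumed monthly wafer starts for one fab

def ddr5_supply(wafers_to_hbm: int) -> int:
    """Wafers left for DDR5 after HBM takes its share of the same lines."""
    return TOTAL_WAFERS - wafers_to_hbm

# As HBM demand pulls wafers away, DDR5 supply shrinks even while
# AI-server demand for DDR5 is rising -- a structural shortage.
for share in (0.2, 0.4, 0.6):
    hbm_wafers = int(TOTAL_WAFERS * share)
    print(f"HBM share {share:.0%}: DDR5 wafers left = {ddr5_supply(hbm_wafers):,}")
```

The point of the sketch is simply that HBM and high-capacity DDR5 draw on the same fixed pool of wafer and packaging capacity, so growth in one directly shrinks the supply of the other.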
So this isn't the greed of consumer electronics; it's the greed of a 'means of production.'
In this sense, memory is beginning to exhibit attributes highly similar to 'land.' Land is expensive not because of its inherent refinement but because it possesses three features: it cannot be replicated, it determines production ceilings, and whoever occupies it first can expand first.
When Basic Resources Skyrocket,
Does the Shadow of a Bubble Appear?
02
High-end memory is gradually meeting the characteristics of 'new land'—advanced processes and packaging technologies create invisible barriers to rapid capacity expansion, with yields, stability, and compatibility impossible to replicate quickly with capital. Greater memory capacity and bandwidth directly determine how 'large' models can grow, whether they can support longer contexts and more complex reasoning. When cloud providers and leading model companies begin locking resources in advance, memory is priced not by 'current demand' but by 'future fear.'
In the AI era, computing power isn't merely used for 'calculation'; it defines who can participate and who can only watch. When memory transforms from a configuration item into a barrier, its price inevitably detaches from everyday consumer experience and enters a different narrative logic.
Because of this, discussions about the 40,000-yuan memory module quickly shifted from 'is it expensive?' to a more dangerous question: Is this the eve of a bubble?
History offers unsettling parallels. Almost every technological bubble, before truly bursting, has displayed a similar signal—basic resources surge first. This isn't because demand has fully materialized but because everyone is betting early on an unrealized future.
In today's context, the price curve of high-end memory does exhibit familiar traits: the surge has clearly exceeded the downstream market's short-term absorption capacity, cloud providers and AI companies show clear motives to stockpile, and model roadmaps are still evolving rapidly. If training paradigms shift, some seemingly rigid demands could collapse swiftly.
Because of this, many industry insiders cautiously argue that we're witnessing an acceleration of the Matthew effect at the computing power level: computing resources are rapidly concentrating at the top, and future AI innovation may no longer belong to garage-based geeks but only to 'infinite-budget' behemoths. When a basic resource starts being priced by 'faith,' history tends not to be gentle.
But another perspective cannot be ignored. Some argue that attributing memory price hikes to bubbles alone underestimates structural changes. Bubbles are characterized by 'no one using them,' yet the reality is the opposite: AI isn't a short-term fad but a productivity reconstruction. Memory, unlike GPUs, has a longer replacement cycle and stronger path dependency. Real demand doesn't stem from conceptual hype but from long-term consumption of stable computing power during scaled deployments. In this view, the issue today isn't 'is anyone using it' but rather—too many are using it.
These two judgments form a rare standoff, but what's truly noteworthy is their shared premise: memory is now implicitly regarded as an indispensable core resource of the computing power era. The disagreement lies not in its importance but in whether it's being overvalued prematurely. This debate itself signifies that we've entered a new phase, one where basic computing resources are no longer mild or inexpensive.
When Computing Power Is Priced Like Land,
The Real Barriers Are Just Emerging
03
Regardless of which judgment you lean toward, one reality is becoming clear: a 40,000-yuan memory module is essentially a 'computing power tax' levied at the dawn of the computing power era.
This cost won't directly appear in an ordinary consumer's shopping cart but will be distributed, passed on, and absorbed in more hidden ways: through cloud service pricing, model invocation costs, and startup survival curves. Leading players can absorb fluctuations through scale, but smaller teams must constantly compromise on model parameters, training frequency, and product boundaries. Computing power is shifting from a tool envisioned for 'technological democratization' into an increasingly clear dividing line.
From the perspective of domestic manufacturers, high-end server memory isn't a track where incumbents can simply be 'replaced.' Even in the DDR5 era, the domestic players that have entered mainstream server platforms' validation systems remain concentrated in mid-to-low capacities and specific application scenarios. This isn't a gap in any single technology but a systemic competition involving process maturity, long-term stability, and platform trust. In other words, it's not about 'who can make it' but 'who can deliver consistently and be adopted long-term.'
Because of this, more realistic breakthroughs for domestic manufacturers lie in customized solutions for specific computing scenarios, deep collaboration with local computing platforms, and rediscovering structural opportunities in non-general server areas like edge computing and dedicated accelerators.
On a longer technological roadmap, the industry isn't without buffers. Whether through protocols like CXL to expand memory pools, or through HBM and system-level packaging to redefine the boundary between 'computing and storage,' technological evolution keeps attempting to loosen this highly concentrated resource structure. At least within the visible time window, however, these solutions act more as pressure-relief valves than as answers that overturn the pricing logic.
Returning to the widely cited phrase—'a box of memory is comparable to a house in Shanghai'—it's no longer just industry chatter but a clear alarm. It reminds us: we're entering an era where 'computing power prices everything.' In this era, gold may not be the hardest currency; high-bandwidth memory, top-tier GPUs, and stable computing power supplies are the oil of the new world.

For ordinary people, this 40,000-yuan memory module may never fit into your computer, but the AI models it trains are approaching your future work and life. This isn't merely a price hike; it's a tax levied by the era. Whether this represents a healthy structural rise or an overheating prelude remains too early to conclude. What's certain is that the computing power era is no longer gentle, and we're adapting to its true face.
This may be the most uncomfortable yet authentic part of this news. When computing power begins to be priced like land, it ceases to be a naturally inclusive resource. It will be contested, preemptively occupied, and used to erect barriers. We can certainly hope for technological progress to bring new alternatives and for capacity expansion to eventually ease tensions, but until then, one fact is unavoidable:
When computing power operates under 'land logic,' we've entered an era where nothing is cheap anymore.
Disclaimer:
1. This content is original to Chip Trend IC, with views and information provided for reference only and not constituting investment advice. All cited information comes from publicly available market sources, and we make no guarantees regarding its accuracy or completeness.
2. This article may not be reproduced, copied, published, or cited without permission. For reprints, please contact us.