Price Hikes! Price Hikes! Price Hikes! A Global Computing Power and Storage 'Earthquake' Triggered by AI

March 19, 2026

By 2026, artificial intelligence has evolved from a cutting-edge laboratory technology into an integral part of daily operations across countless industries: companies use AI for data analysis, intelligent customer service, and automated production; individuals use AI for writing copy, editing images, and planning; governments use AI to optimize public services, urban management, and public safety.

Every AI interaction, computation, and data processing relies on computing power and storage. Computing power is the 'brain' of AI, responsible for processing data and running models; storage is the 'memory bank' of AI, responsible for storing data, model parameters, and operational logs.

For more than a decade, cloud computing services have followed the principle of 'getting cheaper the more you use,' while storage chips have consistently dropped in price due to technological advancements and increased production capacity. Both businesses and individuals have grown accustomed to low-cost cloud services and digital products.

However, starting in the second half of 2025, the global market suddenly saw a reversal: mainstream cloud providers such as Amazon AWS, Microsoft Azure, Alibaba Cloud, and Tencent Cloud raised prices for AI computing power, cloud storage, and server leasing, with some products seeing increases of up to 34%. Storage chip giants like Samsung, SK Hynix, and Micron followed suit, with prices for DRAM memory, NAND flash memory, and HBM high-bandwidth memory skyrocketing. Some storage chips saw price increases exceeding 500% within six months, driving up the costs of memory and hard drives for everyday smartphones and computers.

This sudden wave of price hikes is not a short-term market fluctuation or malicious price gouging by manufacturers but a structural transformation of the industry chain triggered by the explosive global demand for AI computing power. AI is like a 'super resource hog,' voraciously consuming computing power and storage resources, leading to a supply shortage in upstream hardware production capacity and soaring costs for midstream cloud services, ultimately affecting the entire market.

Explosive Demand for AI Computing Power: Why Is the World 'Competing for Computing Power'?

Computing power, simply put, is the ability of a computer to process data and run programs, akin to the thinking speed of the human brain—the faster the brain works, the more efficient it is at solving problems.

Traditional computing power is primarily used for basic operations like daily office tasks, web browsing, and video playback. In contrast, AI computing power is high-performance computing designed specifically for training and inference of artificial intelligence models. It must process vast amounts of text, image, audio, and video data, run models with trillions of parameters, and deliver computational speeds, data transfer rates, and hardware performance tens or even hundreds of times greater than traditional computing power.

To draw an analogy, traditional computing power is like a family sedan, suitable for daily commuting; AI computing power is like an F1 race car, requiring extreme speed and performance to handle 'extreme tasks' such as training large AI models and processing multimodal data.

According to data from authoritative institutions such as the China Academy of Information and Communications Technology (CAICT), IDC, and Inspur Information, global demand for AI computing power in 2026 shows explosive growth. Global computing power scale: in 2026, the total global computing power scale reached 12.8 ZFLOPS, a year-on-year increase of 58%. Intelligent computing power accounted for over 80%, officially surpassing general-purpose computing power to become the absolute dominant force in global computing power, marking the industry's full entry into the 'era of intelligent computing power.'

China's intelligent computing power scale reached 1,590 EFLOPS, a year-on-year increase of 72%, the highest growth rate globally. However, there remains a 25%-30% gap in high-end computing power resources, with public intelligent computing centers operating at near-full capacity. Global AI server shipments are expected to reach 3.2 million units in 2026, a year-on-year increase of 35%. The domestic AI server market size will reach RMB 350 billion, a year-on-year increase of over 40%, with the computing power demand of a single AI server being 8-10 times that of a traditional server.

While AI computing power was previously used mainly for large model training, by 2026 AI applications have been deployed at scale, with inference accounting for over 70% of computing power demand and becoming the core of computing power needs. Every user interaction with AI agents, AI assistants, and industry-specific AI applications continuously consumes inference computing power, leading to exponential growth in Token usage (the Token being the smallest unit of data processed by AI). In February 2026, China's weekly AI model Token calls exceeded 5 trillion, surpassing the United States for the first time.
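As a rough illustration of how token volume maps to inference spend, the sketch below multiplies the article's 5-trillion-tokens-per-week figure by a blended per-token price that is purely a hypothetical assumption, not a figure from the article.

```python
# Rough weekly inference-spend estimate from token volume.
# The per-token price below is a hypothetical assumption for illustration;
# the 5-trillion weekly token figure is from the article.

weekly_tokens = 5e12                 # 5 trillion tokens per week (article)
price_per_million_tokens = 2.0       # assumed blended price, RMB per 1M tokens

weekly_spend = weekly_tokens / 1e6 * price_per_million_tokens
print(f"RMB {weekly_spend:,.0f} per week")  # → RMB 10,000,000 per week
```

Even at a nominal price of a few yuan per million tokens, nationwide token volume implies tens of millions of yuan per week in inference cost, which is why inference has displaced training as the dominant demand driver.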

Global AI-related spending is expected to reach USD 480 billion in 2026, a year-on-year increase of 33%. The total capital expenditure of the world's top eight cloud providers will exceed USD 600 billion, a year-on-year increase of 40%, all allocated to the construction of AI computing centers, servers, and storage devices.

Jensen Huang, CEO of NVIDIA, predicted at the 2026 GTC conference that by 2027, the global market demand for AI computing power-related products and services will reach USD 1 trillion, doubling the 2026 forecast and underscoring the frenzied demand for AI computing power.

Since the advent of ChatGPT in late 2022, global tech giants and AI companies have been embroiled in a 'large model arms race.' Companies like OpenAI, Google, Microsoft, Baidu, Alibaba, and ByteDance have continuously launched larger and more powerful models, scaling from hundred-billion-parameter to trillion-parameter and even ten-trillion-parameter models.

Training a trillion-parameter model requires tens of thousands of high-end AI chips, months of continuous operation, and hundreds of millions of kilowatt-hours of electricity, with computing power demands tens of thousands of times greater than traditional software. Moreover, large models require continuous iteration and optimization, meaning the training process is not a one-time event but a long-term, repetitive cycle, further driving up computing power demand.
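The scale of such a training run can be sanity-checked with the widely used "FLOPs ≈ 6 × parameters × training tokens" rule of thumb for dense models. Every input below (token count, per-GPU throughput, utilization) is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope training-compute estimate using the common
# "FLOPs ≈ 6 × parameters × training tokens" heuristic.
# All inputs below are illustrative assumptions, not figures from the article.

def training_gpu_days(params, tokens, gpu_flops=1e15, utilization=0.4):
    """Estimate GPU-days to train a dense model.

    gpu_flops: peak FLOP/s of one accelerator (1e15 = PFLOP/s class).
    utilization: fraction of peak throughput realistically sustained.
    """
    total_flops = 6 * params * tokens
    seconds = total_flops / (gpu_flops * utilization)
    return seconds / 86_400  # seconds -> days

# A hypothetical 1-trillion-parameter model trained on 10T tokens:
days = training_gpu_days(params=1e12, tokens=1e13)
print(f"{days:,.0f} GPU-days")  # ~1.7 million GPU-days on a single GPU
# Spread over 20,000 GPUs, that is roughly 87 days of continuous training.
```

Under these assumptions the estimate lands squarely in the regime the article describes: tens of thousands of high-end chips running for months per training cycle.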

If large model training represents a 'one-time computing power explosion,' then the implementation of AI applications signifies sustained computing power consumption. By 2026, AI will have penetrated every industry: On the consumer side, AI chatbots, AI painting, AI writing, and AI office assistants will see billions of daily interactions. On the enterprise side, AI will replace manual labor in smart manufacturing, smart logistics, smart finance, and smart healthcare for data processing, process optimization, and quality inspection. At the industry level, autonomous driving, smart cities, and industrial internet will require real-time processing of massive sensor data.

These applications all demand real-time inferencing computing power, much like how smartphones constantly require battery power. AI applications constantly need computing power support, transforming computing power demand from a 'training need for a few enterprises' into a 'universal need for society as a whole.'

By 2026, AI will have evolved from single-modal text processing to multimodal processing integrating text, images, audio, video, and 3D models, with the data volume processed in a single interaction dozens of times greater than that of pure text. Simultaneously, the number of AI agents will explode, with agents autonomously completing tasks, accessing data, and iterating. The Token consumption per agent task will be 5-10 times that of a traditional AI conversation, and up to a hundred times in complex scenarios, further exacerbating the computing power shortage.

With explosive demand growth and supply unable to keep pace, the world faces a computing power shortage: Delivery cycles for high-end AI chips (NVIDIA H100, H200, AMD MI300) have extended to 6-12 months, making them unattainable even with money. Demand for racks in intelligent computing centers far exceeds supply, with AI computing resources at leading cloud providers consistently operating at full capacity, forcing new customers to wait in line. Small and medium-sized enterprises (SMEs) cannot afford to build their own computing power infrastructure and must rely on cloud services, further driving up demand for cloud computing power.

The computing power shortage has directly become the core trigger for price hikes in cloud services and storage chips—insufficient computing power necessitates the expansion of servers and storage, but with hardware costs soaring, price increases are inevitably passed on to the market.

The Underlying Logic Behind the Shift from 'Getting Cheaper with More Usage' to 'Across-the-Board Price Hikes'

From the second half of 2025 to 2026, the global cloud services market broke its decade-long tradition of 'only decreasing, never increasing' prices and entered a comprehensive price hike cycle.

Among overseas providers, Amazon AWS took the lead in raising prices for EC2 machine-learning instances, cloud storage, and server leasing, with some AI computing power services seeing increases exceeding 20%. Microsoft Azure raised prices for AI inference and cloud databases by 15%-30%. Google Cloud followed suit, increasing prices for high-end computing power resources by over 25%.

Domestically, in March 2026, Alibaba Cloud announced price hikes for AI computing power, object storage, and GPU server leasing, with maximum increases reaching 34%. Tencent Cloud, Baidu Intelligent Cloud, and Huawei Cloud subsequently followed, adjusting prices for AI-related cloud services, with core computing power products generally seeing increases of 10%-25%.

In their price hike announcements, cloud providers cited the same reasons: explosive demand for AI computing power, soaring upstream hardware costs, and substantial increases in infrastructure investment. It is not that cloud providers want to raise prices but that cost pressures have become unbearable.

The three core reasons for cloud service price hikes are as follows. Soaring upstream hardware costs, with chip, storage, and server prices doubling: the core infrastructure of cloud services comprises servers, AI chips, and storage devices, whose prices surged in 2025-2026. Prices for high-end GPU chips rose by over 50%, with supply falling short of demand and cloud providers forced to compete for purchases at high prices. Prices for HBM high-bandwidth memory, DDR5 memory, and enterprise-grade SSDs rose by over 60% within six months, with some products seeing increases exceeding 500%. An AI server costs 5-10 times as much as a traditional server, with the procurement cost of the thirty-two 64 GB memory modules in a single H100 server alone exceeding RMB 300,000.

Hardware costs account for over 60% of cloud providers' total investments. With hardware prices doubling, cloud providers' operational costs have skyrocketed, necessitating price hikes to cover costs.

To meet AI computing power demand, cloud providers must build intelligent computing centers at scale, and their construction costs are over 10 times those of traditional data centers: traditional data center cabinets have a power capacity of 6-8 kW, while AI intelligent computing center cabinets reach 50-120 kW, 10-20 times higher. A 1 GW computing power cluster consumes approximately 7,000 GWh of electricity annually, equivalent to the annual electricity consumption of a medium-sized city. Power-hungry AI chips require liquid cooling systems, whose procurement and operational costs are 3-5 times those of traditional air cooling. Intelligent computing centers demand larger sites and higher-bandwidth networks, with infrastructure investments reaching tens or even hundreds of billions of yuan.
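The roughly 7,000 GWh figure for a 1 GW cluster can be reproduced with simple arithmetic, assuming the cluster draws about 80% of its nameplate power on average over the year; that load factor is an assumption chosen to show how the cited figure arises.

```python
# Sanity check on the cluster-energy figure: a 1 GW cluster running at
# an assumed ~80% average utilization over a full year.

power_gw = 1.0
hours_per_year = 24 * 365          # 8,760 hours
load_factor = 0.8                  # assumed average draw vs. nameplate

annual_gwh = power_gw * hours_per_year * load_factor
print(f"{annual_gwh:,.0f} GWh per year")  # → 7,008 GWh, matching the ~7,000 GWh cited
```

A medium-sized city of a few million residents consumes electricity on the same order, which is what makes siting and powering these clusters a planning problem in its own right.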

In 2026, China's 'East Data, West Computing' project will see RMB 800 billion in new investments in computing power infrastructure, while global cloud providers' investments in intelligent computing centers will reach astronomical figures, all of which will ultimately be passed on to cloud service prices.

In the past, cloud services operated in a 'supply exceeds demand' market, with cloud providers relying on low prices to capture market share. However, by 2026, the explosive demand for AI computing power has transformed high-end computing power and storage resources from 'general production materials' into 'scarce strategic resources,' entering a 'seller's market.'

By raising prices, cloud providers can both alleviate cost pressures and optimize resource allocation, prioritizing scarce computing power resources for high-value clients to avoid waste. For example, Alibaba Cloud has allocated scarce AI computing power to Token-based businesses, ensuring computing power supply for core operations through price adjustments.

The costs of cloud service price hikes will ultimately be passed down through the market: AI companies will see soaring costs for large model training and inference, with small and medium-sized AI enterprises facing the risk of being 'crushed by computing power costs,' forcing some to cut R&D and lower service quality. SMEs that rely on cloud services for office work and operations will see cloud service expenditures increase by 10%-30%, raising operational costs. Individual users will face higher prices for cloud storage, cloud services, and AI tool memberships, increasing personal usage costs. Enterprises in manufacturing, finance, and retail undergoing digital transformation will see increased cloud computing expenditures, raising digitalization costs.

The Global Storage 'Earthquake' Triggered by AI's 'Capacity Grab'

If cloud service price hikes are the 'surface phenomenon,' then storage chip price hikes are the core root of this computing power crisis. Storage chips are the 'data granaries' of AI—without them, even the most powerful computing power cannot process data or run models. From 2025-2026, the global storage chip market entered a 'super price hike cycle,' with DRAM, NAND, and HBM all seeing price increases, becoming the focus of the global tech industry.

According to data from TrendForce, CFM Flash Market, and SK Hynix, in the first quarter of 2026, contract prices for standard DRAM rose by 55%-70% quarter-on-quarter, a cumulative increase of more than 500% since June 2025, with inventories sufficient for only about four weeks, a historic low. Prices for enterprise-grade and consumer-grade NAND rose in tandem, with first-quarter increases of 40%-55%; Kioxia and Samsung had already sold out their 2026 NAND production capacity. Prices for HBM3 and HBM3E, specialized for AI servers, rose by over 100%, with SK Hynix and Samsung having sold out their 2026 HBM production capacity and orders booked through the end of 2027. SK Hynix's operating profit for fiscal year 2025 reached KRW 47.21 trillion, a 49% profit margin and a record high, making storage chip manufacturers the biggest beneficiaries of this price hike cycle.

The demand for storage chips in AI servers is growing exponentially compared to traditional servers. A single AI server requires 8-10 times the DRAM and three times the NAND flash of a traditional server. An NVIDIA H100 AI server requires 640 GB of HBM, 2-4 TB of DDR5, and 32-132 TB of NAND, placing extremely high demands on the quantity and performance of storage chips. In 2026, global AI server shipments will exceed 3.2 million units, consuming 53% of global memory production capacity and directly squeezing storage capacity for traditional consumer electronics.
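Combining two figures from the article (a thirty-two-module, 64 GB-per-module DDR5 configuration, and a memory bill exceeding RMB 300,000 for a single H100 server) gives the implied per-gigabyte DRAM cost. The arithmetic below is illustrative only and treats the RMB 300,000 as a lower bound.

```python
# Implied per-server DRAM bill, using the article's figures: thirty-two
# 64 GB DDR5 modules costing over RMB 300,000 for one H100 server.

sticks = 32
gb_per_stick = 64
server_dram_cost = 300_000           # RMB, lower bound from the article

total_gb = sticks * gb_per_stick     # 2,048 GB = 2 TB, matching the 2-4 TB range
cost_per_gb = server_dram_cost / total_gb
print(f"{total_gb} GB of DDR5, ≈ RMB {cost_per_gb:.0f} per GB")  # ≈ RMB 146 per GB
```

At roughly RMB 146 per gigabyte before HBM and NAND are even counted, memory alone is a major line item in AI server bills of materials, which is why storage price swings feed so directly into cloud pricing.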

The explosive demand for AI servers has transformed storage chips from a 'supporting role' into a core strategic resource in the AI industry chain, with global storage giants prioritizing storage needs for AI servers.

The production capacity of storage chips is limited, requiring high-end wafer fabs and cleanrooms, with capacity expansion taking 2-3 years and unable to be quickly increased in the short term. Driven by the high profits of AI demand, the three major storage giants—Samsung, SK Hynix, and Micron—have proactively adjusted their production capacity structures: Shifting 70%-80% of their advanced production capacity to high-margin, AI-specific storage products such as HBM, DDR5, and enterprise-grade SSDs. Significantly reducing production capacity for traditional products like DDR4 and consumer-grade NAND, leading to supply shortages and price increases for traditional storage chips.

This 'capacity shift toward AI' strategy has directly caused a supply-demand imbalance across all categories of storage chips, with both AI-specific high-end storage and consumer-grade storage for smartphones and computers in tight supply.

Storage chips are the 'basic staple' of the electronics industry, and the knock-on effects of price hikes ripple across the entire industrial chain. Consumer electronics price increases: Products such as mobile phones, computers, tablets, and solid-state drives have storage costs accounting for 10%-40% of their material costs. The price hikes of storage chips lead to increases in the prices of end-user products, making it more expensive for ordinary consumers to buy mobile phones and computers. Data center costs soar: The data centers of cloud service providers and internet companies require massive amounts of storage chips. The rise in storage costs directly drives up the prices of cloud services and servers.

Automotive electronics prices rise: Autonomous driving and intelligent vehicles require terabyte-scale storage chips. The price hikes of storage lead to increased costs for vehicle intelligence.

Accelerated localization of alternatives: The global storage shortage presents opportunities for Chinese storage manufacturers. Products from companies like Yangtze Memory Technologies (YMTC) and ChangXin Memory Technologies (CXMT) have shifted from being 'backup options' to 'viable choices,' leading to a rapid increase in the market share of domestically produced storage.

No one in the industrial chain, including businesses and individuals, can remain unaffected.

The explosion in AI computing power and the price hikes of cloud services and storage chips are not issues confined to a single industry. Instead, they represent transformations that the global technology industry, real economy, and ordinary individuals must all face, with impacts spanning upstream, midstream, and downstream sectors and covering all aspects of production, life, and consumption.

Upstream storage and chip manufacturers are entering a 'super boom cycle.' Storage chip and AI chip manufacturers have emerged as the biggest winners in this round of price hikes, experiencing explosive growth in revenue and profits and further consolidating industry concentration: Samsung, SK Hynix, and Micron monopolize the global HBM and high-end DRAM markets, wielding unprecedented pricing power. NVIDIA and AMD dominate the high-end AI chip market, with their market capitalizations continuing to climb. Semiconductor equipment and material manufacturers benefit from the expansion of storage and chip production capacities, with orders overflowing.

Meanwhile, the global semiconductor industrial chain is shifting from being 'consumer electronics-dominated' to 'AI computing power-dominated.' Production capacity, technology, and capital are all tilting toward the AI sector, squeezing the traditional consumer electronics industrial chain.

Midstream cloud service providers are transitioning from 'low-price competition' to 'value competition.' Cloud service providers are bidding farewell to the era of 'low prices to capture market share' and entering a value competition phase centered on computing power, technology, and service: Leading cloud service providers are further increasing their market shares by leveraging their computing power resources and technological advantages, while smaller cloud service providers face elimination. Cloud service providers are accelerating the development of self-designed chips, self-designed storage, and intelligent computing centers to reduce their reliance on upstream hardware. Computing power leasing and MaaS (Model as a Service) have become new profit growth points for cloud service providers. By 2026, the Chinese computing power leasing market is expected to reach RMB 260 billion.

The AI application industry is undergoing a brutal shakeout. Leading AI companies, leveraging their financial and computing power advantages, continue to iterate their products and dominate the market.

Small and medium-sized AI companies, burdened by high computing power costs and resource shortages, are forced to exit the market or be acquired. The cost of digital transformation in traditional industries is rising, leading some companies to temporarily halt their AI initiatives and slow down their digitalization processes.

Large enterprises are increasing their investments in computing power to seize the AI initiative. Large technology companies and industry leaders, with the financial resources and capabilities to build their own computing power centers and secure storage production capacities, are turning AI computing power into a core competitive advantage. Internet giants, financial giants, and manufacturing giants are all building their own intelligent computing centers to reduce their reliance on cloud services. They are signing long-term contracts with storage and chip manufacturers to secure production capacities and control costs. They are accelerating the implementation of AI technologies to reduce costs and increase efficiency, offsetting the pressure of rising computing power and storage costs.

Small and medium-sized enterprises are the most direct victims of this round of price hikes. Their expenditures on cloud services and AI tools have increased by 10%-30%, driving up operational costs.

Unable to bear the costs of building their own computing power infrastructure, they can only scale back their AI usage or opt for low-cost alternatives. Some small and micro enterprises that rely on AI are forced to halt their AI-related businesses due to excessive costs.

Traditional industries such as manufacturing, retail, and agriculture, which originally relied on cloud computing and AI for digital transformation, now face significantly higher transformation costs. Expenditures on computing power and storage for intelligent manufacturing and intelligent logistics have increased, lengthening the investment return cycles. Some small and medium-sized enterprises are temporarily halting their digital transformation efforts, slowing down the industry's digitalization progress.

For ordinary individuals, the effects show up in three areas. Digital product price increases: products such as mobile phones, computers, solid-state drives, and USB drives have seen general price increases of 10%-30% due to storage chip price hikes, making it more expensive for ordinary people to buy digital products. Cloud service and AI tool price increases: prices for cloud storage memberships, AI writing, AI painting, and office software memberships have risen, increasing the cost of digital services for individuals. Increased costs for everyday services: the AI operational costs of food delivery, e-commerce, and transportation platforms have risen, with some of these costs passed on to consumers, leading to slight increases in the prices of everyday services.

Moreover, computing power has become a 'new dimension of national strength.' Countries worldwide have recognized that computing power is the national strength of the digital age and are elevating it to the level of national strategy. China is advancing the 'East Data, West Computing' project and increasing investments in intelligent computing centers, domestically produced chips, and domestically produced storage. The United States has revised the CHIPS and Science Act, adding USD 200 billion in computing power R&D investments to support its domestic chip and storage industries. The European Union has released a computing power strategy aiming to achieve 70% self-sufficiency in computing power by 2030.

The 'de-risking' of the global technology industrial chain is accelerating. The shortages of storage, chips, and computing power have made countries worldwide aware of the importance of an autonomous and controllable industrial chain. China is accelerating the replacement of domestically produced storage, chips, and computing power, with companies such as Yangtze Memory Technologies, ChangXin Memory Technologies, and Huawei Ascend rapidly rising. The United States and the European Union are promoting the return of their industrial chains to reduce their reliance on overseas storage and chip production capacities. The global technology industrial chain is shifting from 'global division of labor' to 'regionalization and autonomization,' with its structure being completely reconfigured.

When will the price hikes end? Where are AI computing power and storage headed?

According to predictions from industry institutions and manufacturers, this round of cloud service and storage chip price hikes is expected to continue until the end of 2026 and gradually stabilize by 2027. The core basis for this judgment is as follows: From the second half of 2026 to 2027, the new production capacities of storage giants and chip manufacturers will be gradually released, and the production capacities of domestically produced storage and chips will rapidly increase, gradually alleviating the supply shortage.

On the demand side, by 2027, the growth rate of AI computing power demand will shift from 'exponential explosion' to 'steady growth.' The arms race for large models will become more rational, and the implementation of AI applications will enter a stable phase, slowing down the growth rate of demand. The inflection point for the prices of HBM and high-end storage is expected to occur in early 2028, while prices for traditional storage and cloud services will gradually stabilize in 2027 and no longer experience significant increases.

Manufacturers such as SK Hynix and Samsung have clearly stated that 2026 will be the peak for storage prices and that prices will gradually decline in 2027 as production capacities are released.

The future core formula for the AI industry is: AI capability = (computing power × storage capability × network capability) / power consumption. Computing power, storage, networking, and power will develop in coordination. Storage-compute integration (processing-in-memory) technology will become widespread, reducing data movement and improving computing efficiency. Green power and liquid cooling will become standard, addressing the energy-consumption bottleneck of computing power. Computing power networks will enable cross-regional scheduling, breaking down 'computing power silos.'

China's storage, chip, and computing power industries will rapidly rise. From 2026 to 2028, the market share of domestically produced storage is expected to increase to over 25%. Domestically produced AI chips will achieve substitution in the mid-to-low-end market and gradually make breakthroughs in the high-end market. The global technology industrial chain will form a 'Sino-US dual-cycle' pattern, with autonomy and controllability becoming the core trend.

With production capacity expansion and technological innovation, AI computing power will become a universal infrastructure for society, just like water and electricity. Computing power costs will gradually decrease, enabling small and medium-sized enterprises and individuals to use AI computing power at a low cost. Computing power services will become more inclusive, driving the deep implementation of AI across various industries. Computing power and storage technologies will continue to iterate, with performance improvements and cost reductions supporting the continuous progress of AI technologies.

Inevitable transformations in the AI era: opportunities and challenges coexist

The explosion in global AI computing power demand, triggering collective price hikes in cloud services and storage chips, is not a short-term market fluctuation but an inevitable signal of the arrival of the artificial intelligence era. AI has become the core driving force for global technological, economic, and social development. Computing power and storage, as the 'infrastructure' of AI, are undergoing a fundamental transformation from 'supporting roles' to 'leading roles.'

This round of price hikes has brought short-term cost pressures to businesses and individuals but has also driven the upgrading of the global technology industry. Storage and chip manufacturers are accelerating technological innovation, cloud service providers are optimizing service efficiency, countries are increasing investments in core technology R&D, and the localization of alternatives is rapidly advancing. In the long run, this transformation will make AI computing power and storage more mature and inclusive, ultimately driving AI technologies into every industry and every household, allowing artificial intelligence to truly benefit humanity.

This is a challenge, but even more so, an opportunity. Those who seize the window of the AI computing power transformation, optimize costs, and deploy technologies early can gain a head start in the AI era. For individuals, there is no need for excessive anxiety: as the industry matures, prices will eventually return to rational levels, and we will all enjoy the convenience and benefits brought by AI.

The curtain has risen on the AI era, and the transformation of computing power and storage has just begun. This restructuring of the global technology industry will profoundly influence the development landscape of the next decade or even two decades, and each of us is a witness and participant in this transformation.
