03/05 2026
Foreword:
While Wall Street remains embroiled in debate over whether the AI bubble has burst, Jensen Huang has given the industry a definitive answer with a record-breaking financial report.
Following the release of the financial results, Huang made a statement during the earnings call that is poised to define the next decade of the AI industry: "In this new AI world, computing power is revenue. We have reached the turning point for AI agents."
Author | Fang Wensan
Image Source | Internet
The Brutal Aesthetics in Financial Results: A Complete Reconfiguration of Growth Engines
When dissecting NVIDIA's financial report, the most striking aspect is not the new high in total revenue but the fundamental restructuring of its business portfolio and the absolute dominance of its growth engine.
The performance of the core growth segment can no longer be described as merely "exceeding expectations" but must be characterized as "revolutionary."
In Q4 of FY2026, NVIDIA's data center business achieved $62.3 billion in revenue, a 75% year-over-year increase, accounting for over 91% of total revenue.
This means that all other businesses—once the foundation of NVIDIA's success, including gaming, automotive, and professional visualization—now collectively contribute less than 10% of revenue.
Since the advent of ChatGPT, NVIDIA's data center business has soared from $3.3 billion in Q4 FY2022 to $62.3 billion today, a nearly 19-fold increase in four years.
This quarter, NVIDIA's data center networking business revenue reached $10.98 billion, a staggering 263% year-over-year increase, becoming the most prominent growth curve in the financial report.
Behind this data lies a fundamental shift in customer demand. Cloud providers and AI enterprises are no longer merely purchasing GPU chips sporadically but are instead procuring entire cabinet-level cluster systems that include NVLink interconnect architecture, Spectrum-X Ethernet, and BlueField DPU.
NVIDIA's business has evolved from "selling computing power chips" to "selling ready-made AI superfactories."
What shocks the market even more is its supply chain control, with total supply-related commitments surging from $50.3 billion in Q3 to $95.2 billion, effectively locking down global high-end computing power production capacity for years to come with hard cash.
CFO Colette Kress stated bluntly that the company clearly foresees the Blackwell and Rubin product portfolios generating $500 billion in revenue between 2025 and 2026.
In stark contrast to the explosive growth of AI businesses is NVIDIA's strategic retreat from non-core businesses.
This quarter, the former core pillar—the gaming business—achieved $3.7 billion in revenue, a 47% year-over-year increase but a 13% quarter-over-quarter decline.
Kress also admitted that due to the global memory shortage, NVIDIA must prioritize AI processor production capacity. The gaming business will continue to face supply pressures in FY2027 and may even skip the release of a new generation of gaming GPUs.
Jensen Huang's AI Economics: The Underlying Logic of Computing Power = Revenue
After the financial results were released, the market's primary concern was: How long can cloud providers sustain hundreds of billions of dollars in AI capital expenditure? Is AI's rapid growth just a fleeting bubble?
Jensen Huang provided a resolute answer with a complete "AI economics" framework: As long as computing power continues to drive revenue growth, there is no need to worry about an AI bubble.
The core of this theory is a simple yet closed-loop business formula: Computing power = Token generation = Revenue growth.
Huang stated bluntly that in the new AI world, without computing power, Tokens cannot be generated; without Tokens, revenue cannot grow.
The hundreds of billions in capital expenditures flowing into AI will ultimately translate directly into corporate revenue rather than sunk costs.
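The "computing power = Token generation = Revenue growth" loop can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: the function name and every number (fleet size, per-GPU throughput, utilization, token price) are assumptions for illustration, not figures from NVIDIA's report or any market data.

```python
# Back-of-envelope sketch of the "computing power = tokens = revenue" loop.
# All figures are illustrative assumptions, not NVIDIA or market data.

def annual_token_revenue(gpus: int,
                         tokens_per_gpu_per_sec: float,
                         utilization: float,
                         usd_per_million_tokens: float) -> float:
    """Estimate yearly revenue a fleet could generate from serving tokens."""
    seconds_per_year = 365 * 24 * 3600
    tokens_per_year = (gpus * tokens_per_gpu_per_sec
                       * utilization * seconds_per_year)
    return tokens_per_year / 1_000_000 * usd_per_million_tokens

# Hypothetical fleet: 10,000 GPUs, 1,000 tokens/s each, 60% utilized,
# output sold at $2 per million tokens.
revenue = annual_token_revenue(10_000, 1_000.0, 0.6, 2.0)
print(f"${revenue:,.0f} per year")  # roughly $378 million under these assumptions
```

Under these toy assumptions the fleet generates about $378 million a year, which is the sense in which each unit of deployed computing power maps linearly onto serving revenue.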
Supporting this logic is Huang's firm industry judgment: The turning point for AI agents has arrived.
This marks NVIDIA's third major prediction of AI industry trends, following its shifts from traditional CPUs to GPU computing and from traditional machine learning to generative AI—both of which propelled NVIDIA to exponential growth.
This time, Huang believes the industry is racing full speed toward the AI agent era, which will bring even more astonishing demand for computing power than the previous two transitions.
Traditional generative AI involves users inputting a prompt once, with AI generating content once, resulting in pulsed, one-time computing power consumption.
AI agents, however, are systems capable of autonomous planning, tool invocation, and completing multi-step complex workflows, with Token consumption exponentially higher than traditional generative AI.
Every Token generated can be directly converted into productivity and ultimately monetized.
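The gap between one-shot generation and agent workloads can be sketched numerically. In the toy model below, an agent re-reads its growing context at every step, so total token consumption compounds with step count; the step counts and token sizes are illustrative assumptions, not measurements.

```python
# Illustrative comparison: one-shot generation vs. a multi-step agent loop.
# Step counts and token sizes are assumed for illustration only.

def one_shot_tokens(prompt: int, completion: int) -> int:
    """A single prompt in, a single completion out."""
    return prompt + completion

def agent_tokens(prompt: int, completion: int, steps: int) -> int:
    """Each agent step re-reads the accumulated context and emits a new
    plan or tool call, so context length grows across steps."""
    total = 0
    context = prompt
    for _ in range(steps):
        total += context + completion   # read context, write one step
        context += completion           # step output joins the context
    return total

single = one_shot_tokens(500, 500)           # 1,000 tokens
agent = agent_tokens(500, 500, steps=10)     # 32,500 tokens
print(agent / single)                        # 32.5x under these assumptions
```

Even a modest ten-step workflow multiplies token consumption by more than 30x in this toy model, which is the mechanism behind the claim that agents demand far more computing power than one-shot generation.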
This represents a fundamental reversal in the AI industry's business logic: Previously, computing power was a cost item for enterprises, requiring constant evaluation of its cost-effectiveness.
Now, computing power is a production factor and the core engine of revenue growth. The more computing power invested, the more Tokens generated, the broader the commercial scenarios implemented, and the higher the revenue growth ceiling.
Huang stated that these enterprises are procuring "millions" of Blackwell and next-generation Vera Rubin GPUs to meet the exponentially growing computing power demands of the AI agent era.
From Chips to AI Factories: The Moat Extends Beyond GPUs
This financial report and the next-generation product roadmap tell us: NVIDIA's true competitive moat has evolved from single-point chip performance to a system-level, ecosystem-level, and supply-chain-level full-chain barrier.
The Vera Rubin platform, set to begin mass production in the second half of 2026, is the epitome of NVIDIA's barriers.
Each component is specifically designed for particular segments of AI factories, achieving deep rack-level synergy from the ground up.
The introduction of Vera Rubin essentially represents NVIDIA's dimensional upgrade strike against competitors.
While rivals may imitate a single GPU design or catch up in individual chip performance metrics, replicating this cabinet-level system—comprising six custom chips, a complete network interconnect system, liquid cooling, supply chain coordination, and software ecosystem adaptation—is virtually impossible.
Beyond hardware barriers, over 1.5 million AI models on Hugging Face currently run on NVIDIA's CUDA platform, with over 5 million developers worldwide deeply integrated into this ecosystem.
To reinforce this ecosystem, NVIDIA invested $17.5 billion in various AI startups in FY2026, even acquiring a stake in chip manufacturer Intel.
It spent $13 billion to acquire the core technology and team of Groq, a star enterprise in AI inference chips, precisely addressing its shortcomings in low-latency, high-concurrency inference scenarios.
A computing power cooperation agreement with OpenAI worth up to $100 billion is also nearing completion, firmly binding NVIDIA to the most critical model vendors of the AI era.
From chip design and cabinet-level system manufacturing to software ecosystem construction and full-industry-chain investment, NVIDIA has become the "infrastructure general contractor" of the AI industrial revolution and the "global computing power central power plant" of this era.
Undercurrents Beneath the Throne: Unavoidable Challenges
Despite NVIDIA's seemingly unassailable computing power throne, risks and challenges lurk within the lines of this financial report.
① Clients' "counterbalancing instinct" and the threat of in-house alternatives: Amazon has already deployed its self-developed Trainium 2 chips at scale in its data centers, Google is heavily investing in TPUs and commercializing them, and Microsoft and Meta are accelerating their own AI chip programs.
These former top clients are transforming into NVIDIA's future competitors.
Their core demand is clear: more computing capacity, and a reliable second source of supply to counterbalance NVIDIA's absolute dominance.
② Absence in the Chinese market and the rise of local competitors: The financial report shows that after the U.S. government granted H20 chip sales permits to China in August 2025, NVIDIA achieved only about $60 million in H20 revenue.
A small quantity of H200 chips approved for sale to China in February 2026 has yet to generate any revenue.
③ Structural risks from high client concentration: In FY2026, sales to a few key clients accounted for 36% of NVIDIA's revenue, further increasing from the previous fiscal year. Growth remains highly tied to top tech companies.
If capital expenditures by top cloud providers and AI enterprises slow down, NVIDIA's revenue growth will be directly impacted.
Meanwhile, global memory shortages continue to strain NVIDIA's production capacity allocation, while the gaming business's continuous contraction has left it without a crucial business segment to hedge against AI industry cycle fluctuations.
The Next Decade: From the Digital World to the Physical World
If AI agents represent NVIDIA's current growth engine, then Jensen Huang's true ambition is to extend NVIDIA's computing power hegemony from the digital world to the physical world.
During the earnings call, Huang explicitly stated that while the current wave is the explosion of AI agents, the next turning point will be the full-scale implementation of physical AI.
This means introducing AI agent systems into physical scenarios such as manufacturing, robotics, and autonomous driving, which harbor even greater opportunities than the digital world.
NVIDIA laid this groundwork long ago. In early 2026, NVIDIA open-sourced Alpamayo, a vision-language-action platform focused on advanced autonomous driving reasoning.
Targeting the L4 autonomous driving market set to explode in 2027, it aims to provide the underlying AI brain for all autonomous vehicles.
Simultaneously, NVIDIA is expanding its cooperation with industrial software giants such as Siemens, Cadence, and Synopsys.
It integrates its AI computing power infrastructure, Omniverse digital twin technology, and world models deeply into core industrial design and smart manufacturing software.
This means NVIDIA's computing power is penetrating from the digital world—supporting large model training and AI agent operations—into the physical world of automotive manufacturing, factory production, and robotic operations, becoming the underlying infrastructure for intelligent upgrades across the entire real economy.
Huang predicts that by 2029, global AI infrastructure investment will reach $3 trillion to $4 trillion annually, with most of this growth coming from the deep integration of AI with the real economy.
Conclusion:
For the entire AI industry, the dividends of pure large-model wrappers have peaked, and mere parameter competition has long since lost its meaning. The real opportunities lie in the enterprise-level implementation of AI agents and the deep integration of AI with the physical world and the real economy.
The AI industrial revolution has only just begun. And NVIDIA has already built the most critical computing power factory for this revolution.