05/08 2026

In the early hours of Thursday, Anthropic announced a computing power cooperation agreement with SpaceX, securing access to over 300 megawatts of computing capacity. Yet the strategic reasoning behind the move has gone largely unexamined.
Developers were the first to notice that usage limits for Claude had doubled, and peak-period restrictions were lifted. While this was certainly good news, the truly significant development lay elsewhere: xAI’s previously accumulated 550,000 GPUs were operating at a mere 11% utilization. At the same time, Bloomberg reported that SpaceX plans to invest up to $119 billion to construct a 2-nanometer chip factory, codenamed Terafab. Shortly after, Musk confirmed that xAI would be dissolved and merged into SpaceX to form SpaceXAI.
These three concurrent developments point in a unified direction.
Over the past two years, competition among large AI models has resembled an intellectual contest among algorithmic geniuses, with new records set in parameter scale, benchmark scores, and multimodal capabilities. However, as we enter 2026, the underlying logic has shifted. After pushing computing power to the limits of physics and engineering, competition is rapidly descending from the software algorithm layer into a heavy industry battle involving energy, hardware scheduling, and foundational semiconductor manufacturing.
In short, Anthropic’s partnership with SpaceX signifies that large model warfare has officially entered the ‘Heavy Industry Era.’
The so-called ‘AI heavy industry’ means that competition is now firmly anchored in three robust domains: energy efficiency, hardware scheduling efficiency, and foundational semiconductor manufacturing capabilities. Their common trait is that they are not governed by Moore’s Law, nor can they be quickly resolved through financing or talent poaching. They require time, land, electricity, water, and decades of process refinement.
01
11%
To understand the rationale behind Musk’s series of aggressive moves, we must start with the curious figure of 11%.
Both domestic and overseas AI companies frequently complain about insufficient computing power. Yet xAI, the company behind Grok, sits on a massive amount of idle computing capacity.
Over the past year, xAI has pursued AI infrastructure development at a breakneck pace. Its Colossus supercomputing cluster in Memphis completed its initial phase in just 19 days, amassing approximately 550,000 NVIDIA H100 and H200 GPUs. On the global computing power reserve rankings, this represents an overwhelmingly dominant position.
However, acquiring computing power does not equate to effectively utilizing it.
The core metric for measuring AI computing efficiency is MFU (Model FLOPs Utilization). xAI’s MFU stands at a mere 11%. In other words, for every 100 units of training throughput the cluster could theoretically deliver, only 11 are realized and 89 are wasted.
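As a sketch, MFU is simply achieved model FLOPs divided by the cluster's theoretical peak. The figures below (a 1-trillion-parameter model, H100-class peak throughput, the article's 550,000-GPU count) are illustrative assumptions chosen to land near 11%, not reported numbers:

```python
def mfu(model_params: float, tokens_per_sec: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Model FLOPs Utilization: achieved training FLOPs / theoretical peak."""
    # Common approximation: training costs ~6 FLOPs per parameter per token.
    achieved_flops = 6 * model_params * tokens_per_sec
    peak_flops = num_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Illustrative (assumed) numbers: 1T-parameter model, 10M tokens/s,
# 550,000 GPUs at ~989 TFLOP/s peak each (H100 BF16, dense).
print(round(mfu(1e12, 1e7, 550_000, 989e12), 2))  # → 0.11
```

The takeaway is that MFU depends on sustained tokens-per-second, which is exactly what synchronization and I/O stalls erode.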
This is certainly not the fault of Musk or his elite team but rather reflects a universal technological gap in the AI infrastructure field: software stack and network communication bottlenecks in hyperscale clusters. When GPU counts are limited, the issues are manageable. However, once the scale reaches tens or even hundreds of thousands, system complexity climbs exponentially. Data synchronization between cards, network latency, faulty node recovery, and data read/write delays all consume time. Even top-tier InfiniBand networking cannot fully close the gap.
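One way to see why utilization collapses at scale is a toy goodput model: in a synchronous training job, a failure anywhere stalls everyone, so the cluster's effective mean time between failures shrinks linearly with GPU count. Every number below is a hypothetical placeholder, and this is a deliberately crude sketch, not a model of any real cluster:

```python
def goodput_fraction(num_gpus: int, gpu_mtbf_hours: float,
                     restart_hours: float, comm_overhead: float) -> float:
    """Toy model of the useful-work fraction in a synchronous cluster."""
    # A failure on any single GPU stalls the whole job, so the effective
    # cluster MTBF shrinks linearly with cluster size.
    cluster_mtbf = gpu_mtbf_hours / num_gpus
    # Fraction of wall-clock time lost to detecting failures and restarting.
    failure_loss = restart_hours / (cluster_mtbf + restart_hours)
    # The remaining time is further taxed by communication/sync overhead.
    return (1 - failure_loss) * (1 - comm_overhead)

# Hypothetical inputs: 550k GPUs, ~5.7-year per-GPU MTBF,
# 30-minute restart cost, 30% communication overhead.
print(round(goodput_fraction(550_000, 50_000, 0.5, 0.30), 2))  # → 0.11
```

Note how the restart term dominates at this scale: the same per-GPU reliability that is harmless at 1,000 GPUs becomes crippling at 550,000, which is why fault-tolerant checkpointing and fast recovery are foundational-architecture problems, not tuning problems.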
This is also why domestic companies like Moonshot AI and DeepSeek, along with overseas firms like Google, have persisted in investing in foundational architectures for years. Without deep optimization capabilities at the foundational level, piled-up hardware becomes nothing more than power-consuming scrap iron.
For a capital-intensive model, low utilization means the hardware payback period stretches toward infinity even as technological obsolescence accelerates. This is devastating. Hundreds of thousands of top-tier GPUs generate astronomical operating losses from liquid cooling and power systems alone, on top of their steep procurement and depreciation costs.
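The economics here can be illustrated with a back-of-the-envelope payback calculation. All dollar figures below are invented purely for illustration; the point is the shape of the curve, not the numbers:

```python
def payback_years(capex: float, revenue_at_full_util: float,
                  annual_opex: float, utilization: float) -> float:
    """Years to recoup hardware capex at a given utilization rate."""
    # Revenue scales with utilization; power and cooling largely do not.
    net_annual = revenue_at_full_util * utilization - annual_opex
    if net_annual <= 0:
        return float("inf")  # opex exceeds revenue: payback never arrives
    return capex / net_annual

# Hypothetical per-GPU economics: $30k capex, $20k/yr revenue at full
# utilization, $5k/yr for power and cooling regardless of usage.
print(payback_years(30_000, 20_000, 5_000, 0.11))  # → inf
print(round(payback_years(30_000, 20_000, 5_000, 0.90), 1))  # → 2.3
```

Under these assumed figures, an 11%-utilized GPU never pays for itself, while the same GPU at 90% utilization recoups its cost in a couple of years, which is the gap the leasing deal below is designed to close.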
This ‘computing power indigestion’ caused by mega-clusters forces Musk to rapidly adjust his business model.
02
Transformation
Faced with such massive idle assets, divesting and monetizing a portion of the computing power naturally became the most commercially logical damage control strategy. This also serves as the fundamental backdrop for Anthropic’s 220,000-GPU cooperation agreement with SpaceXAI.
Ironically, the relationship between the two partners was previously far from amicable. On xAI’s platform, Musk’s last mention of Anthropic was a scathing attack. However, in the face of commercial interests, the two sides instantly buried the hatchet.
For Anthropic, this deal is a long-awaited reprieve from a prolonged computing drought. Previously constrained by shortages, it had resorted to dumbing down its models, throttling speeds, and even canceling low-tier subscription plans. As a global first-tier model company competing with OpenAI, Anthropic—despite receiving funding and computing support from Amazon and Google—still desperately craves independent, large-scale top-tier computing power for pursuing higher-order models.
After securing full access to Colossus 1’s computing power, Anthropic’s response was exceptionally pragmatic and swift. Claude Code limits doubled, peak-period restrictions lifted, and Opus model API quotas significantly increased. For both B2B and B2C markets, computing power directly translates into enhanced user experience—a potent tool for seizing market share from OpenAI.
For Musk, this deal is equally shrewd.
By leasing GPU clusters to Anthropic, SpaceXAI effectively assumes a role similar to AWS and Microsoft Azure as an underlying cloud service provider. This move not only effectively hedges against hardware depreciation and idle costs but also transforms cutting-edge AI computing power into a stable cash-flow infrastructure business. Before his Grok team’s software capabilities catch up, having top-tier hardware operate at full capacity under others’ management is far more rational than letting it idle in his own data centers.
Moreover, Musk understands clearly that supporting Anthropic objectively provides strong counterbalance to his long-time legal adversary, OpenAI.
03
To the Stars
Dissolving xAI into SpaceX reflects a shift in Musk’s cognitive framework.
Cutting-edge AI R&D has evolved from a pure software and algorithm proposition into a massive systems engineering challenge. Scaling laws remain in effect, with global large model parameter counts approaching the 10-trillion level, driving training and inference computing power requirements to geometric growth. Progress in Earth’s most advanced sciences now demands extreme squeezing of physical resources—electricity, land, even water.
Currently, multiple U.S. tech giants fret over data center electricity supplies. Anthropic even pledged in its official announcement to ‘cover U.S. consumer electricity price hikes caused by its data centers.’ AI’s energy consumption has reached a level where it is now a politically sensitive question of public resources.
SpaceXAI, the earliest mover in space-based computing power, stated bluntly in its announcement: The computing power required to train and run next-generation systems is exceeding what terrestrial electricity, land, and cooling systems can sustain within effective timeframes.
Integrating xAI into SpaceX fundamentally aims to leverage SpaceX’s unparalleled global engineering integration and aerospace transportation capabilities. Space offers boundless solar energy, while the vacuum environment and the deep cold of orbital shadow directly address Earth’s most vexing data center cooling problems. SpaceX’s Starship program, with its low-cost, high-frequency, massive orbital delivery capabilities, naturally becomes the only realistic foundation for realizing this vision.
Dissolving independent xAI is not a retreat but rather a deep physical and organizational binding of AI’s foundational infrastructure with aerospace engineering.
04
Closed Loop
Lending computing power to alleviate immediate financial pressures and exploring space-based computing power to bet on future physical space represent only part of the picture.
The Terafab wafer fab project represents Musk’s ultimate attempt to achieve vertical integration across the entire upstream technology supply chain.
The core reason global AI companies complain about computing power shortages lies in how a handful of companies throttle the entire supply chain: NVIDIA controls chip design and the CUDA ecosystem, while TSMC monopolizes advanced manufacturing capacity. Musk’s semiconductor demands are nearly all-encompassing. Beyond AI model training, Tesla’s autonomous driving, Optimus humanoid robots, and SpaceX spacecraft all require massive volumes of customized advanced chips. Relying on external suppliers means enduring prolonged delivery cycles and exorbitant premiums.
Terafab, initially planned with $55 billion in investment and now reaching up to $119 billion, targets 2-nanometer processes, with planned annual chip output equivalent to 1 terawatt of computing capacity. It is at once a signal to NVIDIA and a direct challenge to TSMC’s foundry dominance.
However, the industry remains skeptical. Money can buy ASML lithography machines and Applied Materials equipment, but that’s just the first step. Yield ramp-up and process refinement typically require decades of accumulation. Nevertheless, this ‘first principles’-driven vertical integration attempt exposes the fierce determination among AI giants to break free from existing semiconductor supply chain constraints.
NVIDIA CEO Jensen Huang remarked while congratulating the SpaceX-Anthropic partnership: ‘The future of AI is NVIDIA.’ Musk’s series of moves suggests a different narrative: only by simultaneously mastering efficient hardware scheduling, securing cheap and sustainable energy, and achieving autonomous control over foundational chip manufacturing can one dominate the AGI era.
Computing power has become the oil of the new era. Mere hoarders lacking strong software digestion capabilities will ultimately be devoured by soaring costs. The continuous evolution of AI models is forcing infrastructure to break free from Earth’s resource constraints and reshape global semiconductor manufacturing landscapes. While algorithms remain crucial, before the laws of physics and commerce, solid heavy industry forms the bedrock supporting AI development.