01/22 2026
Musk made a succinct announcement on social media: 'The Grok-dedicated Colossus 2 supercomputer is now operational.' This marks the official arrival of the world's first AI training cluster whose single-site computing capacity reaches the gigawatt threshold.
This silicon giant maintains a continuous power load of 1 GW, surpassing the peak power demand of San Francisco and matching the output of a substantial power plant or a large-scale industrial manufacturing facility.
From the groundbreaking ceremony to full-scale operation, Colossus 2 was completed in a mere one and a half years.
And the appetite of this computing behemoth is still growing: Musk disclosed that the facility is slated to be upgraded to 1.5 GW by April of this year.
Colossus 2 delivers computing power equivalent to 1.4 million H100 GPUs and uses a fully liquid-cooled design. xAI completed this multi-billion-dollar build-out in just six months, an impressive pace of execution.
Unlike OpenAI, which relies on Microsoft Azure, and Anthropic, which depends on Amazon AWS, xAI has opted for fully self-built infrastructure.
This vertical-integration strategy gives xAI significant strategic autonomy. By building its own facilities, xAI can design them from the ground up around its computing loads, rather than retrofitting one-size-fits-all data-center architectures.
On the satellite imagery of the Colossus 2 campus, the rooftop of one building is emblazoned with the words 'MacroHard.' This reflects both Musk's playful humor and his unabashed ambition to challenge established software behemoths like Microsoft.
Musk has openly declared that, given software companies do not manufacture physical hardware, AI could theoretically be used to simulate and potentially replace them.
Musk's ambitions in the computing power race extend well beyond this milestone. He has boldly asserted that within five years, xAI's total computing power will eclipse the combined capacity of all other companies. Specifically, xAI plans to deploy the equivalent of 50 million NVIDIA H100-level AI GPUs over the next five years.
This target far exceeds today's deployments in scale, and because it is denominated in H100-equivalents rather than physical chips, it implicitly assumes substantial gains in per-chip performance and energy efficiency.
According to industry analysis, xAI has already deployed over 450,000 GPUs across multiple global locations and aims to increase the total GPU count to 900,000 by the second quarter of 2026. This GPU investment, exceeding $30 billion, positions xAI as a frontrunner in AI hardware infrastructure deployment.
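As a rough sanity check on these figures, dividing the stated investment by the planned fleet size gives an implied per-accelerator cost. This is a back-of-the-envelope sketch: it assumes the >$30 billion total covers the full 900,000-GPU target, which the source does not state explicitly.

```python
# Implied per-GPU cost from the figures cited above.
# Assumption: the >$30B investment is spread over the planned 900,000 GPUs.
total_investment_usd = 30_000_000_000
planned_gpu_count = 900_000

cost_per_gpu = total_investment_usd / planned_gpu_count
print(f"~${cost_per_gpu:,.0f} per GPU (including surrounding infrastructure spend)")
```

At roughly $33,000 per accelerator, the figure is consistent with a total that bundles networking, power, and facility costs on top of the chips themselves.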
Deploying a gigawatt-scale, high-density computing cluster requires xAI to navigate complex municipal, power, and environmental issues akin to those faced by heavy industry.
In January 2026, the US Environmental Protection Agency highlighted that xAI utilized natural gas turbines to generate power at its Memphis facility to bridge the substantial power gap, with some turbines operating without the requisite air quality permits.
Regulators unequivocally rejected xAI's claim that 'temporary use is exempt from regulation' and, after more than a year of review, issued a final decision finding the practice in violation of environmental regulations.
Power consumption presents another formidable challenge. An H100 AI accelerator consumes approximately 700W, and 50 million processors would require 35 gigawatts of power, equivalent to the typical output of 35 nuclear power plants.
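The arithmetic behind these figures is straightforward to verify. The sketch below assumes the 700 W per-accelerator figure cited above and a nominal 1 GW per nuclear plant:

```python
# Back-of-the-envelope check of the cluster power figures.
# Assumptions: 700 W per H100-class accelerator; ~1 GW per nuclear plant.
h100_tdp_w = 700
gpu_count = 50_000_000
plant_output_gw = 1.0

total_gw = h100_tdp_w * gpu_count / 1e9  # watts -> gigawatts
plants_equivalent = total_gw / plant_output_gw

print(f"{total_gw:.0f} GW, roughly {plants_equivalent:.0f} nuclear plants' output")
```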
Even with more advanced GPU architectures, a 50 ExaFLOPS cluster would still necessitate 4.685 GW of power. This demand will pose a significant challenge to the current energy infrastructure in the United States.
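One way to read the two power figures together: delivering the compute of 50 million H100-class parts within 4.685 GW instead of 35 GW implies a large jump in performance per watt. The ratio below is my inference from the cited numbers, not a figure stated in the source:

```python
# Implied efficiency gain: same compute target, two power budgets.
# 35 GW is what 50M H100s at 700 W would draw; 4.685 GW is the cited
# requirement for a 50 ExaFLOPS cluster on more advanced architectures.
h100_cluster_gw = 50_000_000 * 700 / 1e9   # 35.0 GW
future_cluster_gw = 4.685

efficiency_gain = h100_cluster_gw / future_cluster_gw
print(f"~{efficiency_gain:.1f}x better performance per watt required")
```

Even with that roughly 7.5x efficiency improvement, 4.685 GW of continuous load remains a major demand on the US energy grid.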
In industry comparisons, xAI's computing-power scale shows a sharp upward trajectory. While competitors like Anthropic and OpenAI still have similar-scale plans on their roadmaps for 2027, xAI has already turned such plans into reality.
In this silicon-based expansion race, the key to victory may not reside in algorithmic innovation but in who can secure more power quotas and land permits.