Computing Power: The Modern-Day Money Printer! OpenAI's Revenue Rockets 10-Fold in Three Years

01/22/2026

When OpenAI's Chief Financial Officer revealed in the company's latest announcement that annual recurring revenue (ARR) had soared to $20 billion, the reverberations were felt across the entire tech industry.

For context, that figure stood at a mere $2 billion just two years earlier. What is even more striking is that the company's computing capacity grew 9.5-fold over the same period, almost exactly mirroring the revenue trajectory.

“Investment in computing power propels research and model capabilities to new heights. Powerful models pave the way for superior products and wider adoption, which, in turn, fuels revenue growth. This revenue then funds the next wave of computing power investment and innovation,” OpenAI outlined its growth cycle in the announcement.
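As a purely illustrative sketch of how such a flywheel compounds, the toy loop below couples compute to revenue with made-up coefficients. None of the numbers are OpenAI figures; they only show the qualitative shape of the cycle described in the announcement.

```python
# Toy model of the compute -> capability -> revenue -> compute flywheel.
# All coefficients are invented for illustration; they are NOT OpenAI figures.

def simulate_flywheel(years=3, compute_gw=0.2, arr_billion=2.0,
                      reinvest_rate=0.5, revenue_per_gw=10.0):
    """Each year, a share of revenue is reinvested into compute capacity,
    and revenue then scales with the enlarged compute base."""
    for year in range(1, years + 1):
        # Assume each reinvested billion dollars buys ~0.1 GW of capacity (made up).
        compute_gw += reinvest_rate * arr_billion * 0.1
        # Assume revenue tracks compute roughly linearly (the article's premise).
        arr_billion = revenue_per_gw * compute_gw
        print(f"Year {year}: ~{compute_gw:.2f} GW of compute, ~${arr_billion:.1f}B ARR")

simulate_flywheel()
```

Because each pass through the loop enlarges the compute base that the next pass builds on, even modest reinvestment rates compound quickly, which is the dynamic the announcement is gesturing at.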

This cycle is helping OpenAI build an increasingly formidable competitive moat in the AI race.

Public data indicates that OpenAI's computing capacity was roughly 1.9 GW in 2025, whereas Anthropic's was only around 0.6 GW.

The gap in capacity is mirrored directly in revenue: the former has reached $20 billion in ARR, while the latter is estimated at roughly $3 billion. The trend exposes a harsh truth for today's AI companies: without ample computing power, it is hard to secure a competitive foothold in the market.
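A naive back-of-envelope comparison using only the figures cited above, and treating ARR as simply proportional to capacity (a big simplification), looks like this:

```python
# Naive revenue-per-gigawatt comparison using only the figures cited in this article.
companies = {
    "OpenAI":    {"capacity_gw": 1.9, "arr_billion_usd": 20.0},
    "Anthropic": {"capacity_gw": 0.6, "arr_billion_usd": 3.0},   # estimate cited above
}

for name, c in companies.items():
    ratio = c["arr_billion_usd"] / c["capacity_gw"]
    print(f"{name}: ~${ratio:.1f}B of ARR per GW of compute capacity")

# The crude ratio differs between the two companies, so capacity alone does not
# explain revenue, but the direction of the gap matches the article's argument.
```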

Industry observers have distilled this relationship into a new axiom: in the AI era, revenue growth is almost an inevitable byproduct of expanded computing power. Behind that axiom, however, lies an equally unforgiving cost structure.

While $20 billion in ARR may look impressive, for OpenAI it is merely the fuel that keeps the growth engine running. The company earns a great deal, but it spends even more.

In October 2025, the research institute Epoch AI released data shedding light on OpenAI's staggering operating costs: in 2024 alone, the company spent $7 billion on computing resources, and that covered only rented cloud capacity, excluding any upfront investment in data center construction.

Partnerships with cloud providers such as Microsoft and Oracle have met short-term computing needs, but over the long haul, building its own data centers has become an unavoidable path for OpenAI. Since late 2024, the company has been working with NVIDIA, Oracle, SoftBank, and other partners to push forward multiple gigawatt-scale data center projects.

The upfront investment required for data center construction is colossal. A typical data center project can run into hundreds of millions of dollars just in hardware procurement, with construction and operational expenses representing ongoing long-term costs.

As model sizes continue to balloon, demand for computing power grows exponentially. GPT-4's training reportedly used approximately 25,000 NVIDIA A100 chips, and the next-generation model may require hundreds of thousands of the latest chips.
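To make that scaling concrete, here is a rough, hypothetical calculation. It starts from the reported ~25,000-A100 figure above and applies an assumed growth in total training compute plus an assumed per-chip speedup for a newer accelerator; both multipliers are illustrative assumptions, not disclosed numbers.

```python
# Back-of-envelope chip-count scaling. The 25,000-A100 figure is the report cited above;
# the growth and speedup multipliers are illustrative assumptions, not disclosed numbers.

gpt4_chips = 25_000          # reported A100 count for GPT-4 training
compute_growth = 30          # assumed growth in total training compute (hypothetical)
per_chip_speedup = 3         # assumed speedup of a newer chip vs. an A100 (hypothetical)

next_gen_chips = gpt4_chips * compute_growth / per_chip_speedup
print(f"Implied cluster size: ~{next_gen_chips:,.0f} newer-generation chips")
# With these made-up multipliers, the implied cluster lands around 250,000 chips,
# i.e. in the "hundreds of thousands" range the reports mention.
```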

Compute demand at this scale means that even with $20 billion in annual revenue, OpenAI must keep seeking new revenue streams to cover the computing bill.

Facing escalating compute demands and financial strain, OpenAI is venturing into another potentially transformative arena: hardware. According to Chris Lehane, the company's Chief Global Affairs Officer, OpenAI plans to launch its first device in the second half of 2026.

Multiple sources indicate that the hardware will take the form of a screenless AI smart pen. It is not merely an expansion of the product line but a pivotal element of OpenAI's revenue-computing power cycle strategy.

Hardware can open new revenue streams for the company while enabling more direct user data collection and model optimization, ultimately forging a complete ecosystem spanning hardware, software, and cloud services.

According to OpenAI's plan, the hardware release timeline has been narrowed from 'within two years' to 'the second half of 2026.' There had been rumors that OpenAI's hardware effort was held back by a scarcity of computing resources, but the firmer timeline now suggests the company may have made significant progress in securing compute.

The hardware business will help OpenAI build a more diversified business model. Today the company's revenue comes primarily from enterprise API services and ChatGPT Plus subscriptions; hardware sales, accessories, and potential subscription services could add a more stable stream of cash flow.

This is not just about selling devices; it is an attempt by OpenAI to redefine human-computer interaction in the AI era. A successful hardware product would let the company engage directly with end users, reduce its reliance on third-party platforms, and give its models their most direct application scenarios.

References:

https://i.ifeng.com/c/8q3yvbQJkbs
