The Commercial Quandary of OpenAI: Why the Subscription Model Falls Short for Large Language Models

January 26, 2026

Text and graphics | Tangjie

In a 2024 interview, Sam Altman confessed his aversion to advertisements, expressing significant discomfort at the thought of integrating ads into AI conversations. Fast forward to January 16, 2026, and OpenAI officially announced the trial of advertisements in the free and Go versions of ChatGPT. This stark 180-degree turn in just 18 months underscores a critical insight: the subscription model, widely regarded as the cornerstone of AI commercialization, may not be as robust as many assume.

In China, subscription revenue is frequently viewed as the litmus test for AI commercialization success. Whenever discussions arise about the AI gap between China and the U.S., a recurring argument emerges: Look at the U.S., where AI subscriptions cost tens of dollars monthly, while domestic price wars have driven API call fees to near zero. The conclusion seems straightforward—a market lacking willingness to pay cannot nurture great AI companies, and China has already lost in terms of business models.

This judgment holds some merit. The subscription model offers undeniable advantages: predictable revenue streams, quantifiable user retention, and the certainty that Wall Street craves. After Adobe transitioned from selling software discs to the Creative Cloud subscription model, its market value soared tenfold. Spotify revolutionized the music industry with its subscription model, and Netflix demonstrated that content subscriptions could underpin a market valuation of hundreds of billions of dollars.

However, OpenAI, often touted as the gold standard for AI subscriptions, presents a far more complex financial reality.

By the end of 2025, OpenAI's annualized revenue neared $20 billion. Yet, according to internal documents obtained by The Wall Street Journal, the company projected a net loss of approximately $9 billion for 2025—spending $1.69 for every dollar earned. Deutsche Bank analysts bluntly stated, "No startup in history has operated with losses approaching this scale. We are in entirely uncharted territory."

These figures paint a paradoxical picture: the world's most successful subscription-based AI product is also one of the most heavily loss-making tech companies. Adobe, Spotify, and Netflix all reaped substantial profits from subscriptions, so why has the same model faltered in AI?

More notably, the investors who have poured real money into OpenAI do not appear especially concerned with subscription revenue itself. According to recent reports, OpenAI is seeking a valuation between $750 billion and $830 billion in its latest funding round, a figure that clearly was not derived by applying some multiple to annual subscription revenue. Even at the most aggressive SaaS valuation multiples, $20 billion in revenue cannot justify an $800 billion valuation.
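A rough back-of-the-envelope check using the figures above makes the gap concrete: an $800 billion valuation on roughly $20 billion of annualized revenue implies a multiple of about 40 times revenue ($800B ÷ $20B = 40×). For comparison, mature subscription businesses such as Adobe and Netflix have generally traded at something like 8 to 15 times revenue in recent years, and even the richest high-growth SaaS names have rarely sustained multiples anywhere near 40.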

What underpins this valuation is a different narrative: OpenAI is poised to become the underlying platform of the AI era, the infrastructure for all future intelligent applications. Investors are buying this story, not the $20 monthly membership fee.

This raises a curious double standard: when evaluating OpenAI, the "you will become everything" infrastructure logic is applied; when evaluating other AI companies, the "how much subscription revenue can you generate now" yardstick is used. If even OpenAI's investors do not believe the subscription model can justify an $800 billion valuation, should we not re-examine the yardstick itself?

01 The Structural Dilemma of the Subscription Model

If subscription revenue were truly the correct metric for valuing AI companies, OpenAI should be the most relaxed startup in the world. Its revenue surged from $2 billion in 2023 to nearly $4 billion in 2024 and to an annualized $20 billion by the end of 2025, an astonishing growth rate. No SaaS product has grown faster.

Yet the reality shows that while the company enjoys the steepest subscription growth curve in history, it is also desperately seeking alternatives beyond subscriptions.

A Deutsche Bank report from October 2025 noted that consumer spending on ChatGPT in Europe had nearly stalled since May of that year. Meanwhile, ChatGPT boasted 800 million weekly active users but only around 40 million paying users, a conversion rate of roughly 5%. In other words, for every 20 users served, only one pays, while the other 19 keep using the service and consuming computing resources.

Traditional software also has many free users, but the difference lies in marginal cost. Each additional Photoshop user imposes negligible server load on Adobe; each additional ChatGPT user consumes more GPU time. In January 2025, Altman admitted on social media that even the $200-a-month Pro plan was losing money: the users willing to pay that much are precisely the heaviest users, and heavier usage means higher costs.

Some may argue that inference costs have been declining annually. True, but at least for now, the cost inflection point has not arrived. Moreover, even if costs do decline in the future, the same will apply to competitors, bringing the risk of price wars.

Beyond costs, the subscription model requires moats to lock in users. Adobe's subscriptions are stable because users' workflows, file formats, and operating habits are deeply tied to Photoshop; switching software means relearning everything. Netflix's subscriptions are sticky because it offers exclusive content—if you want to watch "Squid Game," you have to come here.

Conversational AI lacks such binding. ChatGPT, Claude, Gemini, and Grok all present the same single dialog box: the user types a question and gets an answer back, as if four or five near-identical versions of Windows were competing on the market. Long-term users may accumulate personal memories and preferences, but to a new user the differences are negligible. Yet subscription growth depends precisely on a continuous influx of new users.

With limited upside on the consumer side, OpenAI itself is shifting its focus to the enterprise market. In January 2026, Altman stated on X, "People think we're mainly ChatGPT, but our API team is doing exceptionally well." The data does look impressive: in 2025, paid enterprise users surged from 3 million in June to 5 million in August, with over 1 million organizations using OpenAI's technology.

However, the enterprise market has its own barriers. For AI deployments that touch core business operations, data security, compliance audits, and private deployment are hard requirements. Cloud computing succeeded in the enterprise market because it sold infrastructure, and customers retained control of their own data and business logic. AI models are different: data must be fed into them to be useful, which strikes at what enterprises guard most closely. The rise of open-source models in the enterprise market is largely because they sidestep this trust problem, since the model can run entirely inside the customer's own environment.
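To make that concrete, below is a minimal sketch of the pattern behind "bypassing the trust issue": an open-weight model is served inside the company's own network through an OpenAI-compatible endpoint (servers such as vLLM or Ollama expose one), and internal applications talk to that local address, so prompts and documents never leave the premises. The server address, model name, and prompt here are illustrative assumptions, not details from the article.

```python
# Minimal sketch (assumptions, not the article's setup): an open-weight model
# is already being served on-premises behind an OpenAI-compatible endpoint,
# e.g. launched with a command along the lines of `vllm serve <model> --port 8000`.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # hypothetical in-house endpoint
    api_key="unused-for-a-private-server",           # placeholder credential
)

response = client.chat.completions.create(
    model="my-open-weight-model",  # whichever open-weight model is deployed
    messages=[
        {"role": "system", "content": "You answer questions about internal contracts."},
        {"role": "user", "content": "Summarize the termination clause in contract #1024."},
    ],
)

print(response.choices[0].message.content)
```

Note that the calling code is identical whether it points at a hosted API or at this internal server, which is also part of why switching from one model provider to another carries so little friction for developers.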

Altman's changing attitude toward advertising further illustrates the problem. In 2024, he said advertising "made him very uneasy"; by January 2026, OpenAI officially announced the trial of ads in the free version. If subscriptions and enterprise business could truly justify an $800 billion valuation, there would be no need to venture into this previously unsettling territory.

Moreover, embedding ads into AI may be far harder than embedding them into a search engine. Google's search ads work because search results are a list of links: users must click through to find answers, and the ads mixed among those links are relevant and clickable. Google sells the path to answers, and advertisers buy positions along that path. Conversational AI products, ChatGPT included, give the answer directly. When a user asks, "Recommend a pair of noise-cancelling headphones," and the AI responds with "Sony WH-1000XM5, with strong noise cancellation and 30-hour battery life," there is nothing left to click.

A deeper issue is that once users suspect recommendations are driven by brand payments, the AI's persona as an objective assistant crumbles. OpenAI's solution is to place ads only below answers, clearly labeled "Sponsored," without letting them influence the answer itself and without selling user data. It is too early to say whether users will accept this. For now, it looks more like a forced experiment than a clear growth path.

02 AI Is Not SaaS

Over the past decade, the subscription model has achieved great success in the software industry. Companies like Adobe, Salesforce, and Slack have all realized stable growth through subscriptions. The underlying logic is clear: software is a standalone product. Photoshop is a design tool, Salesforce is a customer relationship management system, and Slack is a team collaboration platform—all have clear product boundaries, and users know what they are paying for.

As a startup, OpenAI also followed this logic: ChatGPT is a conversational product, and users subscribe monthly for more conversation allowances and stronger model capabilities.

It seemed logical, but over the past few years we have discovered that AI is spilling beyond the boundaries of a "product."

In 2025, Microsoft embedded Copilot into the entire Office suite. When writing in Word, AI helps refine text; when processing data in Excel, AI assists with analysis. That same year, Adobe deeply integrated Firefly into Photoshop—users select an area, input a few words, and AI generates or replaces content. Google's Gemini permeated every corner of Workspace, from Docs to Sheets to Gmail.

These scenarios share a common trait: users do not come for "AI" itself; AI is merely a capability they invoke while getting other things done. They polish text while writing documents, fill in content while editing images, and draft replies while handling email, using AI in passing, often without even realizing it.

The premise of the subscription model is that users recognize the value of a standalone product and are willing to pay for continuous use. However, when AI becomes an embedded capability everywhere, this premise disappears. You would not subscribe separately for spell-check in Word or auto-cutout in Photoshop—AI is becoming something ubiquitous yet not a product in itself.

This may be the fundamental reason why the subscription model fails in AI. It is not that OpenAI's costs are too high or its moats too weak, but rather that AI as a category is shifting from a product to infrastructure—and infrastructure does not make money this way.

DeepSeek's choice, to some extent, follows this logic to its conclusion.

DeepSeek completely abandoned subscriptions, making its code and weights fully open-source and offering API calls nearly for free. After the R1 model's release in January 2025, DeepSeek gained over 100 million new users in seven days without any advertising spend. It then topped China's AI app monthly active user charts for two consecutive quarters, reaching nearly 190 million MAUs. From the three major telecom operators to China Southern Power Grid and PetroChina, from Lenovo and Huawei to major cloud providers, integrating DeepSeek has become an industry standard.

DeepSeek is betting that when you become the foundation for enough enterprises, commercial value will naturally emerge elsewhere—through enterprise services, custom development, or some currently unclear form.

Alibaba's QianWen takes a different approach. In January 2026, QianWen fully integrated into Taobao, Alipay, Fliggy, and Gaode, allowing users to complete decision-making and payment in a single sentence. AI itself is free, but Alibaba takes a commission on every transaction facilitated by AI. This treats AI as part of its ecosystem's transaction chain rather than a standalone product.

ByteDance's exploration also offers insights. In December 2025, Doubao partnered with Nubia to launch an AI phone. A user could say, "Help me order the cheapest milk tea," and the AI would compare prices across platforms, place the order, and complete payment, all without opening a single app. Although the feature was collectively blocked by the affected apps, it laid bare how disruptive AI could be to the established interests of the mobile internet.

Whether these attempts will succeed remains unknown, but their existence alone indicates one thing: the subscription model, once the "default answer" for AI commercialization, is now being questioned. If AI's essence is infrastructure rather than a product, then measuring it with SaaS yardsticks may have been wrong from the start.

03 Conclusion

History provides a reference.

In 2007, Google announced that Android would be open-source. Wall Street was puzzled: without operating system licensing fees, how would this project make money? At the time, Nokia's Symbian and Microsoft's Windows Mobile both charged fees. Google's response was that the operating system was not the destination but the gateway; the gateway would be low-cost, and profits would come from the ecosystem.

A decade later, Android dominated over 80% of the global smartphone market. Google built a commercial empire far larger than any licensing fees through search, advertising, and the Play Store. Companies that insisted on selling operating systems as products became historical footnotes.

What DeepSeek, Alibaba, and others are doing today follows the same underlying logic—AI is not the destination but infrastructure; infrastructure is low-cost or free, and profits come from the transactions and services it supports.

Of course, Android's success had specific historical context: the mobile internet boom, smartphone adoption waves, and application ecosystems building from scratch. Whether today's AI market is at a similar structural inflection point remains uncertain. Whether DeepSeek's open-source strategy can truly translate into commercial revenue or whether Alibaba's transaction loops can change user habits are all unknowns.

But one thing may already be clear: whichever infrastructure model eventually prevails, the subscription model itself is unlikely to be the future of large language models.

Disclaimer: This article is for learning and communication purposes only and does not constitute investment advice.
