August 14, 2025
"AGI is not a particularly useful term." On the morning of August 11, OpenAI CEO Sam Altman casually remarked on CNBC's 'Squawk Box,' directly discarding their mission statement that had been displayed on their official website for six years – "Ensuring that artificial general intelligence (AGI) benefits all of humanity" – into the realm of obsolescence.
Three days earlier, GPT-5 had launched, and on the same day media reported that OpenAI was in talks with Thrive Capital on a secondary sale of existing shares at a valuation of roughly $500 billion. On one side, the most expensive AI company in history; on the other, a CEO leading the charge to 'demystify' the industry's totem. The scene reads like a plot twist in a science-fiction film.
Altman's shift in tone was no whim; it is the outward sign of OpenAI shifting gears simultaneously across technology, business, and regulation:
Technology shift: Transitioning from an 'omnipotent deity' to a 'versatile performer,' replacing the binary narrative of 'AGI or not' with a graded capability framework;
Business shift: Racing toward $20 billion in annual recurring revenue (ARR) while committing to 'long-term losses,' using capital leverage to absorb the profit pressure;
Regulatory shift: Actively diluting the concept of 'AGI' to lower Capitol Hill's sensitivity towards 'a private company on the verge of creating a deity.'
Below, we unpack the calculations and the costs behind this 'disenchantment' campaign along three threads.
GPT-5 Has No 'Singularity,' Only 'Scale Marks' – An 'Incremental' Launch Event
On August 8, GPT-5 rolled out to all ChatGPT users. OpenAI's official copy called it smarter, faster, and more useful, especially for writing, programming, and medical Q&A. On social media, however, the prevailing reaction was 'is that it?' Professor Wendy Hall of the University of Southampton put it bluntly: "From all perspectives, this is an incremental upgrade, not a revolution."
If we build a simplified 'capability ruler' from each GPT model's 'parameter count × training efficiency × inference speed' (a toy version is sketched in code after this list), we find:
GPT-3 → GPT-3.5: Scale +1, unlocking Few-Shot prompts;
GPT-3.5 → GPT-4: Scale +2, with significant enhancements in multimodal and logical chaining;
GPT-4 → GPT-5: Scale +1.5, with the one-time pass rate for code increasing from 67% to 81% and medical license exam scores rising from 80% to 87%.
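A toy rendering of this ruler in Python; the per-step increments are this article's illustrative numbers, and the cumulative 'scale' variable is our own simplification, not a metric OpenAI publishes:

```python
# Toy sketch of the article's "capability ruler".
# The per-step increments are the article's illustrative numbers; the
# cumulative score and the "parameters x training efficiency x
# inference speed" framing are simplifications, not an OpenAI metric.

steps = [
    ("GPT-3 -> GPT-3.5", 1.0),   # unlocked few-shot prompting
    ("GPT-3.5 -> GPT-4", 2.0),   # multimodality, stronger chained reasoning
    ("GPT-4 -> GPT-5",   1.5),   # code pass rate 67% -> 81%, med exam 80% -> 87%
]

scale = 0.0
for transition, delta in steps:
    scale += delta
    print(f"{transition}: +{delta} (cumulative scale: {scale})")
```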
Each iteration is an improvement, but none is a vertical leap. Altman acknowledged at an internal all-hands that, by his personal definition ("a system capable of continuous autonomous learning and self-improvement"), GPT-5 still falls short of AGI.
Capability Grading: OpenAI's New Ruler
If 'AGI' is no longer a useful term, what does OpenAI use to tell its story to customers and investors? The answer is 'capability grading.' At the FinRegLab AI symposium last November, Altman first proposed replacing the AGI narrative with an L1-L5 framework:
L1 Chatbot
L2 Reasoner (human-level problem-solving)
L3 Agent (capable of executing multi-step tasks)
L4 Innovator (can assist in scientific discoveries)
L5 Organization (can complete the work of an entire organization)
GPT-5 is internally rated 'L2+': it touches the threshold of L3 in some subdomains but still lacks long-term memory, autonomous planning, and closed-loop interaction with its environment. The advantages of this rhetoric are plain:
For customers: Purchase decisions no longer hinge on an 'omnipotent deity' but rather on implementable 'capability modules';
For regulators: Breaking a single 'leap to the heavens' into five steps alleviates 'singularity panic';
For investors: Breaking down the 'terminal valuation' into 'milestone valuations,' with each level capable of refinancing.
The More Money It Makes, the More Money It Loses – Soaring ARR and Money-Burning GPUs
According to figures obtained by CNBC, OpenAI expects to lose $5 billion in 2024 on revenue of $3.7 billion; and while ARR is poised to top $20 billion in 2025, losses will widen in step. Altman put it bluntly on camera: "As long as the model's capability curve keeps steepening, the rational choice is to keep losing money."
Translated into a financial model (a back-of-envelope sketch follows this list):
Revenue side: Explosive growth in ChatGPT subscriptions ($20/month), API calls ($0.06 per 1k tokens), and enterprise customization (annual fees in the millions);
Cost side: Training GPT-5 burned roughly $630 million in a single run, and inference costs climb steeply as user volume grows;
Capital side: A $500 billion valuation = discounted future cash flows + 'singularity options,' the latter requiring continuous storytelling.
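A back-of-envelope sketch of these figures in Python; only the 2024 numbers and the two price points come from the reporting above, while the subscriber count and token volume are hypothetical placeholders:

```python
# Back-of-envelope sketch of the economics described above.
# Only the 2024 figures ($3.7B revenue, $5B expected loss) and the
# price points ($20/month, $0.06 per 1k tokens) come from the article;
# the subscriber count and token volume below are hypothetical inputs.

revenue_2024 = 3.7e9                      # article: 2024 revenue
loss_2024 = 5.0e9                         # article: 2024 expected loss
implied_cost = revenue_2024 + loss_2024
print(f"Implied 2024 costs: ${implied_cost / 1e9:.1f}B")

# Hypothetical mix behind a $20B ARR year
subscribers = 25_000_000                  # hypothetical subscriber count
subscription_arr = subscribers * 20 * 12  # $20/month plan
tokens_per_year = 1.5e14                  # hypothetical API token volume
api_arr = tokens_per_year / 1_000 * 0.06  # $0.06 per 1k tokens
print(f"Subscriptions: ${subscription_arr / 1e9:.1f}B/yr, "
      f"API: ${api_arr / 1e9:.1f}B/yr")
```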
In other words, OpenAI is spending today's 'losses' to buy the 'right to lose even more,' and as long as the story has not been proven false, capital will keep pouring in.
From 'Non-Profit' to 'Capped-Profit' to 'Shadow IPO'
In 2015, OpenAI was a non-profit; in 2019, it converted to a 'capped-profit' structure; in 2025, it is running a 'shadow IPO': not going public, but letting existing shareholders and new funds trade shares at a $500 billion valuation. The advantages of this approach:
1. Avoiding cumbersome SEC disclosures, keeping training data, algorithm details, and security risks concealed;
2. Escaping the scrutiny of public market quarterly financial reports on 'long-term losses';
3. Creating a sense of scarcity with 'old shares,' anchoring the valuation at the highest level.
Altman chuckled in the interview: "It's really great not to go public." Translated: as long as the private market keeps supplying unlimited ammunition, OpenAI has no reason to step into the public spotlight.
Rewriting 'Creating a Deity' as 'Creating Tools' – Washington's 'AGI Allergy'
In the summer of 2024, the U.S. Senate held three closed-door AI briefings, the theme escalating from 'AI safety' to 'AGI governance.' The senators' paramount concern: if OpenAI announces it has achieved AGI, does a private company then hold 'quasi-national' power?
When Altman testified before Congress, he compared AGI to the power grid, arguing that OpenAI would become infrastructure rather than power itself. The rhetoric clearly failed to dispel the doubts, and so in 2025 we watched OpenAI proactively 'downgrade':
No longer mentioning AGI timelines;
Replacing 'general' with 'capability grading';
Emphasizing 'tool attributes' and downplaying 'subjective consciousness.'
The Mirror of the EU AI Act
The EU AI Act sorts AI systems into four risk tiers: minimal, limited, high, and unacceptable. OpenAI's L1-L5 maps onto these four tiers in a subtle way (one reading is sketched in code after this list):
L1-L2 correspond to 'minimal risk,' requiring only self-declaration;
L3 begins to involve 'high risk,' necessitating third-party audits;
L5 touches on 'systemic risk,' facing the strictest obligations.
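Read as code, the crosswalk looks like this; note that it is an interpretive mapping, not an official correspondence from OpenAI or the EU, and the article does not place L4:

```python
# The article's suggested crosswalk between OpenAI's L1-L5 levels and
# the EU AI Act's risk tiers. An interpretive mapping, not an official
# correspondence from OpenAI or the EU; the article does not map L4.

crosswalk = {
    "L1 Chatbot":      ("minimal risk",  "self-declaration only"),
    "L2 Reasoner":     ("minimal risk",  "self-declaration only"),
    "L3 Agent":        ("high risk",     "third-party audits"),
    "L4 Innovator":    ("unmapped",      "not specified in the article"),
    "L5 Organization": ("systemic risk", "strictest obligations"),
}

for level, (tier, obligation) in crosswalk.items():
    print(f"{level:<16} -> {tier}: {obligation}")
```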
By proactively self-grading, OpenAI shifts the regulatory game to the 'standard-setting' stage rather than waiting to be penalized after product launch.
Who Benefits, Who Gets Hurt?
Beneficiaries
OpenAI itself: The concept downgrade brings regulatory dividends, capital dividends, and customer dividends;
Microsoft: Azure is tied to OpenAI; the more stable the $500 billion valuation, the more stable the cloud revenue;
Application-layer developers: No longer need to guess 'when the singularity will come,' directly invoking APIs for vertical scenarios.
Injured Parties
Competitors: Anthropic and Google DeepMind still fly the 'AGI banner,' and suddenly look dated;
Academia: AGI was once the biggest 'funding magnet' in AI ethics and AI safety; with the concept diluted, some projects now face having to re-justify their funding;
The public: After repeated cries of 'wolf,' awe of AI has faded, potentially feeding a new wave of 'AI fatigue.'
When 'Creating a Deity' Ends, 'Creating Profits' Begins
Eight years ago, OpenAI ignited the public imagination of AGI with a blog post titled 'Playing Dota 2 with Deep Reinforcement Learning'; eight years later, Sam Altman personally tucked that narrative away in the museum.
The AGI banner has been folded and put away, but AI's scythes and hammers still roar day and night. History never rises in a straight line; it zigzags forward through the spiral of 'making a deity - disenchantment - making a deity again.' The next time a company announces it has 'achieved L4 Innovator,' let us hope we still remember Sam Altman's offhand reminder to the CNBC camera in August 2025: "What matters is not the noun, but the scale."