03/04 2026

After market close on March 2, MiniMax delivered its first annual results since going public. In 2025, the company's total revenue reached $79.038 million, up 158.9% year-on-year, and it served over 236 million users globally.
In the recurring test of turning technology investment into commercialization, MiniMax has demonstrated that large model technology is gradually finding a balance between input and output.
Since the beginning of 2026, with the release of the new-generation model M2.5, the company's core business metric, average daily Token consumption, surged sixfold, with growth in programming scenarios exceeding tenfold. A demand-side revolution triggered by supply-side technological breakthroughs has already erupted.
The subsequent launch of MaxClaw, which rapidly deployed from the cloud to mobile, became a core move in MiniMax's construction of an AI ecosystem.
01
Breakthroughs on Multiple Fronts: Dual Growth in Scale and Quality
In 2025, MiniMax achieved breakthroughs on multiple fronts in technology R&D and commercialization, realizing high growth in both scale and quality.
In 2025, revenue from consumer-facing AI-native products reached $53.075 million, up 143.4% year-on-year, accounting for 67.2% of total revenue. The product matrix, represented by Hailuo AI, Talkie, and MiniMax Agent, quickly captured the global market through a strategy of 'technology democratization + scenario penetration'.
The B-end business demonstrated stronger growth elasticity. Revenue from open platforms and enterprise services reached $25.963 million, soaring 197.8% year-on-year, with its share rising to 32.8%.
The C-end and B-end are not isolated; they form a synergy of 'technology reuse + data feedback': massive user data accumulated on the C-end feeds back into B-end model training, making models more adaptable to industrial needs, while the high gross margins of B-end solutions fund continuous iteration of C-end technology.
In terms of global layout, overseas revenue accounted for over 70%. In 2025, the company's international market revenue reached $57.663 million, accounting for 73.0% of total revenue, fully proving the cross-language and cross-cultural adaptability of MiniMax models, as well as its differentiated advantages in the global AI competition landscape.
By the end of 2025, the company's services had covered over 200 countries and regions worldwide, reaching over 236 million users and 214,000 enterprise clients.
From Europe and the Americas to Asia-Pacific, from the Middle East to Latin America, this vast global network not only brings sustained cash flow but also feeds massive real interaction data back into model iteration.
While expanding its business and revenue scale, MiniMax's profit structure showed significant changes. In 2025, the company's gross profit was approximately $20.079 million, surging 437.2% year-on-year, far exceeding revenue growth; the gross margin rose sharply from 12.2% in 2024 to 25.4%, indicating significant improvement in profitability.
Correspondingly, the company restructured its expenses. In 2025, MiniMax's R&D expenses were $250 million, up 33.8% year-on-year, a growth rate well below that of revenue; marketing investment plummeted 40.3% year-on-year to $51.9 million.
This suggests that MiniMax's technological iteration has entered a period of dividend release, with scale effects gradually emerging. Customer acquisition has also shifted from the early marketing-driven model to growth driven by technological barriers, product strength, and user reputation.
02
M2.5 Technology Breakthrough: Driving Demand Explosion
Over the past two years, the narrative in the AI industry has been almost monopolized by the supply side: stronger models, faster chips, and larger data centers have become the absolute protagonists under the spotlight.
However, rapid technological advances have not translated into widespread industrial adoption: behind the continual pushing of capability limits lie high costs and insufficient adaptability. A large amount of demand remains pent up, waiting for a window of release.
At the beginning of 2026, this window was opened by MiniMax M2.5. With 'large-scale cost reduction' as its core breakthrough, it ignited market demand, enabling Chinese large models to surpass U.S. counterparts in global mainstream platform invocation volumes for the first time.
Technologically, M2.5 scored 80.2% on SWE-Bench Verified, the authoritative AI programming benchmark, approaching the global state of the art. Its self-developed native Agent RL framework, Forge, accelerated text model training by 40 times.
Meanwhile, its hybrid MoE architecture strikes a careful balance between 230 billion total parameters and only 10 billion activated parameters. This preserves top-tier model performance while making private deployment of top-tier large models a reality, opening new commercial space for industries with strict security and customization requirements, such as finance, healthcare, and government affairs.
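The article does not disclose M2.5's routing internals, but the general mechanism behind a sparse MoE layer, where a gate activates only a few experts per token so active parameters stay far below total parameters, can be sketched as follows. This is a generic top-k softmax gate, not MiniMax's actual design; all names and sizes are illustrative:

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Route a token embedding through the top-k of N experts.

    Illustrative only: a generic top-k softmax-gated MoE layer,
    not MiniMax's published architecture.
    """
    # Gating: score every expert, keep only the top_k.
    logits = x @ gate_w                      # (num_experts,)
    top = np.argsort(logits)[-top_k:]        # indices of chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # renormalized softmax over top_k

    # Only the chosen experts run, so active parameters per token
    # are a small fraction of total parameters.
    out = np.zeros_like(x, dtype=float)
    for w, idx in zip(weights, top):
        out += w * np.tanh(x @ expert_ws[idx])   # toy expert: one tanh layer
    return out, top

rng = np.random.default_rng(0)
d, num_experts = 8, 16
gate_w = rng.normal(size=(d, num_experts))
expert_ws = rng.normal(size=(num_experts, d, d))
x = rng.normal(size=d)

y, chosen = moe_forward(x, gate_w, expert_ws, top_k=2)
print(len(chosen), "of", num_experts, "experts active")  # 2 of 16 experts active
```

With 2 of 16 toy experts active per token, only the gate plus two expert weight matrices participate in each forward pass, which is the same trade-off that lets a 230B-parameter model run with roughly 10B activated parameters.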
This momentum directly translated into a 'cost moat' at the commercial level. M2.5 achieved an extreme performance-cost balance through its hybrid MoE architecture—working continuously at an output speed of 100 Tokens per second costs only $1 per hour. This means that $10,000 can support four AI Agents running non-stop for a full year.
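Taking the quoted figures at face value ($1 per hour at a sustained 100 tokens per second; an assumption for illustration, since actual MiniMax pricing may differ), the implied unit cost works out as:

```python
# Derived from the figures quoted above; actual pricing may differ.
tokens_per_second = 100
cost_per_hour = 1.0                                  # USD

tokens_per_hour = tokens_per_second * 3600           # 360,000 tokens per hour
cost_per_million = cost_per_hour / tokens_per_hour * 1_000_000
print(f"~${cost_per_million:.2f} per million output tokens")  # ~$2.78 per million output tokens
```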
Extreme cost-effectiveness directly ignited growth in invocation volumes. Within 12 hours of its release, M2.5 topped the OpenRouter popularity chart and claimed the top spot in invocation volumes within a week, with weekly invocations soaring to 3.07T tokens, surpassing the combined total of Kimi K2.5, GLM-5, and DeepSeek V3.2.
This explosive growth was directly reflected in the company's business metrics: average daily Token consumption of the M2 series text models in February 2026 surged to over six times that of December 2025, with the Coding Plan for programming scenarios growing more than tenfold.
The success of M2.5 is the result of both technological breakthroughs and commercial insight. It puts 'affordable and enjoyable' AI services within reach of global developers and enterprises, offering the industry a replicable path for turning supply-side technological achievements into demand-side commercial adoption.
03
Deploying MaxClaw: Expanding the Ecosystem Landscape
If M2.5 represents MiniMax's deep breakthrough at the model layer, then the rapidly deployed MaxClaw at the beginning of 2026 is its strategic move at the application layer.
From its cloud launch at the end of February to its simultaneous global release on mobile in early March, from model capability output to expert ecosystem construction, MaxClaw is using 'zero barriers' as a fulcrum to push AI Agents from a tech-insider niche toward mass adoption.
On February 26, MiniMax officially launched MaxClaw, a cloud-based AI assistant built on the open-source project OpenClaw, integrated into the MiniMax Agent web portal.
This move directly addressed OpenClaw's biggest pain point: deployment barriers. For most ordinary users attracted by OpenClaw's demo videos, complex operations such as configuring local environments, handling installation errors, and managing API Keys often halted their experience at the first step.
MaxClaw's solution is 'zero deployment, no configuration, no additional API fees.' Users do not need to prepare their own servers or apply for API keys; with a basic subscription to MiniMax Agent, they can enable the cloud-running OpenClaw with one click on the web portal.
On March 3, MaxClaw was simultaneously launched globally on the MiniMax App mobile platform (iOS and Android). Users can directly run OpenClaw on their phones, create and enable experts, assign tasks, and receive deliverables, with all data synchronized in real-time with the web portal.
This means that the capability boundaries of AI Agents have extended from desktop to mobile scenarios, making 'starting workflows anytime, anywhere' a reality.
Simultaneously upgraded with MaxClaw is the Expert 2.0 function of MiniMax Agent, which distills user-describable professional skills into reusable expert Agents. Users need not deal with complex configuration such as Skill, SubAgent, or MCP; by simply describing task goals in natural language, the system automatically completes SOP organization, tool orchestration, and capability configuration.
The launch of MaxClaw is not merely a new product feature but a key step in MiniMax's strategic ascent. With the OpenClaw community having accumulated a massive number of plugins and cross-platform adaptations, MaxClaw's compatibility with OpenClaw quickly opens up application scenarios across industries, turning the open-source community's collective wisdom into its own ecosystem advantage and rapidly integrating ecosystem resources.
More importantly, MaxClaw is validating a new closed loop for large model commercialization. When AI evolves from a 'dialog box' into a 'digital workforce' that truly delivers value, users' willingness to pay for subscriptions, and their stickiness, will rise significantly.
And all of this begins with the daily tens of billions of model invocations—every flow of Tokens is quietly reshaping the boundaries of intelligence.
-End-