MiniMax Doesn't Need to Please the Open-Source Community

04/22 2026

Not everyone can be DeepSeek.

Author | Jingxing Editor | Gu Nian

"Stop pretending, stop lying, and remove the open-source label from your community homepage."

This comment came from an overseas community user under a post by MiniMax's head of developer relations.

The controversy stems from a dispute over the definition of open source. MiniMax recently and quietly changed the license for its MiniMax2.7 model, replacing the standard MIT license (a permissive open-source software license) with a Modified-MIT license.

The new license overturns MiniMax2.7's core guarantee of unrestricted commercial use: every commercial purpose now requires authorization from MiniMax, with no revenue exemption threshold.

One overseas user commented that this is not an open-source license, and if you deploy the model in any revenue-generating product without written permission, you risk breach of contract and copyright infringement lawsuits.

In response, MiniMax attempted to placate the outrage. Ryan Lee, MiniMax's head of developer relations, posted that the MiniMax2.7 model remains open: it still supports downloads, local deployment, fine-tuning, and the release of non-commercial projects built on the model.

According to the company, the license change was meant to avoid misunderstandings. In past releases, it found third-party platforms offering hosting services in MiniMax's name while severely compromising product quality.

These practices include, but are not limited to, excessive quantization (converting a high-precision model to low precision to speed up inference at the cost of output quality), misuse of chat templates (which can cause the model to mis-parse instructions and give irrelevant answers), and silent swapping (claiming to serve the MiniMax model while secretly substituting a cheaper, weaker model and charging users for it).
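To make "excessive quantization" concrete, here is a minimal sketch of symmetric int8 weight quantization, the kind of trade-off the article describes. This is a toy illustration with invented values, not MiniMax's or any hosting platform's actual pipeline; real serving stacks quantize per-channel or per-group and often more aggressively (int4 and below), which is where quality losses compound.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map the largest-magnitude
    # weight to 127 and round everything else to the nearest step.
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)  # toy weight tensor

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Each weight is off by at most half a quantization step; dropping to
# int4 would make the step (and the error) 8x larger.
max_err = float(np.abs(w - w_hat).max())
print(f"scale={scale:.6f}  max round-trip error={max_err:.6f}")
```

The memory and speed win is real (1 byte per weight instead of 4), which is exactly why a cost-cutting host is tempted to push precision lower than the model can tolerate.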

In short, under the previous license, MiniMax's models were long subject to malicious "bait-and-switch" abuse by third-party platforms, severely damaging the product's credibility.

Third-party platforms abuse MiniMax's models chiefly to cut costs. Rushing out a degraded deployment of a frontier open-source model lowers server bills, and being first to market lets a platform ride the well-known model's brand to capture traffic and pocket the margin, all at the expense of that brand's credibility.

This explains why MiniMax is willing to weather overseas users' outrage to change the MIT license. As the agent craze raises hopes of commercial profit growth across the large model industry, any abuse by a rogue platform could topple the brand's trust dominoes and, with them, the business.

The Exacting Overseas Users Taught MiniMax a Lesson

This public opinion storm erupted in the overseas open-source community, which places greater importance on the definition of open source.

On Hugging Face, the world's largest open-model community, users hold a strict definition of open source. Judging from numerous comments, users do not object to developers restricting commercial rights per se; their dissatisfaction stems from MiniMax restricting commercial use while keeping the MIT label on community pages like Hugging Face.

Among software licenses, the MIT license's terms are among the most permissive. Its core grant is that anyone may use, modify, copy, distribute, sublicense, and even sell the software, with the sole condition that the copyright and permission notice be retained.

This means that large models carrying the MIT label must allow users and third-party platforms to use them even in commercial scenarios, freely modify the model code or weights, and freely deploy them locally, without paying fees or asking the developer's permission.

A more intuitive example: DeepSeek was integrated en masse by domestic cloud vendors and deployed on their servers largely because of its firm, fully open-source strategy, which let it quickly build a vast developer ecosystem and gain a foothold in the market.

This is also the core of the community controversy around MiniMax2.7: the product attracts developers under the banner of open source while the modified license overturns the core promise of full openness.

Meanwhile, according to MiniMax's official statements, MiniMax2.7, the flagship model at the center of this storm, has reached the international first tier. On SWE-Pro across multiple programming languages, its accuracy matches GPT-5.3-Codex; on professional knowledge and task delivery, MiniMax2.7 scores second only to Opus 4.6, Sonnet 4.6, and GPT5.4, surpassing GPT5.3.

What makes developers anxious is the license's vague notion of commercial use. Under the modified MIT license, users must not use MiniMax2.7 for paid services or to support commercial products, services, or operations. An independent developer's paid SaaS product, third-party API hosting, and fine-tuned commercial deployments all risk crossing the line, and the final say on authorization rests with MiniMax.

As a result, users keep asking whether their usage scenarios violate the license, such as using it in company workflows or distilling it and selling the resulting code.

Some overseas users added that, for the community, MiniMax2.7's license change is a step backward, though an understandable one. They hope MiniMax will eventually set commercial-use restrictions based on revenue and user scale, as Kimi and Llama do.

MiniMax is not a pioneer in restricting model commercial use.

For example, Meta's Llama models have allowed commercial use since the second generation, but with restrictions: commercial products with over 700 million monthly active users must apply to Meta for additional authorization; Llama must not be used to train or improve other large models; distributors must prominently state that the service uses Llama; and it must not be used to build competing generative AI products.

Among domestic models, Kimi K2.5 also uses a modified MIT license. Under its terms, if a commercial product's monthly revenue exceeds $20 million or its monthly active users exceed 100 million, a "Kimi K2.5" label must be prominently displayed in the user interface. Otherwise, Kimi K2.5 does not restrict commercial use, modification, redistribution, or local deployment.
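The Kimi clause above reduces to a simple either/or check. The sketch below is only an illustration of that logic: the function and constant names are invented, and the figures are taken from the article's description rather than the actual license text.

```python
# Thresholds as described in the article; names are invented for illustration.
KIMI_REVENUE_THRESHOLD_USD = 20_000_000   # $20M monthly revenue
KIMI_MAU_THRESHOLD = 100_000_000          # 100M monthly active users

def must_display_kimi_label(monthly_revenue_usd, monthly_active_users):
    """Return True if the 'Kimi K2.5' label must be shown in the UI.

    The attribution obligation triggers when EITHER threshold is
    exceeded; below both, the license otherwise leaves commercial use,
    modification, redistribution, and local deployment unrestricted.
    """
    return (monthly_revenue_usd > KIMI_REVENUE_THRESHOLD_USD
            or monthly_active_users > KIMI_MAU_THRESHOLD)

print(must_display_kimi_label(25_000_000, 1_000_000))  # big revenue, small MAU
print(must_display_kimi_label(1_000_000, 50_000_000))  # below both thresholds
```

The contrast with MiniMax's new terms is the point: Kimi's obligation is a mechanical, self-assessable test, whereas MiniMax's "authorization required for all commercial purposes" leaves interpretation entirely with the vendor.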

Zhipu takes a tiered approach: main models like GLM-5 use the MIT license, specific models like CogView4 and GLM-TTS use the Apache 2.0 license, and flagship models like GLM-5-Turbo stay closed-source, with API access only through official platforms.

The difference is that Apache 2.0 places more weight on patent protection, with an explicit patent grant and litigation-termination clauses, and requires developers to mark modified files. This makes Zhipu's strategy relatively flexible: accumulate a developer ecosystem with open models, convert core customers in finance, government, and enterprise, and still seize the agent opportunity to grow API revenue.

Against this industry backdrop, MiniMax has clearly shown more confidence in open-sourcing its flagship models. Its founder, Yan Junjie, has said publicly that, given a second chance, the model should have been open-sourced from day one: every model has a "shelf life" of under a year, and competing for the ecosystem beats short-term monetization, just as OpenAI's core asset is no longer the model but the ChatGPT brand and mindshare.

In other words, latecomers are replaying the story of Android catching up to Apple: only by committing to openness can growth be sustained.

MiniMax's Open-Source Dilemma

Setting aside the conceptual debate over open source, from a commercial perspective MiniMax's tightening of its license is more a move of necessity. Internet giants with cloud-plus-AI footprints and independent model vendors without cloud businesses follow different open-source logics from the start.

In January, at the Tencent Cloud Partner Conference, Tencent Cloud Vice President Wu Qisheng observed that in the first half of the AI era the industry mainly sold resources, monetizing scarce GPU computing power, a simple, crude approach that never went deep into industry applications.

This is the logic behind Google, Alibaba, Tencent, and other giants open-sourcing their large models. For these giants, large models are a traffic entry point for a composite business model.

As Alibaba's Chairman stated at the Dubai World Government Summit:

"Our Qwen large model is open-source, and we also operate a cloud computing business. As users train, develop, and infer models on our infrastructure, we achieve commercial monetization."

Specifically, Alibaba's Qwen series, under a fully open-source strategy, has attracted a vast number of developers and enterprises worldwide and built a massive ecosystem. In turn, training, fine-tuning, and running inference on Qwen consumes vast computing resources and generates needs across data storage, data analysis, and data security.

Alibaba Cloud is the natural recipient of these needs: it not only optimizes inference across its computing clusters but also sells value-added services such as model fine-tuning and data protection, ultimately locking customers into the cloud and driving revenue growth.

If an enterprise wants to switch cloud providers, it must rewrite model logic, re-migrate data, and re-match security policies. Given those switching costs, winning a customer with an open-source model lets the cloud business capture their downstream demand for development tools, data services, and API calls.

Small and medium-sized large model vendors do not have such stories to tell.

In an interview with National Business Daily, Zhipu Chairman Liu Debing said Zhipu's commercial revenue grew rapidly after open-sourcing, because enterprise decision-makers would only recognize the technology and move to cooperation after getting hands-on with the open-source models.

Compared with the giants' heavy-asset model of building their own "utilities," small and medium vendors spend heavily every year leasing computing power to train models, and their business revolves entirely around the model itself. This forces them to abandon a purely closed-source route: only by building market presence can independent vendors survive alongside the giants.

By its financial reporting definitions, MiniMax's revenue splits roughly into B-end and C-end. The former includes token call fees and subscriptions for the M-series models, enterprise-level solutions, and the like; the latter covers multimodal products such as Hailuo AI, Talkie, Xingye, and MiniMax Agent, monetized through membership subscriptions and virtual goods sales.

The difference: C-end revenue is larger but comes from ordinary users with limited budgets and willingness to pay, with a gross margin of about 4.7% in 2025; B-end gross margins were far higher, around 70% in the same period.

As the agent craze sweeps the market, MiniMax is shifting from a C-end focus to a balance of B and C ends. At the annual financial report conference call in March, MiniMax CEO Yan Junjie stated that the company's value is determined by intelligence density multiplied by token throughput.

In other words, similar to Alibaba's comprehensive organizational reform to chase the token economy, MiniMax also does not want to miss out on the high-growth, high-margin commercial value brought by the API opportunity.

Public data shows that in February 2026, the average daily token consumption of the M2 series text model had increased more than sixfold compared to December 2025, with token consumption from the Coding Plan increasing more than tenfold. In the future, using C-end data to feed back into training and monetizing validated capabilities on the B end will become MiniMax's core growth direction.

On the surface, MiniMax's license tightening betrays the community, but the ultimate goal is to protect API services, its core monetization channel. The license acts as a firewall that screens commercial partners, a middle path between product reach and monetization that preserves the flagship model's community reputation.

Against the backdrop of Zhipu GLM-5.1 and Alibaba Qwen3.5-Omni collectively going closed-source, the domestic large model industry is at a crossroads: Should it choose open-source for visibility or closed-source for profit?

As independent vendors, they must keep experimenting to find the answer. For AI companies, though, the model's competitiveness always comes first.

After all, Zhipu's Liu Debing once said that AI is the story of the next 100 years: compared with direct closed-source monetization, staying open to cultivate the industry may matter more. Judging by current actions, however, even Zhipu is starting to lean toward Anthropic's closed-source route.
