03/06 2026
Two days ago, a 'red lobster' quietly crawled onto Weibo.
But before that, it had already been at the center of the storm.
From its open-source release in January to being accused by Anthropic, forced to rename, impersonated, and finally 'recruited' by OpenAI to operate under a foundation—OpenClaw went through two months of chaos before finally landing on Chinese developers' radar in an official capacity.
But domestic netizens couldn't wait for the official version.
Open Bilibili, Douyin, or Xiaohongshu, and you'll see 'step-by-step tutorials' everywhere, with some claiming, 'Learn this and earn hundreds of thousands a month.' On e-commerce platforms, hands-on deployment guides have become a hundred-yuan business. Oddly, nearly every video comment section asks the same question: 'I've installed it. Now what? What can it do?'
On one side, ordinary users stare blankly at terminals while programmers wince at rapidly burning tokens. On the other side, cloud vendors' servers are selling out.
Two months have passed. It's time to define OpenClaw.
01
Users' 'Computing Power Black Hole,' a 'Lifesaver' for Model and Cloud Vendors
Over the past two months, countless people have been tempted by the ubiquitous OpenClaw promotions.
But when you click on those step-by-step tutorials and prepare to deploy an AI assistant yourself, the first hurdle—one the tutorials can never solve—hits you:
Insufficient hardware.
Unlike ChatGPT's stateless web-based Q&A, OpenClaw is a full-duplex, stateful daemon that requires a sandboxed environment. Remember how it was described when it first emerged? A 7×24 assistant, always online.
This means it must constantly monitor messaging interfaces like Feishu and DingTalk.
It means the security issues exposed over the past two months have pushed every reliable tutorial to recommend running it in a Docker container, packaging up a separate runtime environment, with the memory overhead you'd expect.
More importantly, it means OpenClaw can't do anything on its own. It must mount an underlying large model and pair with various skill plugins to function. Each additional plugin means another thread silently burning your resources in the background.
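Conceptually, that always-on daemon model can be sketched as a small event loop: each IM listener and skill plugin is its own background thread feeding a shared queue, which is why every added plugin costs memory and CPU even when idle. This is a minimal illustrative sketch, not OpenClaw's actual API:

```python
import queue
import threading
import time

# Hypothetical sketch of an always-on agent daemon: each IM adapter runs
# as its own background thread, feeding a shared event queue.
events = queue.Queue()

def im_listener(platform):
    """Stand-in for a long-polling connection to Feishu/DingTalk etc."""
    for n in range(3):  # a real daemon would loop forever (7x24)
        events.put((platform, f"message {n}"))
        time.sleep(0.01)

threads = [threading.Thread(target=im_listener, args=(p,))
           for p in ("feishu", "dingtalk")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(events.qsize())  # 6 events queued: 2 platforms x 3 messages
```

Every extra platform or plugin adds another such thread (or process), which is exactly the "silently burning your resources in the background" effect described above.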
For ordinary users, OpenClaw is a computing power black hole. Home computers lack the hardware to handle it, and going offline means losing the connection. To keep it truly 7×24 online, you can only rent a cloud server.
Thus, during OpenClaw's explosion in popularity, lightweight servers from major cloud vendors were snapped up.
But for cloud vendors, OpenClaw means far more than 'selling a few extra servers'—it's a drought-ending rain. For the past year or two, demand for large model training has surged, but inference-side computing power consumption has remained stagnant. Large enterprises build their own data centers, while small and medium-sized businesses' cloud adoption falls short of expectations. Those low-spec lightweight servers languish in warehouses, unsold. OpenClaw's resource-hungry, memory-intensive, always-online application model became the perfect outlet to digest this inventory.
For model vendors, OpenClaw is a dream come true. Domestic large models boast API calling capabilities but have struggled to find a stable C-end scenario to consume tokens consistently—getting users to download apps during the Spring Festival only to uninstall them afterward isn't sustainable. OpenClaw's agent logic is inherently a token shredder: completing a task requires dozens to hundreds of interactions with the model, consuming tens of thousands of tokens. Using an open-source community project to boost your own model's call volume is a cost-effective strategy by any measure.
So, looking back at the past two months of hype, what appears to be course-sellers' revelry on the surface is actually cloud and model vendors fueling the fire behind the scenes.
OpenClaw isn't just an application—it's a computing power black hole for users, a lifesaver for cloud vendors' inventory, and a token feast for model vendors.
02
The 'Lobster' Crawls into Chat Boxes: WeChat, QQ, and Feishu May Begin to Fade
If you evaluate OpenClaw solely through the lens of marketing hype, it's easy to conclude, 'This software isn't functionally successful.'
Of course, that's not objective. In reality, OpenClaw has achieved a milestone breakthrough in technical architecture—one that may signal the decline of WeChat, QQ, and Feishu.
People are no strangers to chatbots.
Enterprise WeChat (WeCom) opened its bot API long ago, and QQ bots have become a built-in feature. But these bots share a common systemic flaw: ecosystem fragmentation.
Domestic platforms like QQ and Feishu, and international ones like Discord and WhatsApp, use entirely different development frameworks. A bot built for Platform A must rewrite its code for Platform B; skills developed for Platform C can only gaze wistfully at Platform D. Every bot is an island, and every cross-platform migration requires rebuilding from scratch.
This architectural fragmentation stems from a fundamental issue: all IM bots are locked into their respective APIs. Developers aren't building for AI—they're building for a specific IM platform.
OpenClaw is different.
Based on Anthropic's MCP protocol, it decomposes agents into three standardized layers:
● Core: Handles underlying large model calls for reasoning and planning. This is the AI's brain, independent of any IM platform.
● Adapter: The bridge connecting different IM platforms. OpenClaw abstracts all message sending and receiving into unified events—whether QQ, WeChat, or Feishu, inputs and outputs follow standard formats. Platform differences are encapsulated here, so upper-layer logic doesn't care which IM it's interacting with.
● Skill: Modules executing specific tasks. Built on standardized interfaces, a single Skill can be directly reused across all supported IM platforms without modifying a single line of code.
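The decoupling described by these three layers can be shown in a toy sketch. The class names here are illustrative, not OpenClaw's real interfaces: each Adapter normalizes its platform's raw message into one unified event format, so a Skill written once runs unchanged everywhere.

```python
from dataclasses import dataclass

# Hypothetical three-layer sketch: Adapters normalize each IM's message
# into one Event format, so Core and Skills never see platform details.

@dataclass
class Event:          # the unified message format all layers agree on
    platform: str
    user: str
    text: str

class QQAdapter:
    def to_event(self, raw):       # raw dict shape is QQ-specific
        return Event("qq", raw["sender"], raw["content"])

class FeishuAdapter:
    def to_event(self, raw):       # raw dict shape is Feishu-specific
        return Event("feishu", raw["open_id"], raw["msg"]["text"])

def echo_skill(event: Event) -> str:
    """A Skill sees only Event, so it reuses across every platform."""
    return f"[{event.platform}] you said: {event.text}"

qq_reply = echo_skill(QQAdapter().to_event(
    {"sender": "u1", "content": "hi"}))
fs_reply = echo_skill(FeishuAdapter().to_event(
    {"open_id": "u2", "msg": {"text": "hi"}}))
print(qq_reply)   # [qq] you said: hi
print(fs_reply)   # [feishu] you said: hi
```

The design point is that platform differences are confined to the Adapter layer; adding support for a new IM means writing one new adapter, never touching the Core or any Skill.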
The essence of this architecture is the first-ever complete decoupling of AI capabilities from IM platforms.
From now on, developers no longer build 'a bot for WeChat' or 'another bot for Feishu.' Instead, they develop a set of skills for OpenClaw and let it run automatically across all IMs.
This means users face the same AI assistant whether they open WeChat, QQ, or Feishu—with identical memories, skills, and conversation contexts. Unfinished tasks on WeChat today can continue seamlessly on Feishu tomorrow.
More critically, when all IMs become AI entry points, user logic for choosing IMs fundamentally reverses.
IMs used to be containers for relationships and ecological moats. You stayed on WeChat because friends were there; you opened Feishu because work demanded it. IM platforms controlled user entry points, with AI as a mere appendage.
But when AI truly achieves seamless cross-platform roaming, entry point weight shifts toward AI. Users no longer care 'which IM I'm using to talk to AI'—they only care 'can I reach my AI anytime, anywhere.' IMs gradually devolve into mere displays and microphones, becoming pipelines.
History repeats itself: telecom operators lived through this when WeChat emerged, made SMS and calls obsolete, and reduced operators to 'pipelines.' Today, the same may happen to IMs: once AI transcends all IM platform boundaries and achieves 'connect once, available everywhere,' what flows through IMs' moats will no longer be user relationships but AI conversation streams and workflows.
03
In This Hidden War, There Are No BATs
Of course, domestic AI giants couldn't sit idly by as OpenClaw caught fire. But the ones charging ahead aren't the BATs.
Scan the key players in this arena—Moonshot AI, MiniMax, StepFun, DeepSeek—and you'll notice a fact: in this war, the BATs are no longer the obvious protagonists.
A covert power shift is underway. Why?
To answer, we must first understand the underlying business logic of agents like OpenClaw.
These products have an inherent trait called the 'agent loop.' Unlike traditional large models' one-off Q&A, an agent must follow a complex recursive process to complete a task: task decomposition → web search → material reading → information gap detection → re-searching → tool invocation → information feedback...
In this process, the agent exchanges dozens to hundreds of messages with the large model per task, consuming tens of thousands of tokens. With premium models like GPT-5.2 or Gemini-3.1 Pro, inference costs for a complex task can run to dozens of dollars. This is the root cause of OpenClaw's reputation for 'burning money' too fast.
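The agent loop described above can be sketched as a simple iteration that keeps calling the model until no information gaps remain, and the token arithmetic shows where the bill comes from. All numbers below are illustrative assumptions, not measured figures:

```python
# Illustrative agent loop: each iteration is one round trip to the model.
def run_task(task, max_steps=100):
    steps, tokens = 0, 0
    gaps = 5                 # pretend the task starts with 5 unknowns
    while gaps > 0 and steps < max_steps:
        steps += 1
        tokens += 800        # assumed prompt + completion per round trip
        gaps -= 1            # each search/read/tool call closes one gap
    return steps, tokens

steps, tokens = run_task("write a market report")
print(steps, tokens)  # 5 4000

# Back-of-envelope cost at an assumed premium rate of $15 per 1M tokens:
# 50 round trips of ~800 tokens is 50 * 800 * 15 / 1e6 ≈ $0.60; with
# hundreds of round trips and long contexts re-sent on every call, the
# effective tokens per trip balloon, pushing a complex task into dollars
# or tens of dollars.
```

The key driver is that context accumulates: every round trip re-sends the growing conversation, so cost grows faster than linearly with the number of steps.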
But in China, this 'money-burning' pain point has transformed into a commercial opportunity for new players.
The opportunity has two ends. On the supply side, after two years of fierce competition, domestic large model companies have slashed token prices to rock-bottom levels. Yet, as noted above, they still lacked a stable consumer-facing scenario that consumes tokens day in and day out.
On the demand side, products like OpenClaw inherently require massive tokens to run workflows, but calling foreign models is too costly at scale. On one side, idle computing power awaits release; on the other, voracious consumption demand goes unmet. The gap is perfectly filled by domestic model vendors offering 'one-click deployment' versions of OpenClaw: they lower usage barriers with self-developed products and reduce operational costs with cheap models, creating a perfect closed commercial loop.
Data proves this path works. OpenRouter's stats on OpenClaw's underlying large model API calls show the top performers aren't OpenAI, Google, or the BATs—but Moonshot AI's Kimi K2.5, MiniMax M2.5, StepFun's Step 3.5 Flash, and DeepSeek V3.2.
In this era of AI capability overflow, OpenClaw's unexpected rise has pointed domestic large models toward a differentiated track: the agent's operational logic dictates a high-token-consumption, high-frequency-interaction scenario. Here, matching OpenAI's SOTA performance is no longer decisive; ultimate cost-effectiveness is the real competitive edge.
And cost-effectiveness wars are never the domain of giants.
The BATs built their moats with search, e-commerce, and social networking: moats filled with user relationships, closed transaction loops, and content ecosystems. But today's new arena hinges on core elements that Moonshot AI and its peers have been honing for the past two years: computing power cost control, model inference efficiency, and open-source ecosystem operation. When the rules of the game shift from 'who has more users' to 'whose tokens are cheaper' and 'whose code is better,' the players at the table naturally change.
In this hidden war, there are no BATs.
04
Conclusion: The Old World Is Gone Forever
Three years ago, when ChatGPT (then powered by GPT-3.5) debuted, few believed it would change the world.
Today, after OpenClaw's breakout, more people ask the same question: 'What do I do with it?'
This scene feels familiar. Large language models trod this path before OpenClaw: geeks saw world-changing potential while ordinary people felt only confusion and alienation. Technology advances while demand stagnates. This is the typical 'technology excess' phase every revolutionary product must traverse.
History proves this rule again and again. When Henry Ford asked what people wanted, the answer was 'a faster horse.' When Steve Jobs unveiled the iPhone, people questioned typing without a physical keyboard. We always tolerate a cumbersome status quo while failing to imagine life reconstructed by automation.
The road isn't paved yet, but OpenClaw is already building cars.
But history also shows that once you experience riding in a car instead of walking, you can never go back.