03/09 2026
The clandestine battle ignited by a lobster.

Written by/Haiyue
Edited by/Jin Huo
The AI landscape has been a whirlwind of activity over the past two years, from the conversational frenzy sparked by ChatGPT, to the image-generation craze driven by Midjourney, to the price wars set off by the democratization of computational power. Each wave of innovation has generated widespread excitement, only to leave behind a trail of unmet expectations.
That is, until the emergence of a red lobster—OpenClaw—which broke the cycle of transient hype.
Peter Steinberg, an independent Austrian programmer dissatisfied with the usability of existing AI tools, spent an hour coding a solution and posted it on GitHub. He likely never anticipated that his side project, OpenClaw, would quickly become the "digital workhorse" sought after by developers worldwide.

The path of innovation often escapes its creator's grasp.
OpenClaw's appeal is straightforward: it's not just a chatbot that responds with text; it's an "executor" that gets things done. Grant it the necessary permissions, and it can read your emails, write code, order takeout, or even make purchases on your behalf.
However, delegating power always comes with a cost.
The moment you grant access to your inbox, calendar, system commands, and payment accounts, you're placing your trust in two assumptions: that this seemingly obedient lobster won't be exploited, and that it will never turn against you.
Risk warnings from the Ministry of Industry and Information Technology, a security mishap involving Meta’s security director, and instances of emails being automatically deleted en masse all highlight a common reality: the journey toward the AI execution era is fraught with underdiscussed security risks.
The ripple effects of this lobster extend far deeper than they appear.
Commercial Calculations
On the surface, OpenClaw appears to be an open-source phenomenon: within three months, it surpassed Linux’s three-decade GitHub star count. Tencent offered free installation services, Alibaba promoted one-click cloud deployment, and Xiaomi integrated similar capabilities into its phones and cars.
Major players suddenly rallied around the lobster, moving with urgency as if scrambling for the last tickets on a departing ship.

Understanding their computational power anxiety reveals their motivation.
By 2026, ByteDance, Alibaba, and Tencent plan to spend over $60 billion on computational power. Thousands of AI accelerators flood data centers, consuming resources around the clock. The traditional chatbot model—where users ask questions and the AI responds—cannot sustain these cash-hungry machines. A $20 monthly subscription is a mere drop in the ocean compared to these costs.
OpenClaw perfectly embodies what capital craves: a relentless "computational power drain." Given a complex task, it breaks it down, searches online, invokes software, corrects errors, and retries until successful.
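The decompose-execute-retry cycle described above can be sketched in a few lines. This is a minimal illustration, not OpenClaw's actual code: the names `plan`, `run_step`, and `Step` are hypothetical stand-ins for whatever the real agent uses.

```python
# Hypothetical sketch of an agent's execute-until-success loop.
# All names here are illustrative assumptions, not OpenClaw's real API.

from dataclasses import dataclass

@dataclass
class Step:
    action: str          # e.g. "search", "invoke_tool", "fill_form"
    max_retries: int = 3

def plan(task: str) -> list[Step]:
    """Break a complex task into smaller steps (stubbed out here)."""
    return [Step("search"), Step("invoke_tool"), Step("verify")]

def run_step(step: Step) -> bool:
    """Execute one step; return True on success (stubbed out here)."""
    return True

def execute(task: str) -> bool:
    """Run every planned step, retrying each failure up to max_retries."""
    for step in plan(task):
        for _attempt in range(step.max_retries):
            if run_step(step):
                break        # step succeeded, move to the next one
        else:
            return False     # all retries exhausted; give up on the task
    return True
```

The key point for the "computational power drain" argument is the inner retry loop: unlike a chatbot that answers once, an agent may burn many model calls per step before it succeeds.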
A deeper layer of calculation lies in data.
The internet’s publicly crawlable text is nearly exhausted. Wikipedia, news articles, and academic papers have been repeatedly analyzed. Feeding AI static text alone will never give it the ability to "act." What next-gen models truly lack is data on how humans "act" in the digital world—the full task chains from understanding needs to searching information, invoking tools, filling forms, and completing payments.
This "trajectory data" was once buried in fragmented software and walled-off apps, beyond the reach of even the most aggressive search engine crawlers.
But when OpenClaw deploys on user terminals, it becomes a probe deep into data territory. Every operation, every correction, generates free, high-quality training material for vendors.
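What such "trajectory data" might look like can be sketched as a simple timestamped log of actions and outcomes. The field names and logging function below are assumptions for illustration, not a description of what any vendor actually collects.

```python
# Illustrative shape of one "trajectory" record: the chain from a user's
# intent to the concrete actions taken. Field names are assumptions.

import json
import time

def log_action(trace: list, action: str, args: dict, result: str) -> None:
    """Append one timestamped step to an in-memory trajectory log."""
    trace.append({
        "ts": time.time(),
        "action": action,     # e.g. "search", "fill_form", "pay"
        "args": args,         # parameters the agent chose
        "result": result,     # outcome a future model could learn from
    })

trace: list = []
log_action(trace, "search", {"query": "cheap flights"}, "10 results")
log_action(trace, "fill_form", {"field": "date", "value": "2026-03-09"}, "ok")

# A full trajectory serializes into exactly the kind of training
# material the article describes.
print(json.dumps(trace, indent=2))
```

Each record captures not just what the agent said, but what it did and whether it worked, which is precisely what static web text cannot provide.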
Tesla’s strategy—using millions of cars to collect road data for autonomous driving—is now being replicated in the AI world.
The Fatal Trap Behind the Free Frenzy
"Grant it full permissions, and it’ll handle everything for you"—this is OpenClaw’s most seductive pitch, and its deadliest trap.
Countless users grant OpenClaw bottom-level system permissions, lured by the convenience of this "digital workhorse," forgetting a fundamental truth: there’s no such thing as a free lunch. Every convenience comes at a cost.
The deadliest cost is security. For OpenClaw to act autonomously, it needs access to your email, calendar, system commands, and payment accounts—core permissions that mean handing over the "keys" to your digital life.
The financial publication Institutional Investor explicitly warns: OpenClaw demonstrates the power of AI agents but suffers from fatal security flaws, a governance vacuum, and unpredictability. This "lobster" can earn you money and complete tasks—or steal your data, transfer your assets, or even hijack your computer for uncontrollable actions.

Scarier still, OpenClaw’s open-source nature means anyone can modify its code. A malicious actor could implant viruses or malware, with catastrophic consequences.
While OpenClaw’s open-source model fosters ecosystem growth, it also introduces uncontrollable security risks.
Beyond security, OpenClaw’s "autonomy" raises legal and ethical questions. It can send emails, sign contracts, execute trades, or even make decisions on your behalf—but who bears responsibility when things go wrong? The "lobster keeper"? OpenClaw’s creator? Or the developer who modified the code?
Globally, legal frameworks and governance for AI agents remain nonexistent. Without clear rules, users face a "no recourse" dilemma when disputes arise.
Ironically, many jump on the "lobster-keeping" bandwagon without understanding OpenClaw’s basics or knowing how to revoke permissions and mitigate risks. They blindly follow trends, fearing being left behind.
This "herd mentality" not only exposes users to security risks but also traps the AI industry in a "wild west" phase: everyone chases trends, but no one addresses core issues like security, law, and ethics.
Another overlooked risk: OpenClaw's "autonomy" is only pseudo-autonomy.
Its "decision-making" is still based on user instructions and fed data. It lacks true "independent thought" and falters when faced with problems beyond its training scope, making errors or poor decisions.
Ultimately, OpenClaw is a double-edged sword. It offers convenience but also risks. If it fails to address security, legal, and ethical issues, this national frenzy will become a national pitfall.
AI Industry Reshuffle
OpenClaw’s explosion hasn’t just ignited a frenzy—it’s quietly reshaping the AI industry’s business logic. The strong will dominate, the weak will fade, and a fierce reshuffle is already underway.
First, competition among large model providers will shift from "brainpower" to "execution." Previously, firms competed on "smarter models, stronger computational power, and more precise answers." But OpenClaw revealed that AI’s value lies not in "talking" but in "doing."
In the future, large models that deeply integrate with AI agents and offer efficient execution will dominate the market. Those focusing solely on conversational abilities will be sidelined.
Domestic players like Moonshot AI and Minimax have already pivoted, launching OpenClaw-compatible features to capture traffic. Zhipu AI introduced a cloud-based AutoGLM-OpenClaw to lower user barriers. Tencent, Alibaba, and others have joined the fray, offering free installations or R&D investments to secure a foothold in this "execution competition."
Second, the AI agent ecosystem will enter a phase of "refined competition." The current OpenClaw-related industrial chain is chaotic, dominated by simple installation services and skill package development—low-effort, high-homogeneity plays by merchants chasing quick profits.
As OpenClaw proliferates, user demands will grow more sophisticated: customized skill packages, professional AI execution solutions, and security services. Only merchants offering refined, specialized services will survive; those chasing quick bucks will be eliminated.
Third, business models will evolve toward "one-person companies." OpenClaw enables "one person + one computer + one lobster = a commercial fleet," disrupting traditional entrepreneurship by lowering barriers and enabling "asset-light" startups.
Finally, AI governance and regulation will become paramount. OpenClaw’s rise exposed gaps in industry governance—security flaws, legal disputes, ethical dilemmas—all demanding joint efforts from governments, industries, and enterprises to establish robust frameworks.
OpenClaw will push the AI industry toward better governance, a process inevitably accompanied by reshuffling. Compliant, security-focused, and competent firms will thrive; those ignoring risks will fail.
Epilogue
The frenzy of national lobster-keeping will eventually fade, leaving rationality behind.
OpenClaw marks a milestone: AI’s transition from "assistance" to "execution," from "talking" to "doing." This shift will transform work and life while reshaping AI’s commercial logic.
But we must recognize that OpenClaw is no "miracle tool"—it’s a tool, both tempting and treacherous.
AI’s progress demands technical accumulation, industry standards, social tolerance, and above all, clear-headedness.
OpenClaw’s rise is just the beginning of the AI execution era. More AI agents will emerge, bringing convenience and challenges. We must embrace change while staying vigilant, mitigating risks, and enhancing our core competitiveness to thrive in the AI age.