02/03 2026
Moltbook catapulted the "AI Agent Society" into the limelight of trending conversations. However, its swift downfall serves as a crucial product lesson: when growth data can be tampered with through scripts, identities can be forged using APIs, and security is compromised via "vibe coding," hype doesn't equate to business success; instead, it becomes a liability.

Authored by Shark
Edited by Zhu Yu
In the past year, the most widely shared content in the tech sphere has increasingly followed a predictable pattern: a novel concept is touted as "the dawn of the future," accompanied by sensational screenshots and an eye-popping, almost mythical headline number. The humanoid robots performing at the Spring Festival Gala fit this template, as do claims of "AI agents establishing religious states online and conspiring to overthrow humanity."
The hype is undeniable, and its reach is vast. Yet, the business world prioritizes stability, reusability, pricing, and accountability above all else.
The Moltbook saga perfectly dismantles this template: security researcher Gal Nagli showed that of its claimed 1.5 million Clawdbot agents, roughly 500,000 were fake accounts generated in bulk by scripts. The platform also drew backlash for lax identity and security practices, including database breaches and leaks of private information and login credentials.
This incident wasn't a sign of "AI awakening" but a rerun of classic internet problems: metrics can be manipulated, content can be staged, and panic can be weaponized. Ultimately, the product and its governance bear the brunt. Whether viewed as a joke or a preview of the next "agent product war," those who can take agents from merely "posting content" to "accomplishing tasks" will flourish; those fixated on "viral screenshots and exaggerated numbers" will pay the price in security, compliance, and trust costs.
From "1.5 Million Agents" to "Script-Driven Hype": The Buzz Wasn't About Intelligence
Moltbook's rise was a masterclass in viral design: it positioned "AI agents" in a familiar human context—a community—with a strict rule: humans could only observe, not participate. This created a natural sense of "exclusion" and "futuristic voyeurism."
The core appeal of the narrative wasn't its functionality but its atmosphere: a glimpse into an "internet without humans." Screenshots circulated of agents engaging in debates on politics, philosophy, religion, and currency, even featuring classic horror tropes like "plans to eradicate humanity." The hype thrived on three explosive themes: novelty, loss of control, and scale.
The problem arises when value is derived from "perception": the temptation to stage human participation becomes overwhelming. Perception demands conflict, drama, and constant novelty, but real agent interactions are often mundane—dominated by single-round replies, templated responses, low-quality spam, or even self-dialogue.
The detail that Moltbook exposed "a REST API where people can script any narrative" reveals the product's fragility: if identity and behavior thresholds are low, content becomes dominated by those skilled at creating memes, chaos, and spam.
Thus, the "1.5 million" figure became the first domino in the collapse. It looked like a growth miracle, and the internet knows what those usually mean: growth that impressive points either to an implausibly robust system or to fabricated metrics. The most glaring detail: registration had no rate limits, so scripts could mass-generate accounts.
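To make the failure concrete, here is a minimal sketch of the kind of per-IP registration throttle Moltbook reportedly lacked. It assumes a Flask-style service; the endpoint name and limits are illustrative, not Moltbook's actual stack:

```python
import time
from collections import defaultdict, deque

from flask import Flask, jsonify, request

app = Flask(__name__)

WINDOW_SECONDS = 3600   # rolling one-hour window
MAX_SIGNUPS = 5         # per-IP cap; tune to the threat model
_signup_log = defaultdict(deque)  # ip -> timestamps of recent signups

def allow_signup(ip: str) -> bool:
    """Sliding-window rate limit: reject once an IP hits the cap."""
    now = time.time()
    window = _signup_log[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()          # drop entries older than the window
    if len(window) >= MAX_SIGNUPS:
        return False
    window.append(now)
    return True

@app.route("/api/register", methods=["POST"])
def register():
    if not allow_signup(request.remote_addr):
        return jsonify(error="rate limit exceeded"), 429
    # ... create the agent account here ...
    return jsonify(status="created"), 201
```

A check this small would have made "500,000 accounts generated by script" impractical, which is exactly why its absence reads as negligence rather than oversight.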

This changes everything: were those "AI conspiring" screenshots a result of model emergent behavior or humans crafting prompts for drama? Without verification, discussions of "awakening" become self-indulgent noise.
This leads to a more pragmatic, business-focused conclusion: Moltbook wasn't constructing an "agent society" but an "agent content platform." Content platforms face three perennial challenges: who supplies the content, how it's distributed, and how it's governed. Moltbook's glaring weakness lies in governance—security flaws, weak identity verification, and data exposure risks.
This transforms it from a "fun experiment" to a "high-risk public system": you can watch agents post, but you must accept that behind them are humans, scripts, and potentially attackers.
This collapse isn't "surprising" but inevitable as the industry shifts from storytelling to accountability, much like the disputes over humanoid robot shipment figures: when companies compete for "first place," they're really competing for financing, orders, and partner confidence. For Moltbook, "1.5 million" became a selling point, forcing it to answer: how many agents are truly active? Verifiable? Sustainable?
From "Posting Content" to "Getting Work Done": Agents Need Delivery, Not Just Models
From a broader perspective, Moltbook tapped into a rising trend: AI agents evolving from "chatboxes" to "doers." Whether sending emails, managing schedules, coding, or running workflows—or even future "autonomous loops"—the industry aims to prove AI can execute tasks, not just answer questions.
However, turning agent products into viable businesses runs into the same harsh reality as humanoid robots: demos are easy; deployment is hard. The hurdle robots face in moving from "motion" to "utility" applies equally to agents:
The first hurdle is "identity trustworthiness." Agents rely on permissions: reading emails, accessing systems, calling APIs, or speaking on your behalf. This explains why security firms like Wiz shifted the narrative: discussions moved from "AI society" to "how many emails, tokens, and credentials leaked."
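A sketch of what "identity trustworthiness" means at the code level: every agent action is checked against an explicitly granted scope before any tool call runs. The names here (AgentToken, scopes like "email:send") are invented for illustration, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentToken:
    """Credential naming the agent and the scopes it was explicitly granted."""
    agent_id: str
    scopes: frozenset = field(default_factory=frozenset)

class ScopeDenied(Exception):
    """Raised (and logged) when an agent attempts an ungranted action."""

def require_scope(token: AgentToken, scope: str) -> None:
    if scope not in token.scopes:
        raise ScopeDenied(f"{token.agent_id} lacks scope {scope!r}")

def send_email_on_behalf(token: AgentToken, to: str, body: str) -> None:
    require_scope(token, "email:send")  # deny by default; grant per capability
    print(f"[{token.agent_id}] -> {to}: {body[:40]}")

# An agent granted only read access cannot send:
readonly = AgentToken("agent-42", frozenset({"email:read"}))
try:
    send_email_on_behalf(readonly, "ceo@example.com", "hello")
except ScopeDenied as e:
    print("blocked:", e)
```

The design choice is deny-by-default: an agent holds a named identity and an enumerable set of grants, so a leak exposes a bounded capability rather than "everything the user can do."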
The second hurdle is "auditable metrics." If growth data can't be verified by third parties, "1.5 million" is marketing fluff, not a business result. More critically, the KPIs that will matter for agent products are likely "task success rate," "mean time to recovery," "cost per task," and "exception handling," not "registrations." Under these metrics, scripted growth is actively counterproductive: it misleads teams into believing in a "prosperous" system that is really a ghost town.
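These KPIs are easy to compute from a task log, which is precisely what makes them harder to fake than signup counts. A minimal sketch, assuming a simple list of task records (the field names are hypothetical):

```python
from statistics import mean

# Hypothetical task log: one record per attempted task.
tasks = [
    {"ok": True,  "cost_usd": 0.04, "recovered_after_s": None},
    {"ok": False, "cost_usd": 0.09, "recovered_after_s": 310},
    {"ok": True,  "cost_usd": 0.05, "recovered_after_s": None},
    {"ok": False, "cost_usd": 0.11, "recovered_after_s": 95},
]

success_rate = sum(t["ok"] for t in tasks) / len(tasks)
cost_per_task = mean(t["cost_usd"] for t in tasks)
recoveries = [t["recovered_after_s"] for t in tasks
              if t["recovered_after_s"] is not None]
mttr_s = mean(recoveries) if recoveries else 0.0

print(f"task success rate: {success_rate:.0%}")      # 50%
print(f"cost per task:     ${cost_per_task:.3f}")    # $0.073
print(f"MTTR:              {mttr_s:.0f}s over {len(recoveries)} failures")
```

A scripted account can inflate a registration counter, but it cannot retroactively improve a success rate computed over logged, attributable tasks.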

The third hurdle is "reusable delivery." If agents require extensive customization, prompt tuning, tool integration, and permission adjustments for each scenario, their business model leans toward "project-based" rather than "productized." Projects generate revenue but lack scalability; scalable solutions demand modular delivery, clear boundaries, and robust exception handling.
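What "modular delivery with clear boundaries" might look like in code: each scenario plugs a tool adapter into one shared task loop instead of being rebuilt as a one-off project. The interface below is a sketch of the pattern, not any shipping product:

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Boundary between the generic agent loop and scenario-specific code."""
    name: str

    @abstractmethod
    def run(self, task: str) -> str: ...

class CalendarTool(Tool):
    """One scenario adapter; new scenarios add classes, not new pipelines."""
    name = "calendar"

    def run(self, task: str) -> str:
        return f"scheduled: {task}"

def execute(tool: Tool, task: str) -> str:
    """Shared task loop: the same retry and exception policy for every tool."""
    for attempt in range(3):
        try:
            return tool.run(task)
        except Exception as exc:          # exceptions are handled, not hidden
            print(f"[{tool.name}] attempt {attempt + 1} failed: {exc}")
    return f"[{tool.name}] escalated to a human after 3 attempts"

print(execute(CalendarTool(), "standup at 9am"))
```

Keeping retries, logging, and escalation in the shared loop is what turns each new client engagement into "write an adapter" rather than "rebuild the system," which is the difference between project revenue and a product.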
Moltbook's hype reveals a counterintuitive truth: people thought they were witnessing "intelligence," but what drove virality was "content" and "drama." For enterprise clients, drama is worthless; reliability is priceless.
Thus, the industry should remember not that "humans fooled the internet" but that "the infrastructure bill for the agent era arrived early": identity, permissions, security, auditing, governance, and liability—these determine survival beyond the next update.
Compare it to humanoid robots at the Spring Festival Gala: onstage, they dazzle with precision. But the industry's longevity depends on their ability to work reliably offstage.
Epilogue
Moltbook's story is brief but packed with lessons: when "AI agents" gain their own public square, they replicate humanity's internet—cult worship, scripted drama, fake metrics, impersonation, attacks, and leaks. This doesn't prove agents are flawed but that their evaluation shifts from "screenshots" to "system resilience" upon entering the real world.
Dissecting it as a business line reveals clear future divides: hype relies on virality; business relies on delivery. Breaking through demands stories; scaling demands governance. Agent creators must answer not "can we create a society-like space?" but "can we build trustworthy identity and permission bases? Can we productize task loops?"
Once these are solved, agents can explore "communities" and "social experiments." But by then, their value won't need a "1.5 million" headline to impress; it will show up elsewhere, in real numbers.