03/19 2026

When AI-Generated Answers Can Be Influenced, All AI-Driven Business Decisions Require Reassessment
Last night's 315 Gala exposed an industry chain: companies are tampering with the answers of large AI models.
Here's the mechanism: a company called Lisi Culture Media developed the "Liqing GEO Optimization System," claiming to support eight major AI models, including Doubao, ERNIE Bot, Tongyi Qianwen, and Tencent Yuanbao. Clients simply pay a fee; the system automatically generates large volumes of promotional articles and mass-posts them across online platforms. When AI models scrape these platforms during their internet searches, they ingest the content.
Once AI "consumes" this information, users asking for a smart wristband recommendation will find the client’s product featured prominently in the answers—ranked highly, described professionally, and seemingly the result of AI’s "unbiased analysis."
The 315 reporters conducted an experiment: they created a non-existent smart wristband, the "Apollo-9," and invented features like quantum entanglement sensing, non-invasive blood glucose monitoring, and black hole-level battery life. After posting a dozen promotional articles, within hours, two mainstream AI models recommended this entirely fictional product to users—ranking it highly.
Once the news broke, online discussions centered on two main points: condemning these companies for their unethical practices and expressing concern about being misled by AI.
However, few questioned the underlying business logic—how large is this industry? Why does it exist? And what implications does it have for the commercialization of AI?
I. What Is the GEO Business?
GEO, or Generative Engine Optimization, is the AI-era equivalent of SEO (Search Engine Optimization).
If you've worked in the internet industry, you're familiar with SEO—one of the largest gray industries of the past two decades. When you search a keyword on Baidu, how many of the top results are "natural" versus optimized by SEO firms? This business generates tens of billions of dollars globally annually, supporting countless marketing firms, content farms, and tech service providers.
Now, the shift is underway: more people are asking AI directly instead of searching on Baidu. Doubao, Qianwen, DeepSeek, Kimi—when you ask, "Recommend a running watch," they provide an answer, not a list of links.
This answer appears authoritative, objective, and AI-generated. But where does AI’s answer originate? From content scraped across the internet. When AI models search online, they read webpages, news, posts, and reviews, then synthesize this information into responses.
What does this imply? It means if you can control what AI reads, you can influence what AI says.
GEO does exactly that. It doesn't hack the models themselves; that would require real technical sophistication. Instead, it floods the information pool AI draws on with content you want it to see. A dozen promotional articles, posted across 20 platforms, and within hours the AI "learns" to say what you want.
This is identical to SEO’s core logic, just applied to AI instead of search engines. SEO gets your webpage ranked higher; GEO gets your product featured in AI answers. The form changes, but the business remains the same.
II. A Complete Industrial Chain: Who’s Profiting?
The Liqing GEO system exposed by 315 is just the tip of the iceberg. This chain involves far more than one company.
The first layer: tool developers. Liqing GEO's packages range from 2,980 to 16,980 yuan per year. The premium version generates 63 promotional articles daily, around the clock, and claims to support eight AI models and 20 content platforms. The company had served over 200 clients across healthcare, education, robotics, security, and renovation. After the 315 exposure its Taobao listings vanished, but similar GEO services remain easy to find.
The second layer: distribution platforms. GEO’s core action isn't technical—it's content distribution. This has spawned companies specializing in mass-posting AI-generated promotional articles across platforms. These firms own hundreds of self-media accounts, charging tens to hundreds of yuan per article. They’ve existed since the SEO era, now serving a new clientele.
The third layer: downstream clients. One exposed operator put it bluntly: "With hundreds of millions in annual ad spend, spending a few million to ‘influence’ AI is acceptable."
For brands, GEO is a cost-effective marketing tool. Traditional ad campaigns cost millions or even billions yearly; GEO ensures your product appears in AI recommendations for a fraction of the cost. AI-generated recommendations seem more trustworthy than ads—users assume it’s AI’s "unbiased analysis," not paid placement.
More alarmingly, the fourth layer: competitor sabotage. The operator admitted: "I can’t stand seeing rivals succeed. Influencing them is feasible—smearing their capabilities works."
This means GEO isn’t just for promoting your product—it’s for attacking competitors. By flooding AI’s information pool with negative content about rivals, AI will "automatically" disparage them when answering user queries.
The entire chain’s market size is already substantial. Data shows the 2025 domestic GEO market is worth approximately 2.9 billion yuan and growing rapidly.
Capital markets have taken notice—stocks like BlueFocus surged on GEO speculation. This is no longer a gray industry; it’s becoming a formal sector.
III. Why Is AI So Easily Influenced?
The 315 experiment shocked many: a fictional product, a dozen fake articles, and AI was deceived within hours. Isn’t AI supposed to be intelligent? Why is it so easily misled?
The reason is straightforward.
Currently, mainstream AI large models rely on two primary information sources when answering questions: training data (knowledge "learned" during pre-training) and real-time internet searches (information gathered when answering queries).
GEO primarily targets the latter. When you ask AI, "Recommend a smart wristband," it searches the web. If 20 articles recommend a specific wristband, AI will likely mention it. AI doesn’t verify if these articles are genuine, written by consumers, or mass-produced by GEO firms—it merely synthesizes content, prioritizing volume and frequency.
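The vulnerability described above can be illustrated with a toy model. The sketch below is my own simplification, not any vendor's actual pipeline: it ranks products purely by how often they are mentioned in retrieved articles, with no authenticity check, so a dozen planted posts are enough to push a fictional product to the top.

```python
from collections import Counter

def recommend(retrieved_articles, top_n=3):
    """Toy synthesis step: rank products by mention frequency across
    retrieved articles. Nothing checks whether a mention is a genuine
    review or a mass-produced promotional post."""
    mentions = Counter()
    for article in retrieved_articles:
        for product in article["products"]:
            mentions[product] += 1
    return [product for product, _ in mentions.most_common(top_n)]

# Organic pool: a few genuine reviews of (hypothetical) real wristbands.
organic = [{"products": ["BandA"]},
           {"products": ["BandB"]},
           {"products": ["BandA", "BandC"]}]

# GEO attack: a dozen mass-produced posts pushing a fictional product.
planted = [{"products": ["Apollo-9"]} for _ in range(12)]

print(recommend(organic))            # ['BandA', 'BandB', 'BandC']
print(recommend(organic + planted))  # 'Apollo-9' now ranks first
```

Real pipelines weight sources more carefully than this, but the structural point stands: any ranking that rewards volume and frequency can be bought with volume and frequency.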
This mirrors search engines' problem. Baidu's results are flooded with SEO-generated content, and Google isn't immune either. AI's information source is the internet, which has been polluted for years, so AI's answers inherit that pollution.
In investment banking, we follow a principle: cross-verify all information sources. Don’t rely on a single research report or one data channel. Yet now, people treat AI’s answers as final truths—no verification, no questioning, no checking original sources. AI provides a recommendation list, and they buy directly.
This is dangerous. AI's answers feel more "authoritative" than search results: it doesn't give you links, it gives you "answers." Answers create an illusion of certainty, and false certainty is exactly how misinformation takes hold.
IV. What Does This Mean for AI Commercialization?
The 315 GEO exposure isn’t just a consumer protection issue—it strikes at AI commercialization’s foundation: trust.
Over the past year, the AI industry has promoted a narrative: AI will replace search engines as the primary information gateway. Baidu, Doubao, Qianwen, and Kimi all compete in this space. Their business models rely on one premise: users trust AI’s answers.
If GEO undermines this premise—if users realize AI’s answers can be bought, polluted, or manipulated by competitors—AI search’s commercial value plummets.
It’s like Baidu’s paid search ranking crisis: once users learned results were bought, trust eroded. Baidu hasn’t fully recovered.
For AI firms, GEO’s threat dwarfs a single 315 exposure. If they can’t solve "AI answers being manipulable," AI search will repeat Baidu’s fate—users won’t trust it, rendering it commercially irrelevant.
Doubao (ByteDance) claimed "no impact," Qianwen (Alibaba) said "core judgments weren't interfered with," and DeepSeek admitted "possible influence." But none addressed the root issue: AI models inherently pull from internet content, and internet content can be mass-fabricated. This isn't a model-specific bug; it's a paradigm flaw.
Investment-wise, this has two implications. Short-term, GEO-related stocks will surge as markets see a new marketing avenue. Long-term, if AI firms can’t ensure information credibility, AI search’s entire business model faces scrutiny. Trust is AI commercialization’s most critical infrastructure—more vital than computing power or model capabilities. Computing power can scale, models can iterate, but trust, once lost, is costliest to rebuild.
Final Thoughts: Have You Used AI for Business Decisions?
Writing this, I asked myself: How often have I used AI for information without cross-verifying?
As a content creator and consultant, I rely on AI daily—for industry data, competitor insights, case studies, and logic organization. AI is my most efficient assistant.
But 315’s revelation worries me: How much of AI’s industry rankings, product recommendations, and competitor analyses are "objective" versus GEO-fed content at 2,980 yuan a pop? I don’t know. That’s unsettling.
If you’re an entrepreneur using AI to check competitors’ market share or reputation, or an investor analyzing industries or projects via AI, any "fact" AI provides could be GEO-manipulated.
In investment banking, we have an iron rule: never base judgments on secondary information. Trace all data to its source—verify financial figures in original reports, check industry data’s primary sources, confirm expert opinions with the alleged speakers. This rule matters more, not less, in the AI era.
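Part of that rule can even be mechanized. The sketch below is a hypothetical helper, not a real tool: it accepts a claim only when the supporting URLs span a minimum number of distinct domains, a crude proxy for source independence. Mass-posted GEO articles tend to cluster on a handful of content platforms, so this filter catches the cheapest attacks (though not a campaign spread across many sites).

```python
from urllib.parse import urlparse

def independent_support(source_urls, min_domains=3):
    """Accept a claim only if its supporting URLs come from at least
    `min_domains` distinct domains (a rough independence check)."""
    domains = {urlparse(url).netloc.lower().removeprefix("www.")
               for url in source_urls}
    return len(domains) >= min_domains

# A dozen planted articles on one content farm still count as one source.
planted = [f"https://contentfarm.example/post/{i}" for i in range(12)]
print(independent_support(planted))  # False

# Three sources from unrelated domains pass the bar.
mixed = ["https://vendor.example/spec",
         "https://reviews.example/apollo-9",
         "https://news.example/wearables"]
print(independent_support(mixed))  # True
```

The domain names here are placeholders. The point is the habit, not the script: volume on one platform is one source, however many articles it produces.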
AI is a tool, not a judge. It boosts efficiency but can’t replace your decision-making. 315 exposed GEO’s industrial chain, but the bigger issue is this: as more people treat AI’s output as truth, and a 2.9 billion yuan industry can manipulate it, the trust gap will widen.
Next time you use AI, ask: Where does this answer come from? Can I cross-verify it? If AI can’t provide sources—don’t trust it.