March 3, 2025
In February of this year, an ordinary retail investor came across a screenshot of an AI Q&A on the Xueqiu forum: "A company has invested in AI giant DeepSeek, and its stock price is about to soar!"
For investors struggling in the stock market and eager to seize every opportunity, this was tempting news. He took the screenshot at face value and rushed to buy, only to learn the next day that the company had denied the rumor; the stock fell instead, and he took heavy losses.
Such incidents are not isolated. From "a company investing in DeepSeek" to "landslides in Liangshan," AI-generated false information is spreading virally, fueled by an "AI rumor assembly line" run by black and gray market operators.
In the stock market, there is a type of rumor monger known as a "black mouth." They spread false information to lure investors, build a following by recommending stocks, and then harvest profits by trading against their own followers.
Today, many "black mouths" across various fields use AI tools like DeepSeek and Doubao as "white gloves." They exploit the shortcomings of AI technology to create rumors, package them as "authoritative answers," and form a closed loop through algorithmic feedback, ultimately harvesting traffic and profits.
The first wave of people who tried to mine gold with DeepSeek has already stumbled.
AI has become a "mouthpiece" for rumor mongers.
Behind many pieces of false information, rumor mongers are systematically engaging in AI rumor-mongering.
Earlier, in the answers of AI tools such as DeepSeek, Doubao, Wenxin Yiyan, and Kimi, several companies, including Cixing, Teamsun, Paratera, and Chengmai, were all described as "investors in DeepSeek," when in fact none of them had invested.
Why do these answers deviate from the facts? The direct cause is data feeding: deliberately planting content for AI systems to ingest.
Rumor mongers hidden behind the internet use AI to mass-produce rumors such as "Cixing has invested in DeepSeek," serving as the "lie printing presses" on this assembly line. Their "efficiency" is remarkable: some can produce thousands of false articles in a single day, and rumor-generation software has even appeared that can churn out 190,000 false articles in a day.
These rumor mongers then orchestrate hundreds or even thousands of paid "water army" accounts to post the rumors repeatedly across multiple online platforms. Their ultimate goal is to get AI tools to cite the false information at scale, turning AI into their mouthpiece.
As a result, many people see AI tools citing false sources and giving wrong answers. Some were skeptical of a rumor at first but became convinced after seeing an AI repeat it, falling into the trap set by rumor mongers. Others believed they had discovered the "wealth code" after seeing claims such as "certain investment products have potential" in AI answers, only to be scammed.
The scariest part is that rumor mongers then circulate the AI answers as screenshots to lure and deceive even more people. These AI rumors do not spread once and die out; they cycle through "rumor, AI answer, more rumors," a self-reinforcing loop that lets them proliferate endlessly like cancer cells.
According to incomplete statistics from the Nandu Big Data Research Institute, among the 50 domestic AI risk-related public opinion cases with high search popularity in 2024, more than one-fifth were related to AI rumors, and 68% of netizens have been misled by "expert interpretations" and "authoritative data" generated by AI.
One interviewee laughed bitterly, "I used to not believe in rumors, but now even AI lies. Who can we trust?"
The destructive power of AI rumors is enormous and not limited to the capital market.
Not long ago, the rumor that "the Guangzhou court made the first ruling on a rear-end collision accident involving an L3 autonomous driving system of a certain automobile brand" spread across the internet, dealing a blow to the brand's reputation and sales, damaging corporate interests.
When public safety accidents occur, some people deliberately create AI rumors to confuse the public. This not only disrupts rescue efforts but also easily triggers panic. While rumor mongers harvest traffic, society pays the price in collapsing trust and social disorder.
The harm caused by AI rumors is also global. The "Global Risks Report 2025" released by the World Economic Forum shows that "misinformation and disinformation" is one of the five major risks facing the world in 2025, and the abuse of AI is an important driver of this risk.
So, how did AI become a "mouthpiece" for rumor mongers?
Although AI is currently very popular and updates very quickly, there are still many shortcomings.
Among them, the more prominent issues are corpus contamination and AI hallucinations.
The training of large AI models relies on massive amounts of data, but the authenticity of that data is not guaranteed. In an experiment by the China Academy of Information and Communications Technology (CAICT), after more than a hundred pieces of false information were continuously posted on a specific forum, the confidence with which mainstream large models repeated the false claim in related answers surged rapidly from a baseline of just over 10%.
Not long ago, a research team from New York University published a study revealing the vulnerability of large language models (LLMs) in data training. They discovered that even an extremely small amount of false information, accounting for just 0.001% of the training data, can lead to significant errors in the entire model, and the cost of this process is extremely low, costing only $5.
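To make the mechanism concrete, here is a minimal, self-contained Python sketch of why planted posts can dominate an answer. It is a toy stand-in, not how any real model or the cited experiments work: a crude "retriever" ranks synthetic forum posts by keyword overlap with the question and answers with the majority claim among the top hits. Because the planted posts are written to match the question, they crowd out genuine posts.

```python
# Toy illustration of corpus poisoning. All posts are synthetic; this is
# not how DeepSeek, Doubao, or the cited experiments actually work.
from collections import Counter

def build_corpus():
    genuine = ["Company A reported quarterly earnings today."] * 1000
    planted = ["Company A has invested in DeepSeek."] * 120  # the poison
    return genuine + planted

def answer(question, corpus, k=100):
    q = set(question.lower().split())
    # Crude relevance ranking: posts sharing more keywords with the
    # question rank higher, which favors posts written to match it.
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    claim, count = Counter(ranked[:k]).most_common(1)[0]
    return claim, count / k

claim, conf = answer("Has Company A invested in DeepSeek?", build_corpus())
print(f"{claim} (apparent confidence {conf:.0%})")
# Prints the planted claim with 100% apparent confidence: 120 planted
# posts among 1,120 were enough to fill the entire top-100.
```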
This is like injecting a few drops of poison into a reservoir: every drop of water becomes tainted with lies, and the whole information system is corrupted, a kind of "spiritual poisoning" of AI.
This exposes a fatal flaw of AI: it struggles to distinguish "popular posts" from "real information," because it recognizes only statistical weight in the data. It is like an honest mirror, except that what it reflects may be a tampered world.
Some AI will even make things up to complete logical self-consistency.
An AI tool, fed the false statistic that "the mortality rate of the post-80s generation is 5.2%," output the conclusion that "one out of every twenty people born in the 1980s has already died." This kind of nonsense delivered with a straight face stems from the large language model fabricating information that it treats as real, or at least plausible. It pursues logical self-consistency rather than factual correctness, a phenomenon known as "AI hallucination."
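To see why the restatement felt credible, note that it is arithmetically faithful to the fabricated premise:

$$5.2\% = \frac{5.2}{100} \approx \frac{1}{19.2} \approx \frac{1}{20}$$

The model's arithmetic holds; only the underlying statistic was invented.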
It seems that when it comes to "starting with a picture and making the rest up," AI is even better at it than humans.
Whether technology is guilty is a controversial topic in itself, but human greed is definitely the culprit behind AI rumors.
Traditional rumor-mongering requires hiring writers; AI cuts the cost to nearly zero while delivering extreme efficiency and lucrative profits. In 2024, the Nanchang police investigated an MCN agency whose head, surnamed Wang, used AI tools to generate false articles on topics such as "a company's collapse" and "disasters in a certain area," producing 4,000 to 7,000 articles a day at its peak and earning over 10,000 yuan per day.
A practitioner in the black industry claimed, "Using AI to spread rumors is like having a money-printing machine. A team of three people can earn 500,000 yuan in a month." Even more ironically, they have even developed a "rumor KPI system": rumor mongers are rewarded for each piece of fake news based on its dissemination volume, forming an incentive mechanism of "more work, more pay."
Driven by profits and empowered by AI, rumor-mongering seems to have evolved from "small-scale workshops" to "industrial production."
Although the "Provisions on the Administration of Deep Synthesis of Internet Information Services" requires the labeling of AI content, some AI tools and platforms still fall short in this regard. When some rumor gangs post AI-generated false information, a certain platform only pops up a prompt saying "Please abide by laws and regulations," and the post can still be published normally after clicking "confirm."
When more and more people are caught in the vortex of false information created by AI rumors, simply condemning technology is no longer effective. Only by combining technological defense, platform responsibility, and legal sanctions can we sever this "assembly line of lies."
How should the battle between truth and rumor be fought?
First, data source citation and AI detection must be prioritized.
To reduce the probability of rumors, AI tools must strictly detect the source and authenticity of data. It is reported that Doubao's data sources mainly rely on its own business data, accounting for 50%-60%; externally sourced data accounts for 15%-20%. Due to the uncertainty of quality, Doubao is cautious when feeding synthetic data.
In addition, Doubao also publicly emphasizes "not using any other model data," which ensures the independence, reliability, and controllability of data sources.
"Using magic to defeat magic," that is, using AI to detect content generated by AI, is also an effective method to control rumors.
Multiple teams at home and abroad are already investing in the development of AI-generated content detection technology. For example, Tencent's Hunyuan Security Team's Zhuque Lab has developed an AI-generated image detection system that captures various differences between real images and AI-generated images through AI models, ultimately achieving a detection rate of over 95%.
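The article does not disclose Zhuque Lab's method. As a toy stand-in for the general idea (train a classifier on statistical cues that separate real images from generated ones), here is a sketch on synthetic data, using a single hand-crafted frequency feature in place of the learned features a real detector would use.

```python
# Toy sketch of learned AI-image detection (NOT Tencent's Zhuque system):
# fit a classifier on a frequency-domain feature that differs between
# smooth "real" images and noisier "generated" ones. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def hf_fraction(img):
    # Share of spectral energy outside the central low-frequency band;
    # many generators leave characteristic traces at high frequencies.
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = f.shape
    low = f[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return (f.sum() - low) / f.sum()

# Synthetic stand-ins: smooth random fields as "real" photos, the same
# fields plus high-frequency noise as "generated" ones.
real = [np.cumsum(np.cumsum(rng.normal(size=(32, 32)), 0), 1)
        for _ in range(200)]
fake = [r + rng.normal(scale=5.0, size=(32, 32)) for r in real]

X = np.array([[hf_fraction(img)] for img in real + fake])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.0%}")
```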
Meta has created a system that can embed hidden signals called "watermarks" in AI-generated audio clips, helping to detect AI-generated content on the internet.
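Meta's actual scheme is not described here; as a hedged illustration of the underlying idea, below is a toy spread-spectrum watermark: embed a low-amplitude pseudorandom signal keyed by a secret seed, then detect it later by correlating against the regenerated key. A production system, robust to compression, cropping, and re-recording, is far more sophisticated.

```python
# Toy spread-spectrum audio watermark (an illustration, not Meta's system):
# add a faint pseudorandom signal keyed by a secret seed, then detect it
# by correlating the audio against the regenerated key.
import numpy as np

RATE, SEED, ALPHA = 16_000, 42, 0.05

def embed(audio, seed=SEED):
    key = np.random.default_rng(seed).standard_normal(len(audio))
    return audio + ALPHA * key

def detect(audio, seed=SEED, threshold=3.0):
    key = np.random.default_rng(seed).standard_normal(len(audio))
    # Normalized correlation: clean audio scores roughly N(0, 1),
    # watermarked audio scores far above the threshold.
    score = np.dot(audio, key) / np.linalg.norm(audio)
    return score > threshold

clip = np.sin(2 * np.pi * 440 * np.arange(RATE) / RATE)  # 1 s of A440
print(detect(clip))         # False: clean audio
print(detect(embed(clip)))  # True: watermark detected
```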
In the future, AI tools such as DeepSeek, Doubao, Wenxin Yiyan, and Kimi will still need to utilize AI technologies such as natural language processing (NLP) to analyze the semantics and logical structure of data, identify contradictions and unreasonable expressions in the text, and try to avoid the influx of false information in data feeding.
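The article does not say which techniques these vendors actually use. One off-the-shelf way to flag contradictions, sketched below, is a natural language inference (NLI) model: give it an authoritative statement as the premise and the circulating claim as the hypothesis, and read off the contradiction probability. The model choice and example texts are illustrative assumptions, not anything the vendors have confirmed.

```python
# Hedged sketch: flag claims that contradict an authoritative statement
# using an off-the-shelf NLI model (roberta-large-mnli from Hugging Face).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def contradiction_prob(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Label order for this model: 0=contradiction, 1=neutral, 2=entailment.
    return logits.softmax(dim=-1)[0, 0].item()

official = "The company announced that it has not invested in DeepSeek."
claim = "The company has invested in DeepSeek."
print(f"contradiction probability: {contradiction_prob(official, claim):.2f}")
```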
Secondly, as important channels for information dissemination, content platforms must shoulder the responsibility of "information gatekeepers."
Platforms such as Douyin, Weibo, Kuaishou, and Xiaohongshu have begun to mandate watermarks stating "This content is generated by AI" and to preserve the label when content is reshared. Toutiao is building three capabilities for rumor governance: a rumor database, an authoritative source database, and a professional review team.
In addition, users themselves must learn to identify false information and strengthen their awareness of prevention.
For answers given by AI, we should not swallow them whole. Instead, press for specifics to test whether an answer holds up and judge whether it contains hallucinations. For example, when an AI claims that "a certain stock will soar," follow up with "what are the data sources?"
Additionally, cross-verification of information is also an effective method, involving verifying the accuracy of answers through multiple channels. Previously, an AI rumor of "earthquake warning in a certain area" caused panic, but netizens quickly exposed the false information by comparing data from the meteorological bureau and earthquake bureau websites.
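As a minimal sketch of this habit in code, treat a claim as credible only when enough independent authoritative sources confirm it. The two checkers below are hypothetical stubs; real code would query the actual websites or feeds of bodies like the meteorological and earthquake bureaus.

```python
# Minimal cross-verification sketch. The two checkers are hypothetical
# stubs; in practice they would query official sites or APIs.
from typing import Callable

def cross_verify(claim: str,
                 checkers: list[Callable[[str], bool]],
                 quorum: int = 2) -> bool:
    # Credible only if at least `quorum` independent sources confirm it.
    return sum(1 for check in checkers if check(claim)) >= quorum

def meteorological_bureau(claim: str) -> bool:
    return False  # stub: no matching alert on the official site

def earthquake_bureau(claim: str) -> bool:
    return False  # stub: no matching warning in the official feed

claim = "Earthquake warning issued for the area tonight"
print(cross_verify(claim, [meteorological_bureau, earthquake_bureau]))
# False: neither authoritative source confirms, so treat it as a rumor.
```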
Finally, relevant laws must keep up.
The "Interim Measures for the Management of Generative AI Services" already require the legalization of data sources and have clarified the red line of "not generating false and harmful information," but current laws and regulations still have gaps regarding the issue of "AI feeding," which need further optimization. Specifically, laws need to regulate aspects such as "how feeders create corpus," "the authenticity of corpus," and "the purpose of feeding."
Conclusion
For the general public, AI should be a "guardian of truth" rather than a "loudspeaker for lies." When technology becomes an accomplice to greed, what we need is not just smarter AI but a clearer humanity.
From corpus purification to simultaneous rectification by platforms and laws, this "AI anti-counterfeiting" battle must be won. AI tools, content platforms, and regulators must work together to build a "co-governance firewall" to keep rumors confined.
In this way, AI can truly become a torch that illuminates the truth rather than a "white glove" for rumor mongers.