12/29 2025

MOMO’s Presence in the Yangtze River Delta
Doubao has secured a spot at this year's Spring Festival Gala. Amid the nationwide celebration, can AI do more than chase downloads and users, and actually contribute to fraud prevention? After all, AI has made the virtual and the real increasingly hard to tell apart, leaving everyone stuck puzzling over which pieces belong to "Teacher Dou's collection."
Thumbtack went viral with the line, "His singing causes a pain that even ibuprofen can't relieve."
My first encounter with Thumbtack was through his rendition of Meng Tingwei’s "The Face on the Moon," where he transformed the originally subtle and sentimental farewell of a young girl into a raw, emotional outpouring of her grievances.
Apart from this song, his covers of classics like "I Love You So Much," "Tears in the Ocean," and "Star Wishes" have been heavily promoted by algorithms across various platforms.
Curious about the sudden emergence of such a powerful singer, I did some research and discovered that Thumbtack is a virtual artist created by Kugou using AI technology. Honestly, I couldn't tell the difference, and when I asked around, few of my friends were aware that Thumbtack was an AI.
This reminds me of a popular meme lately: "Seems like another piece from Teacher Dou’s collection."
Originally, netizens used this phrase to mock the stiff early output of AI tools like Doubao. Now it has evolved to describe suspiciously polished copywriting, images that look a touch "too perfect" yet subtly off, or videos that defy common sense but seem authentic at first glance.
The popularity of this meme highlights the need for our eyes and ears to develop a new skill in this era: discerning which content might be machine-generated.
Unfortunately, as AI advances, the era where carbon-based (human) and silicon-based (AI) content are indistinguishable is drawing nearer.
Not long ago, XPENG was caught up in an "AI forgery" controversy. An indecent video allegedly involving XPENG at an auto show circulated online, sparking public outrage. A subsequent police report confirmed the video had been generated with AI, and the perpetrator was detained for 10 days.
I saw that video too. Rationally, I knew it was fake, but my eyes couldn't discern the difference.
XPENG isn't the only one affected by "Teacher Dou's creations." Upon closer inspection, I found that AI-simulated content has caused at least four major harms.
1. The most direct harm is online fraud.
For instance, three elderly individuals in Huangshi, Hubei, received desperate calls for help from their "grandson," and without suspicion, handed over tens of thousands of yuan in cash because the voice on the phone sounded exactly like their grandchild. Such AI voice-mimicking scams have a far higher success rate and pose greater risks than traditional fraud methods.
2. Counterfeiting and infringement are rampant, with public figures being the hardest hit.
Face-swapping and voice-cloning technologies have made identity theft effortless. Olympic champions Quan Hongchan and Sun Yingsha had their voices cloned for live-stream sales, while actress Wen Zhengrong publicly shared her experience of being impersonated via AI face-swapping in a fake live stream.
An online celebrity drummer named "Xiaolv Yuanyuan" told me that her previous biggest copyright issue was others rebroadcasting her content without permission. Now, her drumming image has been AI-generated into short dramas featuring various singers and idols. The latter is far more complex and costly to address.
3. "False recommendations" mislead consumer decisions.
In Hangzhou, the first case in Zhejiang involving AI-generated commercial content was adjudicated this year, ruling that AI writing tools specifically designed to create fake "recommendation" notes are illegal. These meticulously crafted "user experiences" and "personal recommendations" systematically mislead consumers and undermine the efforts of honest businesses. A similar case occurred in Zhengzhou, where merchants used AI to generate fake positive reviews for fraud, resulting in a ruling of unfair competition.
4. Mass-produced rumors have formed a black-market industry chain.
A few years ago, a notorious "Jiangxi Gang" used AI tools to mass-produce fake content for traffic revenue. I thought most had been dismantled, but this week, news broke that Yantai police uncovered a group scraping keywords like "Xiaomi," "Huawei," and "Li Auto" to generate low-quality, identical "digital waste" via AI, solely for platform traffic revenue, involving over 8,000 accounts.
AI can produce thousands of pieces of false information daily, ranging from fabricating corporate rumors that cause store revenue to plummet to generating fake disaster videos that trigger social panic. This industrial-scale, low-cost rumor-mongering is polluting the information environment and eroding the foundation of social trust, causing even greater harm.
When the production of false content becomes so efficient and inexpensive, the basis of our judgment about the world's authenticity—what we see, hear, and even others' shared experiences—begins to crumble. So, how can we address this?
I can think of five approaches:
1. AI tools must take greater responsibility.
A mandatory labeling system for AI-generated content is the first step toward "isolation." Some initial progress has been made, such as Baidu's "ERNIE Bot" and Douyin's AI effects automatically adding watermarks or labels to indicate their AI origin.
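To make the idea of mandatory labeling concrete, here is a minimal sketch of how a platform might attach a machine-readable provenance label to AI-generated text. The JSON field names (`provenance`, `ai_generated`, etc.) are purely illustrative assumptions, not any official standard such as a real platform's watermark format:

```python
import json
import hashlib
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> str:
    """Wrap AI-generated text in a machine-readable provenance envelope.

    All field names here are illustrative, not an official standard.
    """
    record = {
        "content": text,
        "provenance": {
            "generated_by": model_name,   # which model produced the text
            "ai_generated": True,         # explicit disclosure flag
            "created_at": datetime.now(timezone.utc).isoformat(),
            # Digest lets downstream platforms detect tampering with the text.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }
    return json.dumps(record, ensure_ascii=False)

def is_labeled_ai(envelope: str) -> bool:
    """Check whether a piece of content declares itself AI-generated."""
    try:
        data = json.loads(envelope)
        return bool(data.get("provenance", {}).get("ai_generated"))
    except (json.JSONDecodeError, AttributeError):
        return False
```

A visible watermark serves human readers; a structured label like this serves downstream platforms, which can filter, rank, or flag AI-originated content automatically.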
2. Use AI to defeat AI—accelerate the development of AI detection technology.
Some tech companies, both domestically and abroad, are already developing AI-generated content detection tools. Currently, some can effectively identify most AI-generated content by analyzing pixel-level image features and textual logical coherence, providing crucial support for human review.
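Production-grade detectors rely on trained classifiers over pixel- and text-level features, but even crude statistics can catch the cheapest abuse. As a toy sketch (not any real detector), two heuristics: vocabulary diversity of a single text, and how often identical posts recur in a batch, the signature of mass-produced "digital waste":

```python
from collections import Counter

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words / total words.
    Mass-generated filler text often scores unusually low."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def duplicate_ratio(texts: list[str]) -> float:
    """Fraction of posts in a batch that are exact duplicates of
    another post, a pattern typical of templated spam accounts."""
    if not texts:
        return 0.0
    counts = Counter(t.strip() for t in texts)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(texts)
```

Real systems would use trained models and fuzzy matching rather than exact string counts, but the principle is the same: machine-generated floods leave statistical fingerprints that machines are well placed to find.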
3. Platforms should open up public participation in error correction.
To address AI "hallucinations" and errors, more open feedback channels are needed. A better system would go beyond simple "likes/dislikes" by establishing dedicated error-reporting channels and supplementing them with expert teams to review complex issues. This transforms user discoveries into collective efforts to optimize models.
4. Implement tiered management for high-risk content.
Platforms must take differentiated measures based on the severity of content risks. For high-risk content like impersonating public figures or fabricating authoritative news, strict measures such as strong warnings, traffic restrictions, or even bans on dissemination could be adopted. For example, adding more prominent warnings to AI-generated "celebrity speech" videos.
5. Explore "verifiable authenticity" technological pathways.
The ultimate solution is to create a "technological ID" for authentic content. In intellectual property, original content can already be bound with tamper-proof digital credentials (e.g., encrypted digital watermarks, blockchain certification) starting from the capture device. Could important news, evidence, or official information adopt such technological firewalls in the future?
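The mechanism behind such a "technological ID" can be sketched in a few lines: sign a digest of the content at capture time, and verify it later. This toy version uses a shared HMAC key for brevity; real provenance systems (e.g. public-key signatures embedded by the capture device) are more elaborate, and `SECRET_KEY` here is a placeholder, not how any actual device manages keys:

```python
import hmac
import hashlib

# Placeholder only: a real capture device would keep its signing key
# in tamper-resistant hardware, typically with public-key cryptography.
SECRET_KEY = b"demo-device-key"

def issue_credential(content: bytes) -> str:
    """Sign the content's digest at capture time; the tag travels with the file."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_credential(content: bytes, tag: str) -> bool:
    """Any later edit changes the digest, so the stored tag no longer matches."""
    expected = issue_credential(content)
    return hmac.compare_digest(expected, tag)
```

The point is the asymmetry: forging a valid tag for altered content is computationally infeasible, so authenticity becomes something you can check rather than something you must guess.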
In short, when we next joke that something "seems like another piece from Teacher Dou’s collection," this vigilance should not remain just a joke. Isolating AI-generated content from the real world is a problem we must solve; otherwise, humanity will inevitably face an era of ethical collapse.
Governing AI false content is about safeguarding the more precious foundations of trust, authenticity, and dignity in human society.
After all, what determines whether the future leads to prosperity or collapse is not AI's capabilities but our choices.
Doubao has secured a spot at this year's Spring Festival Gala. Shouldn't fraud prevention be part of the main theme? "Technology for good" is the right direction. Amid the noise about AI being used for harm, shouldn't Doubao offer positive solutions during the Spring Festival, such as helping left-behind elderly people recognize scams and improving platforms' ability to identify AI-generated content?
AI can integrate into our society, but it must not alter our reality.