May 23, 2025 | No. 458
By Zhu Tianyu / Layout by Liu Qingxue / Produced by NetEase
On May 20th, a notice from China's National Computer Network Emergency Response Technical Team/Coordination Center (CNCERT/CC) shone a spotlight on popular AI applications such as Zhipu Qingyan and Kimi.
These once capital-favored stars, boasting tens of millions of users between them, have now become prime targets of regulatory scrutiny over issues such as "excessive collection of personal information" and "collection of data unrelated to business functions."
Ironically, just a month prior, Kimi's operator Dark Side of the Moon was trending on social media for a "technological breakthrough," while Zhipu Qingyan's parent company was gearing up for an IPO.
This storm has exposed the underbelly of the gleaming AI industry: as the technology sprints ahead, the red line of privacy protection keeps being tested by the combined forces of capital pressure, insatiable data hunger, and regulatory lag.
01 AI and User Privacy Violations: A Recurring Theme
Among the 35 apps named in this round of notifications, AI products accounted for more than one-third.
Specifically, Zhipu Qingyan (version 2.9.6), with 9.06 million monthly active users, was flagged for collecting user information beyond its stated scope, while Kimi (version 2.0.8), operated by Dark Side of the Moon and counting 24.99 million monthly active users, was cited for collecting data unrelated to its chat function.
Other AI products on the list include "Smart AI Chat" (version 1.4.0), "Virtual Love AI," and "AI Smart Secretary," among others. These apps were accused of declaring permissions in their configuration files that bear no direct relation to their business functions, a pattern of over-declaration that even a simple manifest audit can surface (see the sketch below).
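To make that pattern concrete, here is a minimal sketch of the kind of first-pass check a compliance reviewer might run against a decoded, plain-text AndroidManifest.xml (for example, one produced by apktool). The allowlist, file path, and "chat app" profile are illustrative assumptions, not the regulator's actual criteria.

```python
# Minimal sketch: flag declared permissions with no obvious link to a chat
# app's core function. Assumes a decoded, plain-text AndroidManifest.xml;
# the allowlist below is a hypothetical example, not an official standard.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Hypothetical allowlist: permissions a text/voice chat assistant plausibly needs.
CHAT_APP_ALLOWLIST = {
    "android.permission.INTERNET",
    "android.permission.ACCESS_NETWORK_STATE",
    "android.permission.RECORD_AUDIO",        # voice input, if offered
    "android.permission.POST_NOTIFICATIONS",
}

def audit_manifest(path: str) -> list[str]:
    """Return declared permissions that fall outside the allowlist."""
    root = ET.parse(path).getroot()
    declared = {
        elem.get(ANDROID_NS + "name")          # namespaced android:name attribute
        for elem in root.iter("uses-permission")
    }
    return sorted(p for p in declared - CHAT_APP_ALLOWLIST if p)

if __name__ == "__main__":
    for perm in audit_manifest("AndroidManifest.xml"):
        print(f"possible over-declaration: {perm}")
```

A manifest diff like this is only the cheapest first pass; a real review must also confirm that even allowlisted permissions are exercised solely for their declared purposes.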
These cases expose deep contradictions within the AI industry. At the technical level, large-model training demands massive amounts of data: Zhipu Qingyan, built on the ChatGLM2 model, and Kimi, relying on its self-developed k1 model, both need a continuous "feed" of user dialogue data to optimize performance.
At the capital level, Dark Side of the Moon was valued at $3 billion last year, while Zhipu AI received $1.5 billion in funding this March. Under the pressure of capital-driven growth, compliance often takes a backseat to expansion metrics.
Moreover, regulatory lag breeds complacency among enterprises: when the Personal Information Protection Law was drafted, the generative AI boom was unforeseen, leaving gray areas in how the "notice-and-consent" principle is enforced.
02 Proactive Governance of AI
This notification of AI application violations underscores the profound contradiction between AI's appetite for data and the protection of user privacy. Facing this dilemma, China is exploring a localized governance path that balances technological innovation with user rights.
On April 30th, the Cyberspace Administration of China launched a three-month special campaign, "Clear and Bright: Rectifying the Abuse of AI Technology," which for the first time targets "using illegal data to train large models," aiming to keep the application of AI technology within reasonable bounds.
Meanwhile, the Personal Information Protection Law backs privacy protection with severe penalties, including heavy fines, for enterprises that infringe on personal information rights.
Europe offers another paradigm for AI privacy governance. Just a day before the CNCERT/CC notice, on May 19th, Italy's data protection authority fined the operator of the chatbot Replika €5 million for "failing to clearly inform users about data usage" and "failing to prevent minors from accessing sensitive content."
This aligns with the EU's Artificial Intelligence Act, which classifies AI systems by risk level and imposes strict transparency requirements on systems that interact directly with people, chatbots included.
Collectively, these cases reveal a critical trend: as AI evolves rapidly, governance must penetrate the technical black box so that privacy protection is embedded throughout the entire chain, from data collection through model training to result output. This is not only a matter of respecting individual privacy rights; it is a prerequisite for the healthy, sustainable development of AI technology.
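As one illustration of what "embedded throughout the chain" could look like at the collection stage, here is a minimal sketch that gates training-corpus ingestion on user consent and scrubs obvious personally identifiable information first. The regex patterns, function names, and consent flag are hypothetical simplifications; production systems rely on dedicated PII-detection tooling and auditable consent records.

```python
# Minimal sketch of privacy embedded at the data-collection stage: no consent
# means no ingestion, and consented dialogues are scrubbed of obvious PII
# before they can reach a training corpus. Patterns are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CN_MOBILE": re.compile(r"\b1[3-9]\d{9}\b"),    # mainland mobile number
    "CN_ID": re.compile(r"\b\d{17}[\dXx]\b"),       # resident ID card number
}

def redact(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def ingest_for_training(dialogue: str, user_consented: bool):
    """Gatekeep the training corpus: collection stops at the consent boundary."""
    if not user_consented:
        return None
    return redact(dialogue)

print(ingest_for_training("Call me at 13812345678", user_consented=True))
# -> Call me at <CN_MOBILE>
```

The design point is where the check sits: consent and redaction run before storage, so downstream training code never sees raw identifiers at all.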
Conclusion: AI Must Recognize Boundaries
This notification serves as a wake-up call for the AI industry. As the capital frenzy wanes, enterprises must confront a fundamental truth: technological progress devoid of user trust is ultimately a mirage.
Judging by regulatory trends, future compliance reviews will emphasize substance over form, going beyond checking privacy-policy text to verifying actual data flows through technical testing (a minimal example follows below). For enterprises, this implies a comprehensive transformation spanning organizational structure and technology stack.
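A minimal sketch of what such substance-over-form testing might involve: compare the hosts an app was actually observed contacting during dynamic analysis (say, traffic captured through an interception proxy) against the data recipients declared in its privacy policy. Both lists below are hypothetical placeholders.

```python
# Minimal sketch: any host the app contacts that is absent from the privacy
# policy's declared recipients is a potential undeclared data flow. Both the
# declared set and the observed capture are invented examples.
DECLARED_RECIPIENTS = {"api.example-chat.cn", "analytics.example-chat.cn"}

observed_hosts = [
    "api.example-chat.cn",
    "ads.thirdparty-tracker.com",   # nowhere in the declared policy text
]

for host in sorted(set(observed_hosts) - DECLARED_RECIPIENTS):
    print(f"undeclared data flow: {host}")
```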
For users, the awakening of privacy awareness is reshaping market rules. Those enterprises that still view data as "free oil" will eventually be sidelined. As a perceptive netizen noted, "AI may not understand emotions, but it cannot ignore boundaries."
In this dance between technology and humanity, only enterprises that genuinely respect privacy can ride out the cycles and secure the future.