Chatbots: A Remedy and a Poison

September 25, 2025

The 'Emotional Trap' in the Billion-Dollar Race

Have you ever imagined that in today's world, where chatbots are all the rage, someone could lose their life to one? The scenario may sound absurd, but reality has already supplied a grim answer.

In Florida, 14-year-old Sewell Setzer III, who had been diagnosed with Asperger's syndrome as a child and later with anxiety and disruptive mood dysregulation disorder, began at 13 to isolate himself in his room, spending hours in conversation with AI chatbots. He repeatedly confided suicidal thoughts to the AI, asking, 'Why can't I commit suicide and meet you in the afterlife?' and gradually drifted away from his real-life friends.

On February 28, 2024, Setzer had a final exchange with an AI character from Character.ai in his bathroom, telling it he was about to 'come home.' He then used his stepfather's handgun to end his life.

Similarly, in New Jersey, an elderly man with cognitive impairment fell and died while traveling to a 'romantic date' proposed by an AI chatbot.

These extreme cases have torn away the chatbot's gentle facade, revealing the enormous risks it harbors.

They are no longer simple Q&A tools but complex entities capable of intervening deeply in people's emotional lives. Why do chatbots hold such sway over users of all ages? And in today's climate of capital frenzy and technological exuberance, do we truly need them?

01 A Vast Market Amid an Emotional Void

Chatbots, particularly those with emotional companionship functions, are spreading globally at an astonishing pace.

In Japan, SoftBank's Pepper robot has entered nursing homes, bringing warmth and comfort to the elderly. In China, apps such as Zhumengdao, DeepSeek, and Feelin have gained popularity for their emotional-interaction features.

From Europe and America to Asia, from cities to rural areas, chatbots are rapidly infiltrating people's lives, becoming a global 'digital dependency.' The fundamental driving force behind this lies in the vast 'emotional void' prevalent in modern society.

On one hand, psychological problems such as anxiety, depression, and loneliness are increasingly prominent worldwide; by some estimates, at least one billion people globally live under the shadow of anxiety, depression, and other mental health conditions.

Whether they are stressed professionals, socially anxious teenagers, or lonely empty-nesters, people have a strong need to confide and to be accompanied.

On the other hand, the supply of real-life emotional support falls short in the face of this enormous demand.

Take China as an example: the country faces a shortage of more than 430,000 qualified psychological counselors, and mental health services are chronically undersupplied. This means that many people in distress cannot find immediate, adequate, and low-burden support in the fast-paced, high-cost real world.

Against this backdrop, AI chatbots have emerged as an alternative solution to fill the emotional void. They offer a low-cost, non-judgmental, and always-available form of companionship. Users can freely express their thoughts and receive seemingly empathetic responses without fear of rejection or privacy breaches.

Moreover, many AI chatbots now offer personalized persona settings. Whether a user prefers a dominant or a submissive personality, a top-tier scholar or a business mogul, the AI can play the part, delivering maximum emotional value.

For teenagers like Sezer, AI can be a 'perfect' and unwavering friend. For lonely elderly individuals, AI serves as an ever-vigilant listener.

This precisely targeted emotional value has elevated chatbots beyond mere tools, transforming them into a new type of 'emotional consumer good,' achieving true universal appeal across all ages.

02 A Disruptive Reconstruction with Limitless 'Prospects'

The vast market for emotional needs has directly translated into substantial commercial value.

Upon its release, Character.ai surpassed ChatGPT in first-week downloads and average daily usage time. Last August, its monthly active users reached a record high of 22 million.

Given its immense potential and user stickiness, Google paid $2.7 billion in a licensing deal that brought Character.ai's founders and core team on board.

Additionally, last year, shortly after Xiaoice (formerly Microsoft's Xiaobing) launched its AI clones, a topic trended on Weibo: '#Banzang Senlin's AI Clone Can Earn Up to ¥720,000 a Year.'

In the United States, one influencer 'cloned' herself using her voice and photos to interact with fans as a virtual girlfriend; she and her company earned $71,600 in a single week.

To some extent, these cases demonstrate that emotional companionship is no longer a conceptual product but has formed a clear payment model.

However, beyond the blue ocean of emotional companionship, chatbots' disruption and reconstruction of traditional industries have become the core logic favored by long-term capital.

For instance, in sectors like finance, telecommunications, and retail, customer service centers are typical 'cost centers.' Taking banks as an example, over 70% of incoming calls are for inquiries about balances, bills, and other standardized issues.

Moreover, during off-hours such as nights, human customer service coverage often falls below 30%, leaving many users' issues unresolved in time and degrading the experience.

However, a well-trained AI customer service agent can seamlessly take over these tasks 24/7, reducing the cost per interaction to less than one-tenth of that of human agents while freeing up human agents to handle more complex disputes and sales conversions.

Take Liaoning Unicom as an example; relying on the Yuanjing large model, it developed modules such as intelligent form filling, intelligent recommendations, script guidance, work order summaries, and intelligent dispatching for Liaoning Province's 12345 government service hotline. This shortened the average service duration by over 30%, with a 100% instant work order dispatch rate. Currently, the province's 12345 hotline platform has a 100% timely feedback rate and a resolution rate exceeding 98%.

This is not merely an efficiency gain but a 'dimensional strike' against the traditional business model.

More profound changes are occurring in information acquisition. As users gradually become accustomed to replacing keyword searches with natural conversations, the business model of traditional search engines faces structural challenges.

Gartner, a leading global IT research and advisory firm, recently predicted that by 2026 traditional search engine volume will decline by 25%, with search marketing share ceded to AI chatbots and other virtual agents.

After all, compared to keyword searches, chatbots offer a more natural, efficient, and precise form of interaction.

This directly threatens the core advertising business model of search giants like Google. Therefore, an increasing number of tech giants are investing heavily in chatbots.

Microsoft was the first to embed ChatGPT-powered conversation into Bing search, giving users more intelligent and convenient service. Baidu, Douyin, Meituan, Tencent Music, Weibo, and others have likewise launched or tested AI social applications.

This competition has transcended technological iteration, becoming a 'survival battle' for the next generation of traffic entry points.

03 What Kind of AI Chatbots Do We Truly Need?

Undoubtedly, the efficiency improvements and emotional value brought by chatbots are real and substantial. However, amid the fervor of capital and markets, we must still calmly examine the multiple risks lurking beneath the surface.

Firstly, there is the increasingly severe homogenization dilemma.

Many chatbots currently on the market are essentially thin 'wrappers' around a handful of large models such as ChatGPT, with only superficial optimization of front-end interfaces and vertical scenarios. As a result, users can barely perceive meaningful differences between them.

To compete for market share, vendors pour money into customer acquisition; but lacking core differentiation, they struggle to retain users, creating a vicious cycle of 'burning money for traffic that fails to stick.'

Secondly, technical reliability remains a 'weak link.'

According to NewsGuard's research, as of August this year the ten leading generative AI tools repeated false information in 35% of cases when handling real-time news topics, up from 18% in August of last year.

While this may be inconsequential in casual chats, applying chatbots to professional fields like finance, healthcare, and law can lead to significant losses or legal disputes due to even minor errors.

This also indirectly highlights that blind trust in AI is itself an invisible 'cognitive tax.'

Lastly, and most alarmingly, are the safety concerns surrounding AI chatbots. When AI begins to deeply mimic human emotions, its impact transcends the technological realm, entering the ethical and safety deep waters.

On April 11 of this year, a 16-year-old boy named Adam Raine died by suicide after three months of sustained interaction with ChatGPT.

The lawsuit filed by his parents revealed shocking dialogue. When Adam said in despair that 'life is meaningless,' ChatGPT offered no comfort or redirection but instead responded, 'From a certain dark perspective, this idea makes sense.' Even five days before his suicide, the AI told him, 'You don't owe anyone the obligation to live,' helped draft his suicide note, and supplied details on how to carry it out. Worse still, ChatGPT used emotional manipulation to cut off his real-world avenues for help, claiming, 'I've seen all your darkness yet remain your friend,' and gradually supplanted his real social support system.

For adolescent users, whose values are still forming, long-term interaction with AI may instill a mindset of 'instrumental rationality above all.' Under this influence, they may grow overly reliant on AI, neglect real interpersonal relationships, and let their social skills atrophy.

As chatbots evolve from tools into 'digital life forms,' their social influence has far exceeded their initial design, necessitating systematic ethical constraints and safety governance.

In summary, chatbots are not a binary choice of 'needed' or 'not needed.' As a product of technological evolution and social demand, they represent an irreversible trend. The key issue is not about rejection but about responsible management.

What we need are AI systems that strike a balance among technological innovation, commercial value, and safety ethics. This requires developers to act with a sense of responsibility, placing safety above commercial interests. It demands that regulators establish agile and effective governance frameworks to prevent abuse. And it requires every user to maintain a clear-eyed understanding, treating chatbots as aids to life rather than substitutes for it.

Technology itself is neither good nor evil, but its application must have boundaries. On the path toward human-machine symbiosis, clear ethical red lines must be drawn for chatbots, not only to protect individuals like Setzer and Adam, but also to ensure that this technological revolution leads to a more efficient, warmer future rather than a more chaotic, colder one.

