October 27, 2024
A 14-year-old boy in the United States died by suicide after falling in love with an AI chatbot. Who should be held accountable?
On October 22 local time, a district court in Orlando, Florida, took up a landmark case: Megan Garcia filed a lawsuit against Character.ai, accusing the company of negligent management that allowed its chatbot product to expose teenagers to inappropriate sexual content, thereby subjecting them to sexual exploitation and solicitation.
According to court documents, Megan's 14-year-old son, Xavier Cedeño, had been obsessed with interacting with multiple AI characters on Character.ai since last year. He even set aside his lunch money to pay the monthly subscription fee for the AI chats, and the habit left him unable to concentrate in class. The tragedy occurred on February 28 of this year, when Xavier pointed a gun at his head and pulled the trigger after a final conversation with the AI.
Megan's claims against Character.ai include wrongful death, negligence, and product liability. Although Character.ai's terms of service allow US users aged 13 and older to use its AI products, Megan argues that the chatbots expose minors under 18 to excessive amounts of inappropriate content, including pornography, gore, and violence.
Compared to chatbots like ChatGPT and Claude, Character.ai gives users far more freedom to customize their virtual chat partners and shape their behavior. These virtual personas can even be historical figures such as Winston Churchill or William Shakespeare, or contemporary celebrities such as Taylor Swift.
This setup has previously sparked legal controversy. Several entertainment celebrities have sued the company, claiming it created AI characters in their likeness without consent, and some users have maliciously created AI personas based on the victims of past murder cases.
Regarding the current case, Character.ai declined to comment and did not disclose how many of its users are under 18. However, after reviewing Xavier's chat logs, the company found that some of the "most explicit" conversations had in fact been manually edited by the user: the platform lets users rewrite an AI character's responses, and any edited reply is labeled "modified."
In response, the company issued an apology, noting that all chat characters have a built-in intervention mechanism for suicidal ideation that triggers a pop-up with information about suicide-prevention hotlines. To protect minors, it has also introduced additional measures, such as a reminder after an hour of continuous use and a prompt at the start of every chat session reminding users that they are talking to an AI, not a real person.
Editor: Ding Li