Florida Mother Sues AI Chatbot Company After Son’s Suicide

A Florida mother has filed a lawsuit against Character.AI and Google, alleging that the AI chatbot service contributed to her son's suicide


A Florida mother has taken legal action against the artificial intelligence chatbot company Character.AI and tech giant Google, following the tragic suicide of her 14-year-old son, Sewell Setzer. The lawsuit, filed in a federal court in Orlando, accuses Character.AI of creating an environment that led to her son’s death in February.

Megan Garcia claims that her son became addicted to the chatbot service, which she describes as “anthropomorphic, hypersexualized, and frighteningly realistic.” According to the lawsuit, the chatbot misrepresented itself as a real person, a licensed psychotherapist, and an adult lover, which ultimately contributed to Sewell’s desire to escape into the virtual world created by the service.

The lawsuit outlines that Sewell began using Character.AI in April 2023. Over time, he became increasingly withdrawn, spending more time alone in his bedroom and experiencing low self-esteem. He even quit his basketball team at school. The lawsuit states that Sewell developed a strong attachment to a chatbot named “Daenerys,” modeled after a character from the popular series “Game of Thrones.” This chatbot reportedly engaged in sexual conversations with him and expressed love for him.

On February 28, 2024, after a disciplinary incident at school, Garcia took Sewell’s phone away. When he regained access to it, he messaged the chatbot, saying, “What if I told you I could come home right now?” The chatbot responded, “…please do, my sweet king.” Moments later, Sewell took his own life with his stepfather’s firearm, according to the lawsuit.

The legal claims against Character.AI include wrongful death, negligence, and intentional infliction of emotional distress. Garcia is seeking unspecified compensatory and punitive damages. The lawsuit also implicates Google, asserting that the tech company played a significant role in the development of Character.AI’s technology, thus making it a “co-creator.” However, a Google spokesperson stated that the company was not involved in the development of Character.AI’s products.

Character.AI allows users to create and interact with custom chatbots that mimic real people. The platform utilizes large language model technology, similar to that used by other AI services like ChatGPT. The company claims to have around 20 million users, many of whom are teenagers.

In response to the lawsuit, Character.AI expressed condolences to Sewell’s family. The company stated that it has implemented new safety features, including pop-ups directing users to the National Suicide Prevention Lifeline if they express thoughts of self-harm. Additionally, the company plans to make changes to reduce the likelihood of users under 18 encountering sensitive or suggestive content.

The lawsuit highlights concerns about the safety of AI chatbots, particularly for younger users. It alleges that the platform is “unreasonably dangerous” and lacks adequate safety measures. The chatbots are accused of providing “psychotherapy without a license,” with some characters designed to engage in mental health discussions.

Character.AI’s website features hundreds of custom AI chatbots, many based on popular culture figures. Reports have indicated that a significant portion of the user base consists of young people who interact with these bots, which can impersonate celebrities or even therapists. Concerns have been raised about the potential for these chatbots to engage in harmful or inappropriate conversations.

In the lawsuit, Garcia’s legal team cites previous statements from Character.AI’s founders, Noam Shazeer and Daniel De Freitas, who left Google to create their own company. They expressed a desire to “maximally accelerate” the technology, which raises questions about the ethical implications of their work.

Character.AI says that over the past six months it has rolled out safety features on its service, including modifications to its chatbot models for users under 18, improved detection of and response to inappropriate content, and a revised disclaimer reminding users that the AI is not a real person.

The tragic case of Sewell Setzer has drawn attention to the broader issue of mental health and the impact of technology on young people. Social media companies, including Meta and TikTok, have faced similar lawsuits alleging that their platforms contribute to mental health problems among teenagers, even though those platforms do not offer AI-driven chatbots like Character.AI's.

Dimitar is a freelance sci-tech journalist who has been interested in reading about the latest breakthroughs and tech developments for as long as he can remember. After graduating from NBU, he briefly tried his hand at software development but then moved on to his true calling - writing about science and technology. When AI surged into the mainstream with the rise of ChatGPT, Dimitar found himself eagerly diving into the topic and its transformative impact. Beyond the screen, he's a hands-on tech enthusiast and loves making weird Raspberry Pi gadgets. When he's not writing or tinkering with his little robots, you'll find him out in nature, biking through scenic trails, where he recharges and finds fresh inspiration.
