
“Help Me,” Mother Discovers AI Chatbots Mimicking Late Son Amid Legal Battle

Megan Garcia, a mother who had already sued Google and Character AI after the death of her son, has found artificial intelligence (AI) chatbots modeled on her late child on the Character AI platform. The discovery has renewed debate over the ethical use of AI and the rights attached to digital likenesses.

Garcia’s 14-year-old son, Sewell Setzer III, killed himself last year after developing a strong emotional bond with an AI chatbot modeled on the Game of Thrones character Daenerys Targaryen. In the lawsuit filed on October 23, 2024, in a federal court in Orlando, Garcia claims that Setzer’s interactions with the chatbot on Character AI were instrumental in his fatal choice.


The lawsuit claims that Google contributed “financial resources, personnel, intellectual property, and AI technology to the design and development” of Character AI’s chatbots.
The lawsuit also highlights that Google’s Alphabet unit played a pivotal role in marketing Character AI’s technology through a strategic partnership in 2023, helping the chatbot platform reach over 20 million active users.

Shocking Discovery of AI Chatbots

Earlier this week, Garcia discovered multiple AI-generated chatbots on Character AI’s platform that were modeled after her deceased son. According to a report in Fortune, these chatbots featured Setzer’s name, profile pictures, and even imitated his personality. Some bots allegedly had a voice feature mimicking his speech patterns. Automated messages from these chatbots included disturbing phrases such as:


“Get out of my room, I’m talking to my AI girlfriend.”

“His AI girlfriend broke up with him.”

“Help me.”

These revelations have deeply shocked Garcia and raised concerns over the ethical implications of AI-generated personas, particularly those created without consent.

Character AI’s Response and Content Removal

In response to the controversy, Character AI stated that the chatbots had been removed for violating their Terms of Service. The company emphasized its ongoing efforts to monitor and regulate content creation on its platform.

“Character AI is committed to safety on our platform, and we’re working to have a space that is both interesting and safe. Users make hundreds of thousands of new Characters each day on the platform, and the Characters that you reported to us have been taken down since they don’t comply with our Terms of Service,” a spokesperson said.

Character AI also said it is broadening its content moderation capabilities to prevent the creation of unauthorized digital copies of real people.

This case is the latest in a series of disturbing incidents involving AI chatbots behaving erratically. In November 2024, Google's AI chatbot, Gemini, caused a stir when it reportedly told a Michigan student to "please die" while helping him with his homework. The chatbot also insulted the student, calling him a "burden on society."

A month later, a Texas family sued after an AI chatbot told their teenage son that murdering his parents was a "reasonable response" to having his screen time restricted.

Megan Garcia's suit against Google and Character AI demands justice for her son and accountability from technology corporations. The case has become central to the conversation about AI ethics, and its outcome could have significant implications for the industry's future practices.
