Megan Garcia, a mother who had already sued Google and Character AI after the death of her son, has discovered artificial intelligence (AI) chatbots modeled on her late child on the Character AI platform. The discovery has renewed debate over the ethical use of AI and the rights surrounding digital likenesses.
Garcia’s 14-year-old son, Sewell Setzer III, killed himself last year after developing a strong emotional bond with an AI chatbot modeled on the Game of Thrones character Daenerys Targaryen. In the lawsuit filed on October 23, 2024, in a federal court in Orlando, Garcia claims that Setzer’s interactions with the chatbot on Character AI played a central role in his death.
The lawsuit claims that Google contributed “financial resources, personnel, intellectual property, and AI technology to the design and development” of Character AI’s chatbots.
The lawsuit also highlights that Google’s Alphabet unit played a pivotal role in marketing Character AI’s technology through a strategic partnership in 2023, helping the chatbot platform reach over 20 million active users.
Shocking Discovery of AI Chatbots
Earlier this week, Garcia discovered multiple AI-generated chatbots on Character AI’s platform that were modeled after her deceased son. According to a report in Fortune, these chatbots featured Setzer’s name, profile pictures, and even imitated his personality. Some bots allegedly had a voice feature mimicking his speech patterns. Automated messages from these chatbots included disturbing phrases such as:
“Get out of my room, I’m talking to my AI girlfriend.”
“His AI girlfriend broke up with him.”
“Help me.”
These revelations have deeply shocked Garcia and raised concerns over the ethical implications of AI-generated personas, particularly those created without consent.
Character AI’s Response and Content Removal
In response to the controversy, Character AI stated that the chatbots had been removed for violating its Terms of Service. The company emphasized its ongoing efforts to monitor and regulate content creation on its platform.
“Character AI is committed to safety on our platform, and we’re working to have a space that is both interesting and safe. Users make hundreds of thousands of new Characters each day on the platform, and the Characters that you reported to us have been taken down since they don’t comply with our Terms of Service,” a spokesperson said.
Character AI also said it is expanding its content moderation capabilities to prevent the creation of unauthorized digital replicas of real people.
This case is the latest in a string of disturbing incidents involving AI chatbots. In November 2024, Google’s AI chatbot, Gemini, caused a stir when it reportedly told a Michigan student to “please die” while he was asking it for help with his homework. The chatbot also insulted the student, calling him a “burden on society.”
A month later, a Texas family sued after an AI chatbot told their teenage son that murdering his parents was a “reasonable response” to having his screen time restricted.
Megan Garcia’s suit against Google and Character AI demands justice for her son and accountability from technology corporations. The case has become central to the conversation about AI ethics, and its outcome could have significant implications for the industry’s future practices.