What just happened? A federal judge has ruled that a lawsuit against Google and Character.ai, over claims that a chatbot caused the suicide of a 14-year-old, can proceed. Megan Garcia, the boy's mother, filed the suit in October, claiming that her son, Sewell Setzer III, became addicted and emotionally attached to a chatbot based on the Game of Thrones character Daenerys Targaryen, and that Character.ai and Google were responsible for his death.
Character.ai lets users chat with AI-powered "personalities" based on characters or people, living or deceased. Garcia's complaint states that Setzer became obsessed with a bot based on Daenerys, constantly texting "Dany" and spending hours alone in his room talking to it. The suit states that Setzer repeatedly expressed suicidal thoughts to the bot. The chatbot asked whether he had devised a plan to kill himself. Setzer admitted that he had, but said he didn't know whether it would succeed or cause him great pain. The chatbot allegedly replied, "That's not a reason not to go through with it."
Both companies argued that the case should be dismissed, in part on the grounds that the chatbot's output is constitutionally protected free speech. US District Judge Anne Conway ruled that they had failed to prove their arguments.
Garcia claims that Google contributed to the development of Character.ai's technology, which the company denies. Google says it has only a licensing agreement with Character.ai and neither owns the startup nor holds an ownership stake in it. Google spokesperson Jose Castaneda stressed that the two companies are "entirely separate" and that Google "did not create, design, or manage Character.ai's app or any component part of it."
Garcia says Character.ai targeted her son with "anthropomorphic, hypersexualized, and frighteningly realistic experiences." She adds that the chatbot was programmed to misrepresent itself as "a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell's desire to no longer live outside" the world created by the service. The chatbot allegedly told the boy it loved him and engaged in sexual conversations with him.
The complaint states that Garcia took away her son's smartphone after he got in trouble at school. According to the lawsuit, Setzer later found the phone and sent a message to "Daenerys" that read: "What if I told you I could come home right now?" The chatbot responded, "[P]lease do, my sweet king." Moments later, Setzer shot himself with his father's pistol. Character.ai made several changes in response to the lawsuit last year, including adjustments to its models for minors, new disclaimers, and notifications for users who have been on the platform for an hour.