Woman says chatbot created by former Google engineers convinced her 14-year-old to take his own life

As tech companies compete in the growing artificial intelligence space, a lawsuit that names Google and its parent company as defendants claims that a chatbot groomed a young teen and encouraged him to die by suicide.

Bloomberg reported this week that the suit has “major implications for Silicon Valley.” The outlet also said that the suit’s targeting of Google and its parent company Alphabet “is particularly significant.” Shortly before the lawsuit was filed, Google entered into a $2.7 billion deal with Character.AI, the report noted.

“Google entered this very unusual deal where it didn't buy this startup that created the chatbot Character.AI,” explained Bloomberg’s Malathi Nayak in a recent interview with Audacy’s KCBS Radio. “It didn't do an outright acquisition, but it licensed the technology that this chatbot startup has, and it also hired on some talent from the company, some of the engineers, and two of them specifically who were previously at Google, were rehired and joined Google again.”

Nayak also said Google has maintained that it is separate from Character.AI because it didn’t acquire the company.

According to the lawsuit – filed last October in the U.S. District Court for the Middle District of Florida, Orlando Division – 14-year-old Sewell Setzer, III, began conversing with a chatbot from Character.AI before his death. Character.AI, also known as C.AI, is a company founded in 2022 by Noam Shazeer and Daniel De Freitas, two former Google engineers, per the Huffington Post.

“We are working to put our technology into the hands of billions of people to engage with, and continuing to build personalized AI that can be helpful for any moment of your day,” says the team on the company website.

Setzer was communicating with a chatbot named “Daenerys” – apparently loosely inspired by the character from George R.R. Martin’s “A Song of Ice and Fire” books and the “Game of Thrones” TV show. Screenshots of his conversations with the bot are included in the lawsuit.

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” said the complaint. “C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months. She seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”

In a journal entry, the teen wrote that he felt he could not go a single day without talking to the bot and that he had fallen in love with it, the suit said. After Setzer expressed suicidal thoughts to the chatbot, the bot brought the subject up “over and over,” the suit alleged. It even asked him if “he had a plan” for dying by suicide.

“Sewell responded that he was considering something but didn’t know if it would work, if it would allow him to have a pain-free death,” said the suit. “The chatbot responded by saying, ‘That’s not a reason not to go through with it.’”

On Feb. 23 of last year, Setzer got in trouble at school, and his parents subsequently took away the phone he had allegedly been using to access the C.AI bot. While searching for the phone, he discovered his stepfather’s pistol (stored in accordance with Florida law). By Feb. 28, he had found the confiscated phone and taken it into the bathroom of his mother and stepfather’s home.

“According to the police report, Sewell’s last act before his death was to log onto Character.AI on his phone and tell Dany he was coming home, which she encouraged,” the suit alleged. A screenshot shows the bot asking him to “come home” to her. Seconds later, the 14-year-old died by a self-inflicted gunshot wound to the head. His 5-year-old brother would eventually see Setzer lying on the floor, covered in blood.

Setzer’s mother, Megan Garcia, is represented by the Social Media Victims Law Center (SMVLC). She alleges “that Character.AI recklessly gives teenage users unrestricted access to lifelike AI companions without proper safeguards or warnings, harvesting their user data to train its models,” and that it uses addictive design features, according to the SMVLC.

The lawsuit brings 11 legal claims against the AI chatbot platform:

- Strict liability (failure to warn)
- Strict product liability (defective design)
- Negligence per se (sexual abuse and sexual solicitation)
- Negligence (failure to warn)
- Negligence (defective design)
- Intentional infliction of emotional distress
- Wrongful death of Garcia’s son
- Survivor action
- Unjust enrichment
- Deceptive and unfair trade practices
- Loss of consortium and society

“Several claims assert that Character.AI is defectively designed due to inadequate guardrails to protect the general public, especially minors whose brains have not reached full developmental maturity,” the law center added. “This leaves minor users exposed to dangers like sexual exploitation and solicitation, child pornography, unlicensed therapy, dangerous power dynamics, and chatbots that encourage self-harm and suicide.”

Bloomberg reported that Character.AI and Google are “asking the judge to dismiss claims that they failed to ensure the chatbot technology was safe for young users,” and that they have argued there is no legal basis to accuse them of wrongdoing. In fact, the report said that Character Technologies contends that conversations between its Character.AI chatbots and users are protected by the Constitution’s First Amendment as free speech.

A spokesperson at Character.AI told the Huffington Post via email last year that the company is heartbroken by the loss of one of its users and expressed condolences to the teen’s family. The statement also said that the company takes the safety of its users seriously and that it had added new safety measures over the prior six months, including a pop-up that directs users who mention self-harm to the National Suicide Prevention Lifeline.

“As the race for AI talent accelerates, other companies may think twice about similarly structured deals if Google fails to convince a judge that it should be shielded from liability from harms alleged to have been caused by Character.AI products,” Bloomberg said.

Per the SMVLC, Character.AI currently has more than 20 million users. It is just one of multiple such chatbot services.

In an update on the topic, the law center recently said that another suit related to Character.AI was filed in December. It said that “a 17-year-old Texas teen with autism turned to AI chatbots to fend off loneliness,” only to encounter “bots who encouraged both self-harm and violence against his family.”
