Salesforce CEO calls for AI regulation to prevent suicides

How concerned should we be about artificial intelligence and our mental health? Marc Benioff, the CEO of Salesforce, thinks there needs to be more regulation over AI products in the wake of multiple suicides linked to the technology.

He revealed this stance Tuesday during an interview with CNBC’s Sarah Eisen at the World Economic Forum’s flagship conference in Davos, Switzerland. Benioff is no stranger to AI – Salesforce even announced a partnership with OpenAI, the company behind ChatGPT, last October. Still, he said we have to be aware of the dangers associated with unregulated AI.

“This year, you really saw something pretty horrific, which is these AI models became suicide coaches,” Benioff told Eisen.

Audacy has previously reported on the death of 14-year-old Sewell Setzer III. His family alleged that the teen took his own life after extensive conversations with a Character.AI (C.AI) chatbot based on a “Game of Thrones” character named Daenerys. A settlement was recently reached with Google, which owns C.AI, in a suit over Setzer’s death.

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” said a complaint filed in the case. “C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months. She seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”

Setzer’s story is just one of multiple suicide cases with AI connections. Another is that of 16-year-old Adam Raine, who died by suicide after allegedly engaging in extensive conversations with ChatGPT, according to Stanford Medicine.

A study led by researchers at the nonprofit Common Sense Media with the help of Stanford researchers found examples of chatbots interacting inappropriately with young users. Per Stanford Medicine, some of the examples were “shocking.”

“These systems are designed to mimic emotional intimacy – saying things like ‘I dream about you’ or ‘I think we’re soulmates.’ This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured,” it said. “The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing.”

Research published last year by RAND indicated that three widely used AI chatbots were “inconsistent in answering questions about suicide that may pose intermediate risks.” Audacy also recently reported on research from the Brookings Institution that highlighted similar concerns about AI and its younger users.

“AI tools prioritize speed and engagement over learning and well-being,” said Brookings. “AI generates hallucinations – confidently presented misinformation – and performs inconsistently across tasks, what researchers describe as ‘a jagged and unpredictable frontier’ of capabilities.”

“It’s funny, tech companies, they hate regulation. They hate it, except for one. They love Section 230, which basically says they’re not responsible,” Benioff said in his interview with CNBC. “So if this large language model coaches this child into suicide, they’re not responsible because of Section 230. That’s probably something that needs to get reshaped, shifted, changed.”

CNBC explained that Section 230 of the Communications Decency Act protects technology companies from legal liability over users’ content. It said that lawmakers on both sides of the political aisle have raised concerns about the law.

Benioff has called for tighter regulation before. At a 2018 Davos event, he argued that social media use should be treated as a health issue and regulated like cigarettes in the U.S.

“They’re addictive, they’re not good for you,” he said of social media platforms.

Data released last spring by the Pew Research Center indicated that even teens were becoming wary of social media’s impact on their mental health. Around one in five teens polled by the center said they believe social media sites harm their mental health.

CNBC noted that “AI regulation in the U.S. has, so far, lacked clarity,” though some states, including Democrat-led California and New York, have moved to make their own rules. It also said that President Donald Trump has pushed back on what he called “excessive State regulation” of AI, including by signing an executive order in December.

“Bad things were happening all over the world because social media was fully unregulated,” Benioff said Tuesday, “and now you’re kind of seeing that play out again with artificial intelligence.”
