PA Attorney General leads 35-state push against xAI over deepfake concerns

MIAMI, FLORIDA - JANUARY 26: In this photo illustration, the Grok website is seen on a computer screen on January 26, 2026 in Miami, Florida. The European Commission has launched an investigation into Elon Musk's X over concerns its AI tool Grok was used to create sexualized images of real people. (Photo illustration by Joe Raedle/Getty Images)

HARRISBURG, PA – Pennsylvania Attorney General Dave Sunday is leading a bipartisan coalition of 35 attorneys general demanding that Elon Musk’s AI company, xAI, immediately strengthen safeguards for its chatbot, Grok. The group warns that the tool is being used to generate nonconsensual sexual imagery and child sexual abuse material (CSAM) with "minimal effort."

In a formal letter sent to the company, the coalition expressed alarm over how Grok’s current infrastructure allows for the rapid creation and sharing of harmful, exploitative content. The attorneys general argue that the lack of robust guardrails has left victims vulnerable to harassment with almost no way to prevent or remove the damaging material once it enters the digital space.

Key concerns from the coalition:

Ease of Use: The technology allows users to generate hyper-realistic, harmful images without technical expertise.

Victim Impact: The group highlights that those targeted by deepfakes face significant psychological harm and have few tools for recourse.

Public Safety: The bipartisan group is demanding concrete technical changes to ensure the platform cannot be weaponized for sexual exploitation.

"The technology has enabled harassment and exploitation, leaving victims with little ability to prevent or remove damaging material," the coalition stated in their demand for action.

The move marks a significant escalation in state-level pressure on AI developers to take responsibility for the content their models produce.
