WASHINGTON (AP) — President Donald Trump said Friday that he was ordering all federal agencies to stop using Anthropic's technology following the company’s unusually public dispute with the Pentagon over artificial intelligence safety.
Defense Secretary Pete Hegseth also said he was designating Anthropic as a supply chain risk, a move that could prevent U.S. military vendors from working with the company.
Hegseth’s remarks, delivered in a social media post, came shortly after the Pentagon’s deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences. Trump’s comments came just over an hour before that deadline — and nearly 24 hours after CEO Dario Amodei said his company “cannot in good conscience accede” to the Defense Department’s demands.
Calling the company “Leftwing nut jobs,” the president said Anthropic made a mistake trying to strong-arm the Pentagon. Trump wrote on Truth Social that most agencies must immediately stop using Anthropic's AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote.
At issue in the defense contract was a clash over AI’s role in national security. Anthropic had said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”
Anthropic, maker of the chatbot Claude, could afford to lose the contract. But an ultimatum this week from Hegseth posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups. Military officials had warned Anthropic earlier in the week they could deem it “a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
Trump also said Anthropic could face “major civil and criminal consequences” if it’s not helpful in the phase-out period. Anthropic didn’t immediately reply to a request for comment on the new developments.
For hours before the president’s decision, top Trump appointees from the Pentagon and the State Department took to social media to criticize Anthropic and slam its reluctance to acquiesce to the administration’s demands.
Sean Parnell, the Pentagon’s top spokesman, posted on social media that the Pentagon wants to “use Anthropic’s model for all lawful purposes,” but he and other officials haven’t detailed how they want to use the technology. Tesla CEO Elon Musk also sided with Trump’s administration on Friday, saying on his social media platform X that “Anthropic hates Western Civilization.”
Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, called the entire negotiation into question and said the move “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
The dispute stunned AI developers in Silicon Valley, where a growing number of workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand in open letters and other forums.
“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter from some OpenAI and Google employees says. “They’re trying to divide each company with fear that the other will give in.”
And in a surprise move from one of Amodei’s fiercest rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview, suggesting that OpenAI and most of the AI field share the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC.
Retired Air Force Gen. Jack Shanahan wrote on social media that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.
“They’re not trying to play cute here,” he wrote Thursday on LinkedIn.
The move is likely to benefit Elon Musk’s competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to two other competitors, Google and OpenAI, that also have contracts to supply their AI tools to the military.
___
O'Brien reported from Providence, Rhode Island.