Despite President Trump’s directive to phase out Anthropic’s AI tools, the U.S. military reportedly used the company’s Claude platform for intelligence and targeting during major airstrikes on Iran. The action followed a dispute over AI safeguards, leading to Anthropic being designated a national security risk, while OpenAI secured a new Pentagon deal. Experts warn the operational cost of replacing embedded AI is severe.
The U.S. Central Command reportedly used Anthropic’s Claude for intelligence assessments, target identification, and battle simulation during recent strikes on Iran. This occurred hours after President Donald Trump ordered agencies to begin a six-month phase-out of the company’s AI tools.
Trump’s directive followed a breakdown in negotiations between Anthropic and the Pentagon over the use of commercially developed AI. Anthropic CEO Dario Amodei stated the company would not strip safeguards preventing Claude from being used for mass domestic surveillance or fully autonomous weapons.
In response, Trump labeled the company’s stance a “DISASTROUS MISTAKE” on Truth Social. Defense Secretary Pete Hegseth then designated Anthropic a “supply-chain risk to national security,” barring Pentagon contractors from commercial activity with the firm.
Anthropic called the designation “unprecedented” and vowed a legal challenge, noting such a designation had never before been applied to an American company. The company also stated that, to its knowledge, the two disputed restrictions had not affected any government mission.
Experts warn the phase-out timeline understates the true cost of replacing an AI model embedded in classified systems. “By the time a model is embedded across classified intelligence and simulation systems, you’re looking at sunk integration costs, retraining, security re-certifications, and parallel testing,” said Midhun Krishna M of TknOps.io.
OpenAI quickly moved to fill the gap, with CEO Sam Altman announcing a Pentagon deal covering classified military networks. Altman said the agreement included strong guardrails but, writing on X, expressed concern over the government’s handling of the Anthropic dispute.
“I think it is an extremely scary precedent, and I wish they handled it a different way,” Altman stated. Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning the Pentagon was attempting to pit AI companies against each other.