The US military reportedly used Anthropic’s Claude AI to support a major air strike on Iran, mere hours after the Trump administration ordered federal agencies to stop using the company’s systems. Military commands employed the model for intelligence analysis and targeting, despite a new Pentagon directive labeling Anthropic a security risk after it refused to allow unrestricted military use of its AI.
The US military reportedly used Anthropic’s Claude AI model during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company’s systems. Military commands, including US Central Command, used the model for operational support such as intelligence analysis and battlefield simulations, according to people familiar with the matter.
The incident demonstrates how deeply advanced AI systems are now embedded in defense operations. The administration had just instructed agencies to stop working with Anthropic and directed the Defense Department to treat the company as a potential security risk. The order followed the collapse of contract talks after Anthropic refused to grant unrestricted military use of its AI in any lawful scenario.
Anthropic had previously secured a multiyear Pentagon contract worth up to $200 million. Through partnerships involving Palantir and Amazon Web Services, Claude was approved for classified intelligence and operational workflows. The system was also reportedly involved in an earlier mission in Venezuela that resulted in the capture of President Nicolás Maduro.
Tensions intensified after Defense Secretary Pete Hegseth demanded the company permit unrestricted military use of its models. Anthropic CEO Dario Amodei rejected the request, describing certain applications as ethical boundaries the company would not cross. In response, the Pentagon began lining up replacement providers and reached an agreement with OpenAI to deploy its models on classified military networks.
In an interview, Amodei said the company opposes using its AI for mass domestic surveillance and for fully autonomous weapons, arguing that military decisions should remain under human control rather than be delegated entirely to machines.

