Catenaa, Monday, February 16, 2026 – The Pentagon is close to cutting ties with Anthropic and may label the AI company a supply chain risk due to its tech restrictions.
Axios reported that the breakdown follows months of contentious negotiations about how the military can use the Claude tool.
In particular, Anthropic wants to ensure its AI isn’t used to spy on citizens on a large scale or to develop weapons that can be deployed without human involvement, the article said.
The government wants to be allowed to use Claude for “all lawful purposes,” the report said.
If the AI company is deemed a supply chain risk, any company that wants to do business with the military will have to cut ties with Anthropic, Axios said.
“The Department of War’s relationship with Anthropic is being reviewed,” Pentagon spokesman Sean Parnell said in an emailed statement. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
An Anthropic spokesperson told Axios it was having “productive conversations, in good faith,” with the Pentagon and said the company is committed to using AI for national security.
Anthropic won a two-year agreement with the Pentagon last year that involved a prototype deployment of its Claude Gov models and Claude for Enterprise. The Anthropic negotiations may set the tone for talks with OpenAI, Google, and xAI, whose models aren’t yet used for classified work, Axios said.
Anthropic, founded by former OpenAI researchers, positions itself as a more responsible AI company that aims to avoid catastrophic harms from advanced AI.
