Catenaa, February 15, 2026 – The Pentagon reportedly used Anthropic’s AI model Claude in the raid that captured Venezuela’s former president Nicolás Maduro. The deployment underscores tensions over AI in national security.
According to The Wall Street Journal, Claude was accessed through a Pentagon contract and a partnership with Palantir Technologies, whose platforms provide defence agencies with access to AI tools in classified environments.
Anthropic’s usage guidelines explicitly bar its models from being used to support violence, weapons development, or surveillance. The company has said it cannot confirm specific operational uses of its technology.
The mission to capture Maduro took place in Caracas earlier this year. Special forces struck key targets and brought Maduro and his wife to the United States.
The reported use of Claude sparked controversy inside the Pentagon. Defence officials are reportedly debating whether to continue a roughly $200 million contract with Anthropic.
The dispute reflects a broader clash over AI safety and military necessity. The Pentagon wants unrestricted access to AI models for weapons, battlefield operations, and intelligence work. Anthropic has resisted such broad usage.
Critics say using commercial AI tools in combat scenarios raises ethical and legal questions. They argue that guidelines for autonomous systems must be stricter to prevent misuse.
Supporters counter that advanced AI can improve planning, intelligence analysis, and operational efficiency. They see AI as a tool for maintaining a technological edge in warfare.
The episode highlights growing pressure on AI developers, who must balance national security partnerships against public expectations of ethical use. How that balance is struck will shape future AI contracts and regulation.
As military reliance on AI grows, governments and tech companies must clarify ethical and legal boundaries. The Maduro raid serves as a high-profile flashpoint in that debate.
