Anthropic snubs Pentagon push to open Claude AI for unmanned weapons systems or mass surveillance use

The Textron Aerosonde Mk. 4.7 UAS drone. (Image: Textron Systems)
Anthropic is likely to remain the leading AI company when it comes to the safety and social responsibility of its models. Despite the potential repercussions, it has turned down the Pentagon's request to remove the guardrails against military use and surveillance built into its Claude AI agent.

AI juggernaut Anthropic will let the February 27 deadline set by the Pentagon to strip its Claude model of all safeguards against military use pass.

According to Anthropic's CEO Dario Amodei, the AI company cannot in "good conscience" open Claude up for running unmanned weapons systems or for mass surveillance of US citizens, because the technology is neither demonstrably safe nor reliable enough for those purposes.

Anthropic's Claude is at once one of the leading AI agents on the market and the most safety-oriented, with built-in guardrails against malicious use of its AI tools. The U.S. Department of War (DoW), however, demands that the AI models it purchases be free of any restrictions beyond its own murky "lawful use" standard:

Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts

The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days.

The Pentagon has now threatened Anthropic with repercussions that go beyond an impact on its balance sheet. Apart from jeopardizing its current $200 million ceiling contract to provide AI tools to the Pentagon, Anthropic risks being designated a supply chain risk, or being compelled to remove Claude's barriers to military use under the Defense Production Act, a 1950 law designed to force American companies into compliance on national security grounds during the Korean War.

The supply chain risk designation is typically reserved for companies with potential connections to malicious state actors like China's Huawei or Russia's Kaspersky, so placing Anthropic on that list could deal a significant blow to its earning potential. Despite the risk of becoming an AI pariah for the current White House administration, Amodei insists that current "frontier AI systems are simply not reliable enough to power fully autonomous weapons," while "using these systems for mass domestic surveillance is incompatible with democratic values."

Anthropic's Claude was the government's go-to model when it first wanted to use AI tools to sift through classified information, and it reportedly helped plan the raid that captured Venezuelan strongman Maduro. Dario Amodei said he hopes the Pentagon reconsiders its stance on the two red-line scenarios that Anthropic will continue to restrict in its AI models.

Daniel Zlatev, 2026-02-27 (Update: 2026-02-27)