Pentagon Threatens to Drop Anthropic Over AI Safety Limits

February 19, 2026

Published: February 19, 2026 at 1:03 AM

Updated: February 19, 2026 at 1:03 AM

100-word summary

The U.S. military reportedly used Anthropic's Claude AI during a Venezuela operation targeting Maduro, sparking a heated dispute over acceptable use cases. The Pentagon now wants all four major AI vendors (Anthropic, OpenAI, Google, xAI) to agree to "all lawful use cases" on classified networks. Anthropic is pushing back hard, demanding carve-outs that prohibit fully autonomous weapons and mass domestic surveillance. Despite launching Claude Gov in June 2025 with looser guardrails for intelligence work, the company refuses to sign the DoD's blanket authorization. By mid-February 2026, Pentagon officials began threatening to terminate or rework their $200 million contract. This standoff reveals a fundamental tension: can AI companies maintain safety principles while serving national security?

What happened

The U.S. military reportedly used Anthropic's Claude AI during a Venezuela operation targeting Maduro, sparking a heated dispute over acceptable use cases. The Pentagon now wants all four major AI vendors (Anthropic, OpenAI, Google, xAI) to agree to "all lawful use cases" on classified networks. Anthropic is pushing back hard, demanding carve-outs that prohibit fully autonomous weapons and mass domestic surveillance. Despite launching Claude Gov in June 2025 with looser guardrails for intelligence work, the company refuses to sign the DoD's blanket authorization. By mid-February 2026, Pentagon officials began threatening to terminate or rework their $200 million contract.

Why it matters

This standoff reveals a fundamental tension: can AI companies maintain safety principles while serving national security?
