Krux

March 9, 2026
Pentagon Labels Anthropic a Supply Chain Risk Over AI Guardrails
Published: March 9, 2026 at 12:31 AM
Updated: March 9, 2026 at 12:31 AM
What happened
The Department of Defense has designated Anthropic and its Claude AI a "supply chain risk" after the company refused to permit its use for mass domestic surveillance and fully autonomous weapons. Anthropic wanted guardrails; the DoD demanded broader "all lawful purposes" access. The March 4 designation bars defense contractors from using Claude on Pentagon work. OpenAI reportedly struck its own deal with looser terms, positioning itself to fill the gap. Anthropic plans to challenge the designation in court, calling it legally unsound.
Why it matters
This is the first time the Pentagon has tagged a domestic AI company this way, revealing a sharp divide over who controls the boundaries of military AI.