WORLD NEWS

US Government Labels Anthropic AI a “Supply Chain Risk” Amid Pentagon Concerns

The Pentagon warns that Anthropic's AI poses an “unacceptable risk” to military supply chains over the company's control of its Claude model. Microsoft backs Anthropic in court.
2026-03-18

The U.S. government has classified artificial intelligence company Anthropic as a potential “supply chain risk,” citing concerns that its Claude AI model could pose an “unacceptable risk” to military operations.

The designation comes amid growing scrutiny of AI in defense applications, particularly the use of Claude for military targeting in Iran and Anthropic's refusal to let its technology power mass surveillance or fully autonomous lethal weapons systems.

In a legal filing in California federal court, the Pentagon argued that Anthropic's control over its AI model could allow the company to “disable its technology or preemptively alter the behavior of its model” during operations if its corporate “red lines” were crossed. The government emphasized that AI systems are “acutely vulnerable to manipulation” and that this posed a direct threat to Department of Defense supply chains.

Anthropic has challenged the designation, which, if upheld, would bar all U.S. government suppliers from doing business with the company. The “supply chain risk” classification is usually reserved for foreign adversary firms such as Huawei.

Microsoft, which uses Anthropic’s Claude model and supplies AI tech to the U.S. military, filed an amicus brief supporting Anthropic. “This is not the time to put at risk the very AI ecosystem that the administration has helped to champion,” the company said.

The dispute highlights the growing tension between innovation, corporate control, and national security in the AI sector, as the U.S. military increasingly considers AI for intelligence and operational purposes.