Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development, Standards, Regulations & Compliance
DOD Official: AI Firm Wanted ‘Approval Role in the Operational Decision Chain’

Internal memos used by the Department of Defense to justify its decision to blacklist artificial intelligence firm Anthropic said the firm's models could not be reliably controlled for military use.
The documents, filed Friday in San Francisco federal district court, provide the most detailed explanation to date as to why the Pentagon designated Anthropic a “supply-chain risk.” The memos focus on the company’s refusal to support certain government uses of its technology – and its public fight with the department over those uses (see: Anthropic Fight Lays Bare How Fundamental AI Is to the DOD).
Defense officials argued that Anthropic – the only developer currently allowed in some of the military’s sensitive networks – retains the exclusive ability to modify, restrict or override how its models function once deployed in Pentagon environments.
“Anthropic’s ability to unilaterally alter system guardrails and model weights without [Department of War] consent could fundamentally change the system’s function and creates a significant operational risk,” wrote Emil Michael, a former Uber executive who is now undersecretary of defense for research and engineering.
In a March memo, Michael also accused the AI mainstay of using “ongoing good faith negotiations for Anthropic’s own public relations.”
“A vendor that raises the prospect of disallowing its software to function in critical military operations, and treats its negotiations [with] the DoW primarily as tools for brand-building cannot be trusted, particularly when that marketing campaign is openly hostile to the DoW and duplicitous,” he wrote.
Michael acknowledged that the military accepted a baseline level of risk by ushering an AI system into its network and accepting Anthropic’s role as the software’s maintainer. That risk became intolerable when “Anthropic asserted in the negotiations that it [should] have an approval role in the operational decision chain,” he said.
That, combined with what Michael termed a hostile public posture, “represents a fully mature supply-chain risk – including increased potential for model poisoning, insider threat risk, data exfiltration and denial of service – posing a direct, intolerable and material risk to our warfighting capability which warrants the designation of Anthropic as a supply-chain risk.”
A third-party assessment conducted by Exiger Diligence and commissioned by the Pentagon rated Anthropic’s overall risk as “medium” across cyber, operational and compliance categories.
Anthropic’s dispute with the government is unfolding across multiple courts. A federal appeals court in Washington recently declined to block the supply-chain risk designation, allowing Defense to continue enforcing the blacklist (see: Court Backs Pentagon Anthropic Ban, but Fight Continues).
A separate federal judge in California has taken a more limited approach, granting Anthropic partial relief that constrains how broadly the government can apply the designation.
Anthropic has argued in court filings that the designation is factually and procedurally flawed. The designation threatens hundreds of millions of dollars in near-term federal revenue and potentially far more if the policy expands across government and commercial partners.
