Artificial Intelligence & Machine Learning, Litigation, Next-Generation Technologies & Secure Development
Temporary Ruling Preserves Pentagon’s AI Access as Courts Weigh AI Guardrail Limits

A federal judge’s decision to temporarily block the Trump administration from blacklisting the artificial intelligence firm Anthropic allows federal agencies and the Pentagon to continue using and evaluating its technology in the near term – but leaves unresolved whether the White House can force AI providers to loosen safety guardrails as a condition of doing business with the federal government.
A Thursday ruling temporarily restricts the government from labeling Anthropic a “supply-chain risk,” a designation that would have effectively cut the company out of federal procurement pipelines after it refused to expand how its models could be used by military and intelligence agencies (see: Pentagon Warns Anthropic Could ‘Subvert’ Defense AI Systems).
U.S. District Judge Rita Lin of the Northern District of California issued a preliminary injunction that preserves the status quo for now, giving agencies continued access to Anthropic’s systems while the courts weigh whether the administration’s actions were lawful.
“These broad measures do not appear to be directed at the government’s stated national security interests,” Lin wrote of administration efforts to have federal agencies eschew Anthropic. “If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic.”
A designation of Anthropic as a supply-chain risk “is likely both contrary to law and arbitrary and capricious,” she also wrote. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The blacklisting came after Anthropic said it would not remove restrictions prohibiting the U.S. military from using its AI products in autonomous weapons or mass domestic surveillance, arguing those limits are central to its safety frameworks and corporate governance. The administration took the opposite view, arguing that defense industrial base vendors should not retain unilateral control over system behavior in mission-critical contexts where operational flexibility may be required.
Reported disagreements between Pentagon officials and Anthropic came to a head earlier this year when Secretary of Defense Pete Hegseth moved to designate Anthropic as a national security risk, a label typically associated with foreign adversaries and one that would have effectively barred agencies and contractors from using its technology. Anthropic argued in court filings that the designation was not grounded in genuine supply-chain concerns but was actually retaliation for the company’s public stance on AI safety and refusal to comply with the government’s demands.
The lawsuit alleges violations of the Administrative Procedure Act as well as the First and Fifth Amendments, claiming the government exceeded its authority and denied the company due process. Judge Lin’s Thursday decision suggests those arguments carry weight.
The injunction temporarily shields Anthropic from the immediate fallout of being labeled a security risk, including the potential loss of new customers and federal partnerships. But the decision doesn’t resolve the underlying policy dispute, as federal agencies are increasingly exploring how to integrate AI models into operational workflows (see: Anthropic Fight Lays Bare How Fundamental AI Is to the DOD).
Anthropic filed the complaint earlier this month, seeking both injunctive relief and a permanent reversal of the designation. The company argued the government failed to follow required rulemaking procedures and did not provide sufficient evidence to justify placing it on a restricted list typically reserved for entities linked to foreign ownership.
Anthropic said the designation would have triggered cascading consequences, from the termination of existing agreements and exclusion from future defense procurements to potential spillover into private-sector purchasing decisions. Government attorneys argued in response that the designation falls within the executive branch’s national security authorities and procurement discretion, warning that allowing vendors to impose their own binding restrictions on AI systems could create operational vulnerabilities in defense environments.
