New Filing Frames Anthropic Dispute as Operational Control Issue – Not Free Speech

The Trump administration is defending its decision to cut off Anthropic from federal systems by arguing the company’s control over its artificial intelligence models created a risk that AI tools could be disabled, degraded or even manipulated within sensitive military environments.
Department of Justice attorneys said in a rejoinder filed in San Francisco federal court that the dispute is not about punishing Anthropic for its views on AI safety and ethics, but about preventing a vendor from retaining the ability to alter or disrupt mission-critical systems after deployment.
The filing, submitted in response to Anthropic’s request for a preliminary injunction blocking its designation as a supply-chain risk, reframes a public debate over free speech and AI ethics as a more technical dispute over procurement security and operational control (see: Anthropic Seeks Court Stay of Pentagon Risk Designation).
The filing says there was a “significant risk” that Anthropic could “subvert the design and/or functionality” of its products through ongoing updates, adding that this capability “introduces national security risks to the DoW’s supply chain” – using the acronym for the Department of War, the Trump administration’s preferred name for the Department of Defense.
The government also argues that – unlike traditional software – large language models require continuous tuning and are inherently dependent on the vendor’s integrity, which can create a dynamic where the developer retains influence over how systems behave long after their deployment.
“AI systems are acutely vulnerable to manipulation” by those with privileged access, the filing states, warning that a vendor could “introduce unwanted function, or otherwise subvert the design, integrity and operation of the model.” The filing also said Anthropic’s ability to change “system guardrails and model weights without DoW consent could fundamentally change the system’s function,” citing scenarios such as “a critical defense system failing to engage due to an unapproved, vendor-side modification.”
The filing raises the possibility of more direct disruption, arguing there is a “substantial risk” the company could “disable its technology or preemptively and surreptitiously alter the behavior of the model” during active operations, with potentially severe national security consequences. Those concerns form the backbone of the administration’s justifications for both the president’s directive ordering agencies to phase out Anthropic’s technology, and the Department of Defense’s formal designation of the company as a supply-chain risk (see: Trump Escalates AI Clash With Anthropic).
Under the federal government’s “supply-chain risk” designation, Anthropic is effectively barred from supplying AI systems used in national security environments, forcing agencies to transition off its models over a 180-day period. The government tied that ultimatum directly to the firm’s refusal to accept a contractual provision allowing the Pentagon to use its AI systems for “any lawful purpose,” a requirement Defense officials now say is essential to maintaining operational authority.
Anthropic declined to accept the provision, citing internal policies that restrict certain uses of its technology, including applications tied to surveillance and weapons development (see: Anthropic Fight Lays Bare How Fundamental AI Is to the DOD).
In opposing Anthropic’s request for a preliminary injunction, the government argued that the firm’s refusal is not protected speech but commercial conduct, and that the resulting fallout – much of which played out in the court of public opinion – reflected a breakdown in vendor negotiations rather than retaliation for the company’s views.
“It was only when Anthropic refused to release the restrictions on the use of its products … that the President directed all federal agencies to terminate their business relationships,” the filing reads, arguing that “no one has purported to restrict Anthropic’s expressive activity.” The court filing also points to legal precedent that gives the federal government broad discretion to decide which companies it contracts with – particularly in areas tied to national security and defense procurement.
