Lawmakers, Industry Warn Supply-Chain Risk Label Sets Dangerous Precedent for Tech

Technology groups representing major firms like Google, Apple and Microsoft – as well as a growing chorus of lawmakers, defense and artificial intelligence leaders – are beginning to line up behind AI developer Anthropic amid its escalating dispute with the Pentagon.
The Information Technology Industry Council, a lobbying group representing some of the largest tech firms in the world, sent a letter Wednesday to the Pentagon expressing opposition to Defense Secretary Pete Hegseth's threats to label the U.S. firm a supply-chain risk, Reuters reported. The letter warns that the designation could have cascading effects across the defense industrial base and set a far-reaching precedent for the entire technology sector.
“We are concerned by recent reports regarding the Department of War’s consideration of imposing a supply-chain risk designation in response to a procurement dispute,” the letter reads.
A group of 30 bipartisan defense and intelligence officials and tech policy leaders sent a letter Thursday to the Senate and House Armed Services committees urging Congress to investigate the Pentagon’s “attack” on Anthropic, saying “AI should not be used for mass domestic surveillance of American civilians” and adding that Anthropic’s reported red lines “are not fringe positions.”
“This action signals to every technology company – large and small – that government contracts come with the risk of existential retaliation,” the letter reads. “That is not a marketplace any serious entrepreneur or investor can build around.”
Critics of the Pentagon's threats against Anthropic have argued in recent days that invoking supply-chain risk authorities against a domestic AI company would represent a sharp departure from how the designation has historically been used, primarily to address national security risks tied to foreign adversaries. Analysts have also warned that using the label as punishment in a contract dispute over AI safeguards could send a chilling signal across the tech sector at a moment when the U.S. is pushing to maintain an edge in the global AI development race (see: Hegseth's Anthropic Deadline Risks Severe Defense AI Gaps).
Concerns over the administration's growing dispute with Anthropic have spilled into Washington as lawmakers begin pressing leading AI firms on whether their technology can be used to enable domestic surveillance.
Sen. Ron Wyden, D-Ore., sent a letter Wednesday to the chief executives of Anthropic, Google, OpenAI and xAI requesting information on their policies surrounding government use of AI systems to analyze vast troves of Americans’ personal data without court approval.
Wyden said the Pentagon’s dispute with Anthropic “seems to be about whether or not the most advanced AI companies in the world will allow government customers to use their products to engage in practices that may be technically legal, but that violate privacy, undermine democracy or threaten human rights.”
It is unclear what comes next in the standoff between Anthropic and the Pentagon or whether the administration intends to follow through on threats to designate the company a supply-chain risk. The White House has ordered federal agencies to begin phasing out Anthropic’s technology over the next six months while shifting to alternative AI providers willing to operate under broader “all lawful use” language.
Analysts say unwinding the company’s technology from government systems may prove complicated in practice, particularly as defense contractors and data platforms integrate large language models and other AI tools into their workflows. Some have also raised questions about how the shift may affect companies like Palantir and other defense technology providers with platforms that have integrated multiple AI models across government and intelligence environments (see: Anthropic Fight Lays Bare How Fundamental AI Is to the DOD).
Officials have not publicly detailed how contractors would be expected to disentangle Anthropic systems from their broader software ecosystems if the administration ultimately proceeds with the designation. The White House did not respond to a request for comment on how the phase-out would be implemented or whether negotiations with Anthropic are ongoing.
The uncertainty comes as some of Anthropic's competitors have moved to secure their own government contracts, including OpenAI, which recently reached an agreement with the Pentagon allowing its models to be used for "all lawful purposes" - a deal that CEO Sam Altman acknowledged in a public post appeared to have been rushed.
