Analysts Warn Pentagon Feud With Anthropic Could Trigger Cascading Defense Impacts

Defense Secretary Pete Hegseth’s reported Friday deadline for Anthropic to provide expanded access to its Claude artificial intelligence model is unrealistic and risks triggering unintended cybersecurity and supply chain consequences across the defense industrial base, analysts told Information Security Media Group.
The Pentagon is reviewing its business ties with Anthropic following weeks of negotiations over how Claude may be used in classified and operational environments (see: Pentagon Claude Dispute Fuels Mismatch Over AI). Hegseth has reportedly warned the company it could face designation as a “supply chain risk” or other punitive measures if it refuses to tailor model access and safeguards to military demands.
But compressing such a technically, legally and morally complex dispute into a matter of days overlooks the operational realities on both sides, experts told ISMG, and could cascade into significant downstream consequences across the defense industrial base at a moment when AI experimentation is accelerating to keep pace with foreign adversaries. The Friday ultimatum is “unrealistic” and a potential supply chain risk designation would push the department into “uncharted territory,” said Kevin Greene, a former program manager in the Department of Homeland Security’s Science and Technology Directorate.
“If the Pentagon walks away from Anthropic, it will cause at least six months to a year capability gap while waiting for competitors to achieve the same level of integration and mission capabilities,” said Greene, who now serves as the public sector chief technologist for BeyondTrust. He added that labeling the firm a supply chain risk “may also limit the choices the department may have as a viable replacement, not just for use at the Pentagon, but in other areas where the models were used to support mission capabilities.”
The center of the conflict appears to be concern over potential “mass surveillance” mission use – Anthropic has reportedly sought to maintain restrictions on certain surveillance and weapons-related applications, while Defense Department officials are pushing for broader flexibility across intelligence and cyber operations. Analysts said one possible path forward could involve codified legal agreements in which the Pentagon formally commits that Anthropic’s models will not be used for facial recognition or tracking of U.S. citizens domestically.
Those potential provisions could create carve-outs for foreign intelligence surveillance authorities targeting known threat actors outside the United States, providing clearer operational boundaries without dismantling core safeguards. But the reported dispute also involves defining and enforcing guardrails for autonomous or agentic AI systems operating inside sensitive networks, with Anthropic arguing that human oversight requirements are essential security controls to ensure AI-driven actions are validated before execution.
Christopher Caen, CEO of the AI infrastructure firm Mill Pond Research, said the confrontation exposes deeper structural tensions between commercial AI development and defense mission requirements.
“It highlights the urgent necessity for defense agencies to control the orchestration architecture, allowing them to define their own security and operational parameters independent of a commercial provider’s shifting policies,” Caen told ISMG. Rather than pressuring vendors to alter core safeguards, he said, agencies could wrap commercial models in their own orchestration layer, insulating missions from a provider’s “evolving terms of service or safety posture.”
Hegseth’s reported threat to invoke authorities such as the Defense Production Act adds another layer to the evolving dispute. While the statute grants the executive branch broad authorities to prioritize contracts deemed necessary for national defense, analysts warned that its use in the AI domain could blur lines between responsible governance and compelled capability expansion.
“A more sustainable path involves adopting model-agnostic infrastructure that allows agencies to deploy open-source or proprietary models within their own secure perimeters,” Caen told ISMG, adding that the future of AI in defense “lies in decoupled architectures where the government controls the security policy and orchestration layer.”
The Pentagon and White House did not respond to multiple requests for comment.
