Agencies Prioritizing Tracking Use Over Enforcing Immediate Cutoffs

Federal staffers are still using Anthropic’s artificial intelligence models – despite President Donald Trump ordering agencies in late February to halt their use amid a feud between the Department of Defense and the company over its technology in military systems.
Trump issued the directive in a post on his social media platform, writing: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” He added that agencies using the firm’s products would have six months to phase them out.
Current and former federal employees tell ISMG the directive did not trigger any immediate or coordinated shutdown, and that internal communications in the weeks that followed focused more on gauging usage than on enforcing a cutoff. Their accounts suggest that Trump's push lags behind operational realities, particularly within civilian agencies where AI tools are already embedded in research, coding and analytical workflows.
Staffers from agencies including the departments of State and Treasury said teams were still using Anthropic’s popular Claude model, even as those agencies roll out official integrations with a version of OpenAI’s ChatGPT. Agencies are also aiming to test Anthropic’s Mythos system, an advanced model built to autonomously uncover and help fix software vulnerabilities, with Politico reporting the Department of Commerce’s Center for AI Standards and Innovation is already evaluating its capabilities.
The continued use of Claude inside civilian agencies contrasts with the administration's aggressive posture toward Anthropic, which has centered on concerns that the company retains too much control over how its models function once deployed in sensitive government environments. The Pentagon formally designated Anthropic a supply chain risk in early March, arguing that the company's ability to update or restrict its models post-deployment could undermine the reliability of systems used in national security operations (see: Pentagon Memo Blasted Anthropic for PR Campaign).
A federal appeals court in Washington has allowed the Pentagon to move forward with removing Anthropic’s technology from military systems, even as parts of the policy face challenges in separate litigation, leaving the company effectively cut off from Pentagon work for now.
That posture has not translated cleanly across the civilian government, where officials are still working to understand how widely Anthropic tools are used – and what products, if any, could be used to replace them.
Staffers who spoke with ISMG said internal communications following Trump's directive were aimed at establishing baseline visibility into usage – including which offices relied on Claude and for what types of work – rather than imposing immediate restrictions. Since then, the staffers said, no follow-up communication about a phase-out period has been formally relayed to teams still using Claude.
That approach reflects the practical challenge of unwinding AI adoption already underway across agencies, particularly as teams have already integrated specific AI tools into their drafting, coding, data analysis and other critical functions.
For now, the accounts from State and Treasury suggest that Anthropic's tools remain deeply embedded in day-to-day workflows. The departments of State and Treasury, as well as the White House, did not respond to requests for comment.
