AI Shutdown Risk Exposes Governance Gaps and Vendor Dependency Concerns

The federal government’s recent decision to designate Anthropic, maker of the Claude artificial intelligence platform, as a “supply-chain risk” should raise alarm bells for technology leaders who are tasked with embedding AI systems across the enterprise. Going all-in with a single AI vendor can be risky.
“This standoff between the federal government and Anthropic is really an interesting look at risk management unfolding in real time,” said Alla Valente, principal analyst at Forrester. “Usually risk management doesn’t play out this quickly and this publicly, so this gives us a very interesting and rare glimpse into those insights.”
Whether it’s due to regulatory action, litigation, outage, vendor collapse, security concerns or sudden policy change, an AI model can suddenly become unavailable, leaving dependent enterprises in the lurch. And many aren’t prepared.
CIOs need not only to evaluate vendor risk but also to start thinking about how they will govern the new authority given to AI systems as those systems expand throughout the enterprise. “Every CIO should be able to answer a simple question: ‘If we turned this AI system off tomorrow, what would break?’” said Puneet Bhatnagar, an independent AI and identity security expert who most recently led identity and access management at Blackstone.
Answering that question can be complicated. The problem is exacerbated by the ways in which AI systems are more than just products. They’re often distributed throughout an organization and interact with and on behalf of employees.
“It’s not just a vendor that is cut off overnight. It’s a loss of delegated authority,” Bhatnagar said. “And AI-based infrastructure is acting as the authority, often on behalf of humans.”
The goal of risk management, Valente said, isn’t to eliminate risk, but to manage it in a way that allows a business to thrive. “It’s a misconception that we manage risk to eliminate all risk. If companies weren’t taking risks they wouldn’t be investing in AI,” Valente said. “We can’t grow, we can’t innovate, we can’t do anything new if we’re not taking risks.”
She said that organizations need to evaluate multiple dimensions of risk when it comes to AI models, especially when preparing to replace or remove one: legal or regulatory risk, technical risk and operational risk.
Companies should evaluate legal and regulatory risks to determine how restrictions apply across the enterprise, and which contracts, agencies and systems are affected.
From a technical perspective, organizations need to figure out how they’ll respond if an AI model suddenly becomes unavailable or unsupportable, and determine if they can rip it out wholesale or if it could be removed in parts from systems and infrastructure.
Tech leaders also need to map out which processes and workflows will fail if an AI model disappears. “If we revoke this AI’s access today, what business processes stop immediately?” Bhatnagar said. Find out what will break, in what order things will fail and what the cost could be.
“You need to map out all of the use cases, all of the systems, all of the workflows and all of the decision-making,” Valente said. “You don’t just rip and replace. There is no big red easy button.”
The deeper issue exposed by the Pentagon’s dispute with Anthropic is that AI systems are no longer just tools that assist workers. They access data, trigger actions and influence decisions, but governance models are still built for traditional software and human users. And technology teams may not have a full accounting of what an AI system is connected to, what it is allowed to do, why it acts the way it does and how they should monitor it for abnormal behavior.
“We spent years putting strong controls around human access, but we don’t yet have those controls for AI,” Bhatnagar said. “AI agents are almost like a cross between humans and machines – they have human-like intelligence but machine-like speed and impact.”
How these new agents are governed is a problem CIOs and CISOs must solve together, he said. Identity and access management controls are a good place to start, as well as introducing governance around intent and behavioral context.
“You need to start establishing who has access to what through those AI systems, and you need to know what access those AI systems have,” Bhatnagar said. “Identity and access management has the potential to become the AI kill switch.”
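Bhatnagar’s “IAM as the AI kill switch” idea amounts to routing every AI agent’s delegated access through an identity that can be revoked in one place. A minimal sketch, assuming a simple in-memory registry; the agent name and scopes are hypothetical:

```python
from datetime import datetime, timezone

class AgentIdentityRegistry:
    """Tracks which AI agents hold which delegated scopes (illustrative only)."""

    def __init__(self) -> None:
        self._grants: dict[str, set[str]] = {}          # agent -> granted scopes
        self._audit: list[tuple[str, str, str]] = []    # (timestamp, agent, event)

    def grant(self, agent: str, scope: str) -> None:
        self._grants.setdefault(agent, set()).add(scope)
        self._audit.append(
            (datetime.now(timezone.utc).isoformat(), agent, f"grant:{scope}"))

    def is_allowed(self, agent: str, scope: str) -> bool:
        return scope in self._grants.get(agent, set())

    def kill_switch(self, agent: str) -> set[str]:
        """Revoke every scope the agent holds; returns what was cut off."""
        revoked = self._grants.pop(agent, set())
        self._audit.append(
            (datetime.now(timezone.utc).isoformat(), agent, "revoke:all"))
        return revoked

registry = AgentIdentityRegistry()
registry.grant("claims-triage-agent", "read:claims")
registry.grant("claims-triage-agent", "write:case-notes")
print(registry.is_allowed("claims-triage-agent", "read:claims"))  # allowed before revocation
registry.kill_switch("claims-triage-agent")
print(registry.is_allowed("claims-triage-agent", "read:claims"))  # denied afterward
```

The point is the single choke point: because every action an agent takes is checked against one revocable identity, pulling that identity severs the delegated authority immediately and leaves an audit trail of what was cut off.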
Valente draws a line between governance and risk management. “For the last year and a half, everyone’s been talking about AI governance. But they haven’t yet started talking about AI risk. And governance and risk are not the same thing. Not in the slightest,” she said.
Risk, meanwhile, is exacerbated by the small number of dominant AI vendors, and enterprises are replaying an old lesson from cloud computing and supply chains: efficiency drives concentration, and concentration increases systemic risk. A single-model strategy may be efficient in the short term, but CIOs need redundancy, alternatives and tested off-ramps.
“There aren’t a whole lot of options, and that exposes companies to much higher concentration risk,” she said. “Hyper-efficiency is the enemy of resilience.”
Technology leaders should be building visibility and contingency plans now, before they are thrown into crisis if an AI model goes down. Teams should map AI dependencies, use cases, systems, workflows and decision-making, and should diversify AI vendors and treat AI systems like critical infrastructure.
“This type of risk analysis and scenario planning needs to start immediately,” Valente said. “There is no one easy answer. The right path will be different for every company.”
Strengthening vendor due diligence is also a must. “The contract is one of your greatest risk management tools, but very few organizations are using it that way,” Valente said. “Any control you have is before the contract is signed.”
