AI Fumbles, Not Hackers, Pose Next Shutdown Threat by 2028: Gartner

A misconfigured artificial intelligence system could do what hackers have tried and failed to accomplish: shut down an advanced economy’s critical infrastructure.
Misconfigured AI embedded in a cyber-physical system will shut down national critical infrastructure in a G20 country by 2028, Gartner predicts.
Cyber-physical systems orchestrate sensing, computation, control, networking and analytics to interact with the physical world. The term encompasses operational technology, industrial control systems, industrial automation, the industrial Internet of Things, robots and drones. Unlike traditional software failures that might disrupt digital services, errors in AI-driven control systems can cascade into the physical world, potentially damaging equipment, forcing widespread shutdowns or destabilizing supply chains.
“The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” said Wam Voster, a Gartner vice president analyst.
The warning centers on scenarios where AI autonomously shuts down vital services, misinterprets sensor data or triggers unsafe actions. Modern power networks are an example: these systems increasingly rely on AI to balance electricity generation and consumption. A misconfigured predictive model could misinterpret demand fluctuations as instability, triggering unnecessary grid isolation or load shedding across entire regions or countries.
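To make the scenario concrete, here is a minimal sketch of that failure mode, assuming a hypothetical grid controller that sheds load when measured frequency deviation exceeds a configured threshold. The controller logic, names and values are illustrative, not drawn from any real grid-control product:

```python
# Hypothetical sketch: a load-shedding check whose threshold is set by
# configuration. A misplaced decimal in an update makes the controller
# ten times more sensitive than intended, so a routine fluctuation is
# treated as grid instability.

NOMINAL_HZ = 50.0  # nominal grid frequency

def should_shed_load(measured_hz: float, deviation_threshold_hz: float) -> bool:
    """Return True if the controller would isolate feeders or shed load."""
    return abs(measured_hz - NOMINAL_HZ) > deviation_threshold_hz

intended_threshold = 0.5        # intended: tolerate up to 0.5 Hz deviation
misconfigured_threshold = 0.05  # misplaced decimal: 10x too sensitive

routine_fluctuation = 49.9      # ordinary demand-driven dip, 0.1 Hz off nominal

print(should_shed_load(routine_fluctuation, intended_threshold))       # False
print(should_shed_load(routine_fluctuation, misconfigured_threshold))  # True: spurious shed
```

A single-character configuration error of this kind would pass a syntax check and only reveal itself in the system's physical behavior, which is the class of failure the prediction describes.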
Voster said AI systems often resemble black boxes. “Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model,” Voster said. “The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed.”
Darren Guccione, CEO and co-founder of Keeper Security, said the prediction is grounded in current reality. AI systems are being embedded into power grids, transportation networks, healthcare platforms and financial services faster than governance, identity controls and configuration management frameworks are maturing, he told Information Security Media Group.
The most probable failure mode is misconfiguration amplified by automation and scale, Guccione said. AI systems depend on networks of privileged accounts, API keys, service identities, automation scripts and third-party integrations. When these identities are poorly governed, granted excessive permissions or inadequately monitored, they create systemic vulnerabilities.
Non-human identities present a particular challenge. Service accounts, automation tokens and AI agents now outnumber human users in many infrastructure environments, and these identities typically operate with persistent privileges and limited oversight. A single flawed model deployment pipeline can trigger cascading failures across interconnected infrastructure systems.
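One common mitigation is a least-privilege audit over non-human identities. The sketch below is a minimal, hypothetical example of flagging permissions that a service identity holds but never exercises; the identity records, permission names and 90-day usage window are invented for illustration:

```python
# Hypothetical least-privilege audit for non-human identities
# (service accounts, automation tokens, AI agents).

from dataclasses import dataclass, field

@dataclass
class ServiceIdentity:
    name: str
    granted: set[str]                                   # permissions assigned
    used_last_90_days: set[str] = field(default_factory=set)  # permissions actually exercised

def excessive_grants(identity: ServiceIdentity) -> set[str]:
    """Permissions granted but never used: candidates for revocation."""
    return identity.granted - identity.used_last_90_days

identities = [
    ServiceIdentity("model-deploy-pipeline",
                    granted={"deploy:staging", "deploy:prod", "secrets:read", "iam:admin"},
                    used_last_90_days={"deploy:staging", "deploy:prod"}),
    ServiceIdentity("telemetry-agent",
                    granted={"metrics:write"},
                    used_last_90_days={"metrics:write"}),
]

for ident in identities:
    unused = excessive_grants(ident)
    if unused:
        print(f"{ident.name}: revoke {sorted(unused)}")
```

Run against the sample data, the audit flags the deployment pipeline's unused "iam:admin" and "secrets:read" grants, the kind of standing privilege that widens the blast radius Guccione warns about.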
“As automation expands, so does the blast radius of failure,” Guccione said. “AI does not eliminate risk – it accelerates it when guardrails are weak.”
