AI in OT May Trigger Cascading Infrastructure Failures

In a document setting out principles for safely integrating artificial intelligence into operational technology, the U.S. cyber defense agency warned that machine learning and large language model deployments can introduce new attack surfaces across critical infrastructure sectors.
The Cybersecurity and Infrastructure Security Agency and international partners advise critical infrastructure operators to develop a deep understanding of how AI models behave, how they fail and how those failures can cascade before implementing them in technology that manages energy, manufacturing, water, transportation and other services.
The report says operators should first assess whether AI is appropriate for the proposed use case at all: the technology's complexity, cost and opacity can outweigh its benefits in some industrial environments. AI deployments also often expand the attack surface through increased connectivity, cloud dependencies and third-party, vendor-managed components that introduce visibility gaps for critical infrastructure owners and operators.
The guidance details risks unique to integrating machine learning or large language models into industrial control systems, including model drift, poor training data quality, unexplained decision-making and operator overload when AI produces noisy or incorrect alerts. AI-driven flaws can also reduce system availability and functional security, and create conditions for adversaries to manipulate outputs, according to the report.
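Model drift of the kind the guidance warns about can be caught with basic statistical monitoring. The sketch below is a minimal, hypothetical illustration, not part of the CISA guidance: it compares a live window of sensor readings against a baseline captured at commissioning and flags when the mean has shifted by more than a chosen number of standard deviations.

```python
# Minimal drift-detection sketch for an OT sensor feed.
# All names, values and thresholds are illustrative assumptions.
import statistics

def drift_score(baseline, live):
    """Shift of the live mean from the baseline mean, in baseline sigmas."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    if sigma == 0:
        return float("inf")
    return abs(statistics.mean(live) - mu) / sigma

def check_drift(baseline, live, threshold=2.0):
    """Flag when the live window has drifted beyond `threshold` sigmas."""
    return drift_score(baseline, live) > threshold

baseline = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2]  # readings at commissioning
stable   = [50.0, 50.1, 49.9, 50.2, 50.0, 49.8]  # live window, no drift
drifted  = [53.2, 53.5, 53.1, 53.4, 53.0, 53.6]  # live window after drift

print(check_drift(baseline, stable))   # → False
print(check_drift(baseline, drifted))  # → True
```

Real deployments would use distribution-level tests and alert-rate limiting to avoid the operator overload the report describes, but the principle is the same: validate live behavior against a known-good baseline.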
The guidance comes as recent high-profile attacks on OT suggest Chinese hackers and other advanced threat actors are positioning themselves for disruptive or destructive attacks on critical infrastructure by exploiting weak remote access pathways and living off the land inside industrial networks (see: Chinese Hackers Exploit Unpatched Servers in Taiwan).
The agencies say critical infrastructure operators must strengthen data governance frameworks before pursuing any AI initiatives, given the sensitivity of engineering diagrams, process measurements and other OT data used to train and refine models. The report advises operators to enforce strict access controls, prevent vendors from repurposing operational data for model training, and confirm that data stored off premises remains secure.
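The access-control advice can be made concrete with a simple permission check. The roles and actions below are purely illustrative assumptions, not drawn from the report; the point is that vendor accounts are granted read access for support purposes but never the ability to export operational data for model training.

```python
# Minimal role-based access sketch for OT data governance.
# Role names and permission strings are illustrative assumptions.
PERMISSIONS = {
    "ot_engineer": {"read_process_data", "export_training_set"},
    "vendor":      {"read_process_data"},  # read for support; no export for training
}

def can(role: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action."""
    return action in PERMISSIONS.get(role, set())

print(can("ot_engineer", "export_training_set"))  # → True
print(can("vendor", "export_training_set"))       # → False
```

Denying by default (an unknown role gets an empty permission set) mirrors the report's emphasis on strict, explicit access controls over OT data.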
The guidance also addresses the growing trend of AI-enabled industrial devices, with vendors increasingly embedding predictive and decision-support capabilities directly into controllers and supervisory systems. Operators should contractually require vendors to disclose any embedded AI features and to allow operators to disable or limit AI functions, the agencies say.
The guidance calls for formal governance frameworks that define clear roles and responsibilities across leadership, cybersecurity teams, OT engineers and AI specialists. Operators should embed AI oversight into existing risk programs, conduct continuous audits and validate that systems comply with sector-specific safety and regulatory requirements, the report says.
