International Coalition Highlights Security Risks in OT’s Rush to AI

Hurriedly integrating artificial intelligence into industrial systems isn’t the wisest idea, the U.S. Cybersecurity and Infrastructure Security Agency and its domestic and international partners warned earlier this month.
The agency published a set of high-level principles and a non-exhaustive list of risks that critical infrastructure owners should consider before deploying AI into operational technology or industrial control systems.
“We don’t want [operators] treating AI like a magical black box,” explained CISA’s top ICS Cybersecurity Advisor Matt Rogers. “It’s software, and you have to add it to your system with proper risk tolerances.”
CISA and its partners drew up the guidance amid a rapid scramble over the past 18 to 24 months by businesses, particularly OT vendors, to integrate AI into seemingly every product.
Initially, Rogers said, the principles were developed as a “sort of checklist of questions that you can ask your vendor” about their use of AI. It is the same kind of checklist operators should already have for asking vendors about secure-by-design coding and development practices, and for the same reasons: Just like any other new software, AI represents an expanded attack surface and new risks that need to be minimized and managed.
Operators need to be aware of how their vendors are employing AI, for example in the software development process, and of the associated supply chain risks, he explained. “We didn’t want [operators] accidentally installing AI and not realizing it because the vendor isn’t disclosing it,” or is doing so only in the small print, Rogers said.
Some commentators were underwhelmed by the lack of specificity and detailed practical advice. “The new principles offer a useful high-level roadmap,” wrote Brian Finch, a partner at the law firm Pillsbury Winthrop Shaw Pittman. “But they stop short of answering the practical questions organizations must navigate: How to operationalize governance, evaluate vendors, allocate liability and incorporate AI considerations into long-standing safety and compliance frameworks.”
But the principles do foreshadow possible standards-based procurement or regulatory requirements, added Finch and his law firm colleague Austin Chegini. “AI-enabled OT products may face heightened vendor transparency, including model disclosures and safety reporting.”
In at least one corner of the OT marketplace, the AI cat is already out of the bag, certainly when it comes to software development, said Joseph Saunders, founder and CEO of RunSafe Security. His company, which provides security tools for embedded software, surveyed more than 200 professionals who work on embedded systems in critical infrastructure OT in the U.S., U.K. and Germany.
More than 83% had deployed AI-generated code to production systems. Almost as many – 80% – were currently using AI tools in software development.
The survey covered both vendors and operators, Saunders said, and both were using generative AI programs like Anthropic’s Claude.
Operators like Shell or Duke buy software from their vendors but also write their own, he pointed out. “The complexity of the supply chain in that software ecosystem only increases when you consider that any of those players along the process might be using AI in their software development,” Saunders said.
Not all vendors are equally forthcoming about embedded AI, especially when they rush new functions into production for fear of missing out on the AI gold rush.
“Whenever you get this many new players in the market, not everybody is going to be focusing on cybersecurity,” Rogers said.
As CISA worked on the guidance, Rogers said, the agency began hearing from operators interested in expanding their use of AI. Some machine learning algorithms already support OT systems through predictive maintenance, grid load balancing or safety systems. “Operators are very comfortable with these” use cases, Rogers said.
But “the buzz of the AI industry led to more [operators] adopting these machine learning type solutions,” he said. They weren’t new solutions, although they are evolving. “As people are now more aware of it, and it’s marketed differently to them, we wanted to make sure that they’re asking the right questions.”
Other operators wanted guidance because they were “curious” about possible use cases for GenAI large language models.
“This is traditionally a very risk-averse community, and so to have the conversation [about the principles document] be driven by operators asking for it, rather than this being a vendor-driven conversation, I found quite surprising,” Rogers said.
The final document provides a non-exhaustive list of 10 risks of introducing AI into OT systems, ranging from the cybersecurity of the software itself to a lack of explainability that can impede incident analysis, along with possible mitigations for each.
The document also urges operators to understand the quality and value of their data. “We very frequently see operators undermining the value of their own data – outside of manufacturing, where, of course, people see the value of their intellectual property,” said Rogers.
In addition to CISA, the principles were endorsed by the NSA, the FBI, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the German Federal Office for Information Security, the Netherlands National Cyber Security Centre, the New Zealand National Cyber Security Centre and the UK National Cyber Security Centre.
