AI-Developed Attack Tooling Generated ‘High-Volume, Noisy Workflows’

An unidentified hacker used Claude and ChatGPT in a cyberattack against a municipal water and sewage utility's operational technology systems in Mexico in January, according to forensic analysis by OT security firm Dragos.
The generative AI tools helped the attacker identify a possible gateway to the utility's OT systems, highlight its significance as a "crown jewel" asset, and design an ultimately unsuccessful effort to penetrate it, explained report author Jay Deen, associate principal adversary hunter at Dragos.
The AI tooling Dragos analyzed "leveraged known techniques and existing vulnerability knowledge to enumerate systems and services and attempt exploitation," Deen told ISMG.
Servicios de Agua y Drenaje de Monterrey was one of nine government entities in Mexico breached by the attacker between December 2025 and February 2026. The campaign was first reported last month by threat intelligence researchers at Gambit Security, based on a trove of digital artifacts they recovered from several virtual servers used by the attacker – a rare real-world example of the much-feared but often over-hyped AI-powered cyberattack campaign.
This is the first time OT security specialists have examined evidence demonstrating in detail both the possibilities and the limitations of AI-assisted hacking against OT.
Significantly, Dragos researchers concluded that the attacker seemed focused on data theft until Claude found an OT interface on the utility’s network, and singled it out as a possible target, Deen said.
“The adversary showed no sign of intent to target or disrupt OT prior to Claude identifying OT infrastructure within the [network] environment,” Deen said. The infrastructure was a vNode industrial gateway – a management interface for web-based monitoring and control of industrial processes. The gateway serves as a data integration layer between OT systems and enterprise IT environments.
Once Claude highlighted the vNode as “a high-value critical asset,” the attacker instructed it to go ahead with assessment and targeting activities. Claude devised an unsuccessful password spray attack, and after it failed, the attacker went back to looking for data to steal, eventually gaining access to more than 8,000 procurement, vendor and bidding records.
Notably, the password spray attack failed even though it used a specially compiled credential list that combined default credentials, victim- and environment-specific naming conventions, and reused credentials harvested during the broader set of attacks against other government systems in the region. That suggests good password hygiene on the targeted system. Moreover, the report notes, even a successful attack would not necessarily have given the attacker access to the OT system if the vNode were properly configured.
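The report describes the credential list as a merge of three sources: vendor defaults, guesses derived from the victim's naming conventions, and credentials reused from earlier breaches. As a rough illustration of that composition step only, here is a minimal Python sketch; every username, password, and naming convention in it is invented for the example, not recovered from the actual attack artifacts.

```python
from itertools import product

# All values below are hypothetical examples, not real artifacts.
DEFAULT_CREDS = [("admin", "admin"), ("operator", "operator")]

# Guesses built from an assumed org naming convention (invented prefix).
ORG_PREFIX = "util"
ROLES = ["admin", "scada", "ot"]
convention_users = [f"{ORG_PREFIX}_{role}" for role in ROLES]
common_passwords = ["Password1", "Winter2026"]

# Credentials reused from hypothetical earlier breaches elsewhere.
harvested = [("jperez", "Verano2025!")]

def build_spray_list():
    """Merge the three sources into one deduplicated (user, password) list,
    preserving the order in which sources are tried."""
    candidates = []
    candidates.extend(DEFAULT_CREDS)
    candidates.extend(product(convention_users, common_passwords))
    candidates.extend(harvested)
    seen, ordered = set(), []
    for pair in candidates:
        if pair not in seen:          # drop duplicates, keep first occurrence
            seen.add(pair)
            ordered.append(pair)
    return ordered
```

The point of the sketch is how mechanical the step is: nothing here requires OT knowledge, which matches Dragos' conclusion that the AI tooling applied known techniques rather than novel capability.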
“Common vNode deployment use cases feature a ‘store & forward’ architecture,” in which the OT interface communicates with the IT network only through a segmented “de-militarized zone,” states the report.
Experts said the findings underlined the effectiveness of basic security controls and good cyber hygiene, even against attackers armed with the latest AI tools.
“The encouraging takeaway is … the value of layered defenses and sound engineering practices,” said Marcus Sachs, senior vice president and chief engineer at the Center for Internet Security.
Organizations needed to see past marketing hype, he added. They “do not need advanced AI-enabled defenses to meaningfully reduce risk. What we often describe as ‘reasonable security’ or consistent application of well-established safeguards, remains highly effective even as adversaries adopt more advanced tools.”
“The challenge now is to ensure those protections are consistently applied across the thousands of utilities that make up the nation’s critical infrastructure,” Sachs said.
Dragos researchers concluded the OpenAI and Anthropic tools didn't provide any novel capabilities, but they enabled an attacker with no OT-specific skills or experience, who had breached the enterprise IT network, to identify and attack OT systems, and they dramatically compressed the timeline from IT intrusion to OT attack.
“AI supported rapid environmental analysis, identification of an OT-adjacent environment, development and refinement of intrusion tooling, and generation of a viable access path towards the IT-OT boundary using known techniques and publicly available tradecraft,” states the report.
“The broader takeaway is less about autonomous AI-driven attacks and more about how AI-assisted workflows can accelerate an adversary’s understanding of environments and improve visibility into OT-adjacent networks,” Deen added.
Dragos said it released the reporting to help temper public reaction to AI-enabled hacking, which has so far been driven by often groundless fears about autonomous cyberattack campaigns.
Its analysis, and Gambit Security's previous reporting, show that Claude and ChatGPT were in this case sometimes unwilling tools that helped the attacker automate certain steps in the attack chain. The AI models provided tooling that the attacker was able to iteratively refine as they gained more knowledge of the environment.
But Dragos also found that the AI-developed tooling wasn’t very good and would likely only succeed in the absence of basic security measures: “Its operational use would likely generate high-volume, noisy workflows in which only a subset of functions would succeed when exposed assets or weak security controls were present,” states the report.
