Experts Say AI Is Already Enabling Faster and Harder-to-Detect Attack Campaigns

Network defenders are struggling to keep pace as new technologies are poised to give foreign adversaries and criminal hacking groups new advantages.
Artificial intelligence, quantum computing and cybersecurity leaders told the House Homeland Security Committee’s cyber and oversight subcommittees Wednesday that AI-enabled automation and advanced emerging tech capabilities are becoming persistent forces reshaping the threat landscape.
Legacy systems, outdated defenses and slow-moving security processes are being exposed to a new generation of attack vectors. Royal Hansen, Google’s vice president of privacy, safety and security engineering, said adversaries are already experimenting with AI-driven malware that can dynamically alter its behavior during execution, marking a shift toward more autonomous and adaptive attacks that are harder to detect and contain.
Google’s threat intelligence group has identified a shift over the past year in adversaries “experimenting with novel AI-enabled malware in active operations,” Hansen told the joint committee hearing. Researchers “have identified malware families that use LLMs to generate malicious scripts, obfuscate their own code to evade detection and use AI models to create malicious functions on demand, rather than hard-coding them into the malware.”
Threat actors have launched what researchers describe as the first significantly advanced, AI-enabled attack campaigns, using automated systems to scale spear-phishing, reconnaissance and adaptive malware beyond traditional human-driven operations. Industry reporting over the past year has documented a sharp rise in AI-powered exploits across cloud and SaaS environments that abuse identity systems and stolen credentials at machine speed, allowing attackers to bypass conventional defenses and evade detection with greater consistency.
Anthropic’s head of frontier red-teaming, Logan Graham, described how his team recently disrupted what it believes was the first documented case of a highly autonomous, AI-orchestrated cyberespionage campaign linked to a Chinese state-backed group (see: AI Tool Ran Bulk of Cyberattack, Anthropic Says).
Graham said frontier AI models were misused to automate large portions of reconnaissance, vulnerability scanning and exploitation across dozens of targets, effectively allowing a single human operator to direct the work equivalent of a coordinated hacking team (see: Event Horizon for Vibe Hacking Draws Closer, Anthropic Warns).
“A sophisticated, well-resourced threat actor – one willing to go to great lengths to circumvent AI model safeguards and deceive the AI model about its true intentions – can now extract meaningful operational value from frontier AI models,” Graham said.
While the campaign relied on known techniques rather than novel exploits, Graham said the use of AI dramatically increased the speed and scale of execution, speeding up timelines in ways that left defenders with less opportunity to detect lateral movement or stop data exfiltration before damage occurred. He warned the episode should be treated as an early indicator of how capable threat actors may seek to weaponize increasingly autonomous models despite safeguards.
As automation lowers the barriers to entry, experts told lawmakers that capabilities once limited to advanced nation-state actors are becoming accessible to a much wider pool of adversaries, including financially motivated criminal groups and ideologically driven actors. Michael Coates, a former chief information security officer and founding partner at Seven Hill Ventures, said AI advances are collapsing the cost, skill and time typically needed to conduct complex cyber operations.
Coates warned that organizations' incident response times are measured in days or weeks, even as AI-enabled attacks unfold in hours or minutes. That mismatch, he said, disproportionately impacts smaller entities and critical services that lack large security teams or highly automated defenses, widening the gap between well-resourced attackers and overstretched defenders.
Quantum computing will introduce another source of potential instability in the years ahead, panelists said, particularly for systems built on cryptographic assumptions that may not hold over the next decade. Eddy Zervigon, CEO of Quantum XChange, warned that adversaries are already harvesting encrypted data today with the expectation it can be decrypted later once quantum capabilities mature.
“What happens when an algorithm breaks – because it is a when, not an if?” Zervigon said. “Every agency CIO, enterprise CISO, security vendor and network gear manufacturer must be able to answer that question.”
Zervigon urged lawmakers to treat quantum preparedness as an immediate infrastructure and architectural challenge, arguing that simply swapping algorithms without addressing broader network design will leave critical systems exposed. Agencies that delay action, he warned, may risk compounding future damage by allowing sensitive data to accumulate beyond the point where it can be protected retroactively.
