Agentic AI, AI-Based Attacks, Cybercrime
A Disorienting Future: Rapid Pace of Change and AI Agents in the Hands of Attackers

Reflecting the current state of cybersecurity, uncertainty dominated at this year’s annual RSAC Conference in San Francisco.
The theme of this year’s event – “the power of community” – sounded a hopeful note for the role of people over tools and technology. But advances in artificial intelligence, including agentic AI, are posing risks experts weren’t forecasting even 12 months ago.
This is, to put it mildly, a disorienting state of affairs for all involved, especially in a world that once enjoyed more predictability, whether through Moore’s Law or the mathematical underpinnings of cryptography.
“We’re at a point now where I have no idea what the world’s going to look like in two or three years, and trying to predict what bets need to be made now on technology to be able to defend against these uncertain threats” remains an open question, Paul Kocher, an independent cryptography and data security researcher who co-authored the SSL/TLS protocol, said in an interview at the conference (see: Why Cybersecurity’s Uncertainty Problem Is Getting Worse).
He asked: “Will AI be able to find zero-days at a rate that we can’t defend against them with our current techniques?” While cryptographic systems might stay secure, could the wraparounds – such as key management – fall in unforeseen ways to AI-driven attacks? What impact will “dramatic cuts” in the U.S. government’s research funding have on cybersecurity and cryptography? Will quantum computing hardware advance more quickly than expected, or perhaps run into fundamental limits that dramatically curtail its power?
“So, I really don’t have a good insight as to what things are going to look like, even just 1,000 days from now,” Kocher said.
More than a little cause for concern comes from the rise of agentic AI.
Speaking during the RSAC Cryptographers’ Panel that Kocher moderated on March 24, Adi Shamir, the “S” in the RSA cryptosystem, pointed to “the explosive proliferation in agents,” adding that he’s “totally terrified by what’s going on,” not least because of how much data these agents might be able to access. He urged caution (see: RSAC Cryptographers’ Panel Highlights AI Defense Challenges).
For practitioners, one repeat refrain at the conference seemed to be: Embrace AI, but with care. “Adopt AI fast, but not too fast, because I think it can lead to uncontrolled usage in your organization,” said Pieter Danhieux, CEO of Secure Code Warrior, riffing on JP Morgan Chase CISO Pat Opet’s pivotal 2025 open letter to suppliers.
“So, adopt it with the right speed, but make sure you’re doing it in a way that is controlled,” Danhieux told me.
Keeping control is proving to be a challenge.
“It’s going to get much worse. Just look at generated code, right? I mean, pull requests are getting bigger. The vulnerability mix is changing. It’s not going down. How do we deal with that? How do we let people safely generate code from prompts?” said Daniel Kennedy, principal research analyst at 451 Research, part of S&P Global Market Intelligence.
As a solution, he offered a brake metaphor. “A lot of people think brakes are for stopping cars. Brakes allow you to operate faster, and so this entire AI governance field that’s developing is going to allow us to safely operate AI in all its forms and draw the benefits from it, and that’s really what the entire show floor is about,” he said.
Referencing Peter Parker – “with great power comes great responsibility” – Devon Bryan, global CSO for online travel giant Booking Holdings, likewise emphasized the importance of governance and having a human in the loop. The industry needs to safeguard the power granted to AI agents as organizations pursue the “machine speed” benefits they offer. “It’s absolutely necessary to have guardrails in place around what that agent is allowed to do,” but “when it comes to critical decision-making and the exercising of judgment, that’s where we need that carbon-based life form in the loop for those kinds of situations,” Bryan said.
Uncertainty also comes in the form of the latest attack techniques, sometimes thanks to large language models. In fact, NightDragon CEO Dave DeWalt asserts that cybersecurity has entered a “dark period” in which attackers will be using AI tools to greater advantage than defenders for the foreseeable future.
“I’ve been impressed by the aggressiveness and the ability of LLMs to actually personalize their attacks,” said Cynthia Dwork, a computer science professor at Harvard University who’s one of the inventors of differential privacy and proof-of-work, on the Cryptographers’ Panel.
AI tools are now helping cybercriminals in “finding information about you in order to blackmail you,” she said.
