Missing: Threat Models to Defend Against Attacks in the Age of Agentic AI

Artificial intelligence is rapidly reshaping cybersecurity in unforeseen ways, and how best to defend against it remains an unanswered question, panelists warned at the 35th annual Cryptographers’ Panel at RSAC Conference.
The rapid rise of AI agents stands as one of the most pivotal changes facing cyber defenders, Dawn Song, a professor at the University of California, Berkeley, who co-directs its Center for Responsible, Decentralized Intelligence, said during the Tuesday panel.
“These agents can now find zero-days and vulnerabilities in large-scale, open-source software,” she said. At the same time, such agents will be pivotal in helping to keep code development processes secure, with forecasters predicting that AI-enabled code development tools will generate up to 90% of all new code this year, Song said.
The panel, a fixture at the San Francisco event, tackled the challenge of defending against AI-enabled attacks, the use of differential privacy approaches to safeguard AI, how to implement cryptography inside deep neural networks, and ongoing key management challenges, including for quantum computing.
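As background on the differential privacy item: the technique protects individuals by adding calibrated noise to query answers, so that the presence or absence of any single record barely changes the output. Below is a minimal sketch of the Laplace mechanism in Python; the dataset, predicate and epsilon value are hypothetical, chosen purely to illustrate the mechanics rather than anything presented at the panel.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Epsilon-differentially-private count. A counting query changes by
    at most 1 when one record is added or removed (sensitivity 1), so
    Laplace noise with scale 1/epsilon suffices for epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many records in a sensitive dataset exceed 65?
ages = [70, 34, 81, 55, 67, 29, 72]
print(dp_count(ages, lambda age: age > 65, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; trading accuracy against protection is the core design choice when applying the approach to AI training or querying.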
As for the top challenges facing the industry, Adi Shamir, the “S” in the RSA cryptosystem, also pointed to “the explosive proliferation in agents,” adding that he’s “totally terrified by what’s going on.” Many such tools require extensive access to personal information, including files, calendars and more, and anecdotal evidence already abounds about how they can go wrong, including agents deleting treasured family photos, exposing private APIs or wiping production codebases.
Given such risks, “I think that the way you should think about agents is as very clever idiots,” he said.
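If agents are “very clever idiots,” one defense is to make sure an idiot can’t do much damage: grant each agent an explicit tool allowlist and a sandboxed filesystem scope, and refuse everything else. The Python sketch below is a hypothetical illustration of that least-privilege pattern; the tool names, sandbox path and policy are invented for the example, not drawn from any panelist’s system.

```python
from pathlib import Path

# Hypothetical policy: the only tools this agent may invoke, and the only
# directory tree it may touch. Destructive tools simply aren't granted.
ALLOWED_TOOLS = {"read_file", "list_dir"}
SANDBOX = Path("/tmp/agent-workspace").resolve()

def authorize(tool: str, target: str) -> None:
    """Raise PermissionError for any action outside the agent's grant."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not in the allowlist")
    path = Path(target).resolve()
    if path != SANDBOX and SANDBOX not in path.parents:
        raise PermissionError(f"{path} is outside the sandbox")

# Reading inside the sandbox passes; deleting the family photo archive
# is blocked twice over: the tool isn't granted, and the path is out of scope.
authorize("read_file", "/tmp/agent-workspace/notes.txt")
try:
    authorize("delete_file", "/home/user/Photos")
except PermissionError as err:
    print("blocked:", err)
```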
The question of whether the cryptosystems that safeguard society are safe in the age of AI loomed large. “Does AI pose a threat to cryptography?” asked panel moderator Paul Kocher, a coauthor of the SSL 3.0 protocol, the predecessor of TLS.
Cryptography “has been built on this idea of security being sort of anchored in hard mathematical problems,” whereas “with AI, the question of what we know and we don’t know is uncomfortably blurred in some ways,” he said.
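That anchoring is easy to see in RSA itself: the private key falls out immediately once the public modulus is factored, so the system’s security is exactly the assumed hardness of factoring. A toy Python example with deliberately tiny, insecure textbook numbers:

```python
# Toy RSA (insecure, illustration only): security rests on the assumption
# that factoring n = p * q is infeasible when p and q are large primes.
p, q = 61, 53            # real keys use primes hundreds of digits long
n = p * q                # 3233: the public modulus
phi = (p - 1) * (q - 1)  # 3120: computable only by factoring n
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # 2753: private exponent, e's inverse mod phi

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == message  # decrypt with the private key d
```

An attacker who could factor the 2,048-bit moduli used in practice would recover d the same way, which is why any AI advance against such hard problems would be seismic.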
Panelists said that whether AI will find previously undiscovered ways to break cryptosystems remains an open question. So far, no tool has found a new vulnerability; such tools have only repeated what already appears in the available literature.
“While it’s very exciting and promising, I must emphasize there’s not been any cryptographic success made by AI,” said Shamir, a professor of computer science at Israel’s Weizmann Institute.
With large language models continuing to improve, Cynthia Dwork, a computer science professor at Harvard University who is one of the inventors of differential privacy and proof-of-work, urged cryptography researchers to share their findings with the teams using AI to try to break cryptosystems.
By doing this in advance of publishing, perhaps a few weeks ahead of time, she said, researchers can help pinpoint the moment when AI becomes able to independently find previously unknown weaknesses in cryptographic systems. Panelists pointed to the AI competition around the protein folding problem, which yielded significant new discoveries.
While AI hasn’t yet proven that it can find flaws in existing cryptographic systems, there are already myriad cybersecurity repercussions, “and I think that we don’t have a clue yet what the right threat model is that we should be defending against,” Dwork said.
Beyond AI’s potential impact on cryptography, panelists highlighted the rising risk posed by AI’s ability to rapidly synthesize data from many different sources, as well as to quickly put that data to work.
“I’ve been impressed by the aggressiveness and the ability of LLMs to actually personalize their attacks,” including “finding information about you in order to blackmail you,” Dwork said.
AI tools could also be used for “huge-scale traffic analysis” in ways that have never been seen before, and which might have surveillance applications with privacy repercussions, she said.
Shamir likewise highlighted the speed with which AI helps attackers automate and operate.
“Spear-phishing will become much easier and at scale. You will be able to exploit a one-day vulnerability that’s just been announced, within minutes of the announcement, before everyone has managed to download the patch. I can go on and on in telling you all the bad things that will happen to our security as a result of the localization of AI,” he said.
While the speed of attacks increases, defenses often can’t keep pace. The average time required to patch a system in a healthcare setting is 500 days, Song said.
“This business of taking time to download the patch strikes me as a very fundamental threat,” said Whitfield Diffie, who’s best known for the Diffie–Hellman key exchange.
What’s in store? A majority of the four panelists assessed that AI gives attackers the edge, at least for now.
Diffie differed in his assessment, saying that while AI makes it cheap to attack systems and find ways to exploit them, “this is equally available to attackers and defenders” – so long as defenders choose to embrace this path to more proactively lock down their infrastructure.
With AI’s impact on cybersecurity continuing to unfold, Shamir told the audience that he still sees upsides.
“To end this discussion on a positive note, there will be attacking agents, there will be defending agents and you can go to the beach,” he said.
