3rd Party Risk Management, Agentic AI, Artificial Intelligence & Machine Learning
Keyrock CISO David Cass on Managing Agentic AI Risk in Financial Services
Financial institutions have long operated under the principles of safety and soundness, but as agentic artificial intelligence moves into production environments, those principles are being tested in ways regulators and security teams weren't prepared for.
“You can outsource anything as a business decision, but at the end of the day, you own the risk,” said David Cass, CISO at Keyrock and adjunct faculty at Harvard Extension School.
Cass said organizations must treat AI governance as a live, ongoing function, not as a committee that meets once a year. After all, “you can’t blame the AI from a regulatory point of view,” he said.
But tracking how AI is embedded across systems and vendors is difficult, pushing organizations to move toward attribute-based access control to limit the blast radius of a compromise, Cass said.
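The attribute-based access control Cass describes can be illustrated with a minimal sketch: instead of granting access by role alone, each decision checks attributes of the subject, the resource and the request context, so a compromised credential only reaches the narrow slice of resources its attributes permit. The attribute names and policy rules below are hypothetical, purely for illustration.

```python
# Minimal ABAC sketch: access is granted only when every policy rule
# over subject, resource and context attributes passes. The specific
# attributes ("department", "clearance", "channel") are assumptions,
# not drawn from any particular product or framework.

def abac_allow(subject: dict, resource: dict, context: dict) -> bool:
    """Return True only if all policy rules are satisfied."""
    rules = [
        # Subject must belong to the department that owns the resource.
        subject.get("department") == resource.get("owner_department"),
        # Subject's clearance must meet the resource's sensitivity level.
        subject.get("clearance", 0) >= resource.get("sensitivity", 0),
        # Request must arrive over an approved channel.
        context.get("channel") == "internal",
    ]
    return all(rules)

# A stolen low-clearance credential cannot reach a sensitive resource,
# even within its own department -- limiting the blast radius.
abac_allow(
    {"department": "trading", "clearance": 1},
    {"owner_department": "trading", "sensitivity": 3},
    {"channel": "internal"},
)  # denied: clearance below sensitivity
```

Because every rule must pass, losing one credential exposes only resources matching all of that credential's attributes, which is the "blast radius" limit Cass refers to.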
In this video interview with Information Security Media Group at RSAC Conference 2026, Cass also discussed:
- Why asset inventory must now include third-party embedded AI and the libraries those systems share;
- How trust and transparency define what CISOs should demand from AI security startups;
- Why regulations will always lag deployment and why safety and soundness principles must fill the gap.
Cass has more than 20 years of experience in risk management, incident response, information security and disaster recovery. He previously held the CISO role at IBM, Elsevier and GSR. He is also the president of CISOs Connect, where he leads the company's peer engagement efforts, and serves as adjunct faculty for the master's degree program in cybersecurity at Harvard Extension School.

