Accountability Is Key as Enterprises Adopt AI at Scale, Says Saviynt’s Jim Routh
Artificial intelligence governance must balance innovation with security, said Jim Routh, chief trust officer at Saviynt. Organizations should adopt a flexible, consensus-driven approach to ensure responsible AI deployment while addressing risks such as data exposure and gaps in software resilience.
With AI-generated code and software-as-a-service applications expanding the enterprise attack surface, Routh said organizations must focus on securing software development and managing the risks associated with generative AI.
“A result of using gen AI the way we are has tremendous opportunity for us to build quality software at a lower cost, which is frankly intoxicating, but at the end of the day, it comes back to that accountability. [The] software engineer or architect has to be accountable for the resilience of what they produce,” Routh said.
In this video interview with Information Security Media Group at RSAC Conference 2025, Routh also discussed:
- Ensuring accountability in AI development;
- Identity management in AI-driven systems;
- The most overlooked risk when deploying generative AI and how a governance framework can mitigate it.
Routh is an accomplished digital and cybersecurity expert with extensive experience in business and technology. He currently serves on the boards of several organizations, including Supply Wisdom, GrammaTech, Savvy, Accountable Digital Identity Association and the Global Resiliency Federation. Additionally, he is a former board chair of the Health Information Sharing and Analysis Center and a former board member of the Financial Services Information Sharing and Analysis Center.

