CISOs Seek Real Value as Vendors Tout the Latest Batch of AI-Driven Solutions

Artificial intelligence has touched nearly every facet of cybersecurity, at least in vendor narratives. From threat detection to identity management, AI is now embedded into almost every product pitch, including those aimed at zero trust implementation. As the conversation shifts from generative to agentic AI, it’s clear this emerging technology holds tremendous potential to ease zero trust fatigue, but only when guided by business context, quality data and human oversight.
In response to the AI hype, security leaders are cautious. They say AI offers a "basket of opportunities," but they also see "vendor blind spots" and areas where "improvements are needed."
Zero trust is not an easy framework to implement. Common organizational challenges include granular access control, least privilege enforcement and microsegmentation.
Early Stage Success
AI tools are already proving to be helpful assistants in the early stages of zero trust, especially in assessments and policy planning. Rob LaMagna-Reiter, CISO at insurer WoodmenLife, pointed to the foundational role of data. "It all goes back to understanding and being honest with yourself about your current posture, identifying areas where AI can actually add value. AI relies quite heavily on data and quality data," he said.
The early focus, LaMagna-Reiter said, is on enabling teams to make faster, data-driven decisions without disrupting business operations. But “manually jotting down your high-level, low-level diagrams and understanding the protective surface” is a prerequisite before plugging AI into the enforcement layer.
Another early use case is anomaly detection. Billy Norwood, CISO at pharmaceutical wholesaler FFF Enterprises, is in the early stages of zero trust implementation, and his team is using AI to identify unusual behavior patterns across humans and systems.
The low-hanging fruit of AI in zero trust today lies in accelerating manual processes. Agentic AI is being used for internal tasks such as enforcing identity lifecycle policies and minimizing dwell time.
Access entitlement reviews also have seen early gains, with AI helping to establish baselines and segment networks more intelligently.
Still, there is a long road ahead for AI, at least for zero trust implementations. “I rate it 4 out of 10, because right now we are not seeing it do anything more than simplifying the amount of data crunching. It is not actually helping or making decisions on our behalf,” Norwood said.
Beyond process acceleration, experts see AI as a tool to reduce both security team and end-user fatigue.
“Zero trust is a journey, not a one-time project,” said Bala Ramanan, director, risk and compliance at IT services firm Microland. “The end goal is an AI-enabled zero trust environment that can prevent breaches and elevate security posture automatically, without any human intervention.”
Amruta Gawde, director of cybersecurity at GE Aerospace, agreed. She sees a key role for AI in managing scale and improving user experience for identity and access management. "By employing AI-driven, just-in-time access, we can reduce the number of accounts the users need to request or maintain without compromising security."
Seeing the Blind Spots
While most vendors claim to have embedded AI in their solutions, including those for zero trust, these products still have key gaps in securing enterprises, such as unmanaged devices and outside contractors.
For example, Norwood said many AI solutions fail with unmanaged contractor devices and rapidly changing personnel, as AI tools struggle without consistent visibility.
“Vendors could improve by offering policy-based profiling for such cases,” he said. By the same token, he said, SaaS platform vendors don’t provide granular insights into user activity and permissions. “Vendors need to enhance support for visibility into transient users and better monitor SaaS environments to truly deliver on the AI promise in zero trust. The models behind the scenes are so different in the quality of responses, AI systems can’t quite handle a lot of that ambiguity yet.”
Part of the problem is cutting through the vendor hype. Microland’s Ramanan said “high promises and under-delivering” is the major issue. “Most of them are still selling point solutions. Point solutions can help you to some extent. But will it catapult you forward in the journey of AI and zero trust? Not really,” he said.
For example, most vendors still work on static risk models, while technologies today need to identify anomalies and then take suitable action. "We need technology to tackle dynamic risk scenarios. This is possible only if AI is part of everything we have and in everything we do," he said.
It’s time to start cutting back on the hype, said George Finney, CISO at The University of Texas System, adding that many products simply add LLMs for basic tasks, offering "little real advancements."
As regulatory clarity improves and vendor capabilities mature, the ability of AI technology to elevate both security and experience will only grow.
“AI is both enemy and friend,” Ramanan said. “But remember one thing: The battlefields may change, but the battles won’t.”