Guidance Warns Autonomous Systems Expand Enterprise Exposure

Federal cybersecurity officials and international partners are warning that the rapid adoption of autonomous agentic artificial intelligence systems is introducing a new class of security risks that could outpace existing defenses if left unchecked.
The Cybersecurity and Infrastructure Security Agency, the National Security Agency and allied cyber agencies across the U.K., Canada, Australia and New Zealand published new joint guidance outlining how agentic AI systems – which can independently plan, reason and take action across enterprise environments – are already expanding attack surfaces, complicating oversight and amplifying the consequences of security failures. Because agentic AI systems rely heavily on external tools, APIs and third-party components, the guidance warns that each integration point introduces additional exposure – and potential security threats.
The guidance comes as government agencies and critical infrastructure operators increasingly experiment with agent-based AI to automate operational workflows, from IT management to procurement and customer support, often with minimal human intervention. Analysts told ISMG that AI agents can significantly amplify identity risks, and that human oversight remains critical to ensure autonomous actions stay aligned with policy, intent and operational reality in higher-risk workflows.
“When a risk arises, it’s critical for responders to have operational visibility to understand what the agent did, why it did it and what the user intended,” said John Harmon, regional vice president of cyber solutions at Elastic and former global network analyst for the National Security Agency. Harmon told ISMG integrating agentic AI requires “an evolution in security,” including tighter access boundaries, better telemetry and controls that evaluate actions in context.
“Federal organizations that have stronger visibility into agent behavior will better understand what the agent did, what tools it used, what it was asked to do and what happened as a result,” he added.
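Harmon's point about visibility lends itself to a concrete illustration. The following Python sketch, with hypothetical field names not drawn from the guidance, emits one structured telemetry record per agent tool call so responders can later reconstruct what the agent did, what it was asked to do and what resulted:

    import json
    import time
    import uuid

    def log_agent_action(agent_id, tool, arguments, user_request, outcome):
        # One record per tool invocation: which agent acted, what it did,
        # what the user asked for and how it turned out.
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "agent_id": agent_id,
            "tool": tool,
            "arguments": arguments,
            "user_request": user_request,
            "outcome": outcome,
        }
        # Printed here for brevity; in practice this record would be
        # shipped to a SIEM or log pipeline that responders can query.
        print(json.dumps(record))

Records like these supply the trail Harmon describes: the user's request sits alongside the agent's actual tool use, so deviations between the two stand out.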
Unlike traditional generative AI systems that produce outputs for human review, agentic AI systems are designed to execute tasks autonomously by integrating large language models with external tools, data sources and system-level permissions. The guidance notes agentic AI systems “can automate repetitive, well-defined and low-risk tasks” but warns that those same capabilities can introduce “productivity losses, service disruption, privacy breaches or cyber security incidents” if not properly secured.
One of the biggest risks organizations currently face in adopting agentic AI is the lack of visibility into agentic workflows and intent at runtime, according to Elad Schulman, CEO and co-founder of the GenAI security platform Lasso Security.
“We don’t have visibility into how agent workflows evolve over time, and whether the underlying intent stays consistent,” Schulman told ISMG. “Without visibility into intent and the ability to detect deviations from that baseline, organizations are effectively blind to the most critical risks in autonomous systems.”
The guidance also details identity risks unique to agentic environments, where attackers can impersonate agents or steal credentials to operate within trusted workflows. Because many monitoring systems are tuned to detect anomalous behavior rather than identity misuse, those attacks can evade detection until after damage is done.
Experts told ISMG that governance should move from static review to continuous oversight with built-in guardrails and rollback capabilities. Agentic AI systems require continuous, contextual governance where identity, access and behavior are evaluated in real time, according to Travis Rosiek, public sector chief technology officer for Rubrik.
“Agentic AI acts like a deputy with full system access, making it a kind of ultimate insider threat,” Rosiek told ISMG, adding that least privilege should be strictly enforced across both human and non-human identities. “Inherited trust relationships can unintentionally grant agents broad access, increasing the risk that agents attempt to bypass controls if permissions are too expansive or poorly scoped.”
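One way to enforce the scoping Rosiek describes is a default-deny tool allowlist per agent, so that no agent silently inherits its operator's broader permissions. The Python sketch below is a hypothetical illustration; the agent and tool names are invented:

    # Each agent gets an explicit set of tools; nothing is inherited.
    AGENT_PERMISSIONS = {
        "ticket-triage-agent": {"read_ticket", "add_comment"},
        "patch-report-agent": {"read_inventory"},
    }

    def authorize(agent_id: str, tool: str) -> bool:
        # Default-deny: unknown agents and unlisted tools are refused,
        # so broad access is never granted implicitly.
        return tool in AGENT_PERMISSIONS.get(agent_id, set())

    assert authorize("ticket-triage-agent", "add_comment")
    assert not authorize("ticket-triage-agent", "delete_ticket")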
The guidance urges organizations to treat agentic AI as part of their broader cybersecurity architecture rather than a standalone technology. That includes applying established principles such as least privilege, zero trust and defense in depth across the full lifecycle of agent deployment.
Officials recommend starting with low-risk, non-sensitive use cases, restricting agent permissions to the minimum required and implementing continuous monitoring of agent behavior, tool usage and decision-making processes. The report also calls for stronger identity controls, including cryptographic authentication for agent-to-system interactions, as well as segmentation strategies to contain failures and limit lateral movement between agents.
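The report does not prescribe a particular mechanism for agent-to-system authentication. One plausible realization, sketched below in Python purely as an assumption, is a per-agent HMAC over each request, with a timestamp check to reject stale or replayed messages:

    import hashlib
    import hmac
    import time

    def sign_request(agent_key: bytes, agent_id: str, payload: bytes) -> dict:
        # Bind the payload to this agent's secret key and the current time.
        ts = str(int(time.time())).encode()
        mac = hmac.new(agent_key, ts + b"." + payload, hashlib.sha256).hexdigest()
        return {"agent_id": agent_id, "timestamp": ts.decode(),
                "payload": payload, "signature": mac}

    def verify_request(agent_key: bytes, msg: dict, max_skew: int = 300) -> bool:
        # Reject requests that are too old, to limit replay.
        if abs(time.time() - int(msg["timestamp"])) > max_skew:
            return False
        ts = msg["timestamp"].encode()
        expected = hmac.new(agent_key, ts + b"." + msg["payload"],
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, msg["signature"])

A receiving system that verifies the signature before acting knows the request came from the named agent rather than an attacker impersonating it inside a trusted workflow.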
Human oversight remains a central control, particularly for high-impact actions. The guidance advises requiring human approval for sensitive operations, inserting checkpoints into agent workflows and maintaining the ability to interrupt or reverse agent actions in real time.
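In code, such a checkpoint can be as simple as a wrapper that pauses before tools designated high-impact and requires explicit human sign-off, while low-risk actions proceed autonomously. This sketch is illustrative rather than the agencies' reference design, and the tool names are invented:

    HIGH_IMPACT_TOOLS = {"delete_records", "transfer_funds", "modify_firewall"}

    def execute_with_checkpoint(tool, action, approve):
        # `action` performs the operation; `approve` is any callable that
        # asks a human and returns True or False (chat prompt, ticket, etc.).
        if tool in HIGH_IMPACT_TOOLS and not approve(tool):
            return "blocked: human approval denied"
        return action()

    # Example: route approval through a console prompt.
    result = execute_with_checkpoint(
        "modify_firewall",
        action=lambda: "firewall rule applied",
        approve=lambda t: input(f"Approve {t}? [y/N] ").strip().lower() == "y",
    )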
The agencies warn that existing governance models designed for human users may not translate cleanly to autonomous systems, creating gaps in accountability, auditing and compliance as agentic AI takes on more operational roles.
“Organizations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly,” the report says.
