Agentic AI
,
AI-Driven Security Operations
,
Artificial Intelligence & Machine Learning
Opaque Decision-Making, Lack of Guardrails, Poor Auditability are Risks

The dream of replacing burned-out SOC analysts with autonomous AI agents is as premature as it is persistent.
See Also: On Demand | Global Incident Response Report 2025
Agentic artificial intelligence is a class of systems that can independently reason, decide and act across a series of tasks, moving beyond traditional automation to behave more like autonomous coworkers. Cybersecurity leaders are finding that deploying such tools inside security operations centers may do less to eliminate toil than to shift it.
“Agentic AI, especially, is still being developed and is not ready for primetime in any field, let alone security,” said Allie Mellen, principal analyst at Forrester. “If the AI agent provides a wrong judgment on an alert, it will lead to a missed attack. The more incorrect outputs from the AI agent, the more mental toll on the analyst – it will basically become a new version of a false positive, just for triage and response.”
Rather than reducing workload, some agentic AI implementations have simply changed its shape, from triage to close monitoring. “It’s a shift, not a reduction,” said Chad Cragle, CISO at Deepwatch. “Security teams still have to validate every decision the agent makes. You’re not eliminating the human, just inserting a layer of abstraction between the analyst and the task.”
That abstraction layer introduces new risks. Large language models, which underpin many agentic systems, excel at generating responses that sound authoritative – even when they’re wrong.
“LLMs are trained to predict what ‘sounds right’ based on patterns in their training data, not to fact-check,” said Ophir Dror, co-founder of agent-based security startup Lasso. They’ll generate plausible-sounding responses even when the underlying data is ambiguous or conflicting, he added.
In a recent GitHub Copilot study, nearly 20% of AI-suggested software libraries didn’t exist – they were hallucinated by the model. In cybersecurity environments, Dror said, similar hallucinations can result in missed detections, invalid indicators of compromise or incorrect remediation steps. Consistent hallucinations can create blind spots in the detection pipeline.
Many organizations are deploying agents without a clear understanding of their memory, autonomy or access boundaries, Dror warned. Some developers spin up agents that quietly live inside internal tools or SaaS products – agents that learn from inputs but aren’t rigorously governed. This opens the door to memory poisoning, where an attacker injects bad information that the agent stores and reuses later, he said.
He believes the safest use of autonomous agents in a SOC is when they have read-only access. “Assistants are less risky,” he said. “The more ‘write’ actions the agents have, the riskier the deployment becomes,” he said, because they’re interacting with production systems, not just information.
Mellen echoed the sentiment. “The most responsible use case for AI agents in the SOC today is as a triage agent,” she said, surfacing what matters most and presenting analysts with summaries or recommended actions.
Dror added that trust in agentic AI depends on how the agent is deployed. It hinges on what kinds of decisions the agent is empowered to make and whether appropriate safeguards are in place.
The non-deterministic nature of large language models complicates reproducibility and transparency, both essential for testing and auditing in security operations.
Security guardrails often lag behind. Dror said that some agentic AI systems are deployed without encryption, identity validation or even basic logging, controls that would be mandatory for any human analyst.
Experts warn that when AI systems act autonomously, explainability gaps can complicate forensics, compliance and liability efforts. And the marketing hype doesn’t help. “The biggest myth is that agentic AI will replace human analysts,” Cragle said.
Performance expectations can also be misleading. If an AI agent is underperforming, it might not be the model’s fault, Patrick Tiquet of Keeper Security said. Companies may need to look at the training data, the fine-tuning process, or even how they’re framing the problem.