As Agentic AI Takes Over Workflows, Traditional Authentication Practices Fall Short

The explosion of agentic artificial intelligence systems and autonomous bots orchestrating cross-system tasks is turning multifactor authentication into a brittle defense. Non-human identities often bypass human-centric security controls, operating with static credentials and undefined ownership, creating exploitable identity risks.
Security frameworks may have evolved to recognize non-human agents, but traditional access tools have fallen behind. Experts warn that continuing to rely on MFA as a universal fix undermines even the strongest zero trust strategies.
Traditional MFA is designed around human behavior and hinges on something you know, something you have or something you are. “Bots operate without an interface,” said Reuben Athaide, global head of cybersecurity assessment and testing with Standard Chartered in Singapore. “They execute tasks programmatically, with no human in the loop to tap approve on a push notification.”
In fact, service accounts often bypass MFA altogether, instead relying on static, long-lived credentials. These credentials persist quietly in infrastructure and are often undocumented. Over time, they become a risk that enterprises are often afraid to fix.
Rajdeep Ghosh, chief technology officer with pharmaceutical company Dr Reddy’s Laboratories, said the problem arises because of the way organizations treat the bots. “We treat bots as technical artifacts, not identities. That mindset leads to static credentials and implicit trust, dangerous in today’s zero trust world.”
Governance challenges of non-human identities go beyond authentication. Non-human identities, unlike their human counterparts, do not leave when a project ends or an employee quits. Without lifecycle policies such as expiry, ownership or de-provisioning, bots can persist indefinitely, often with elevated privileges.
“Privilege creep is real,” Ghosh said. For example, “a bot initially created to process invoices might eventually gain database read access or customer PII permissions without formal review.” In highly regulated sectors including healthcare and finance, an orphaned bot poses not just a security concern but a compliance nightmare.
“Without tagging, attestation, or metadata enforcement, they become invisible attack vectors,” Athaide said.
The solution? Treat bots like first-class citizens. Every service account must have an owner, a purpose and a defined scope. Access should be role- or attribute-based, never static. De-provisioning should be tied to events such as project closure or lack of activity. And all of this, experts said, must be codified through infrastructure-as-code and automated pipelines.
Rather than retrofitting human-centric MFA into machine workflows, the industry should move toward automation-native alternatives, Athaide said. “This includes machine-native identity models, where authentication is built around workload context, cryptographic trust and runtime signals – and not push notifications or OTPs,” he said.
Shakeel Khan, regional vice president and country manager at Okta India, said AI agents are increasingly connecting across applications, automating tasks and accessing sensitive enterprise data. “We need centralized identity layers that enforce short-lived, context-aware access tokens governed by enterprise policies,” he said. This vision is being realized through solutions such as Cross App Access and Auth for GenAI, which enable agent-to-agent authentication across services such as Gmail and Slack.
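The short-lived, scoped tokens Khan describes can be illustrated with a minimal sketch. This is not Okta's implementation; it is a toy HMAC-signed token with a five-minute expiry and explicit scopes, and the signing key here is a hardcoded placeholder that a real system would keep in a managed secret store:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # placeholder; use a managed key in practice

def mint_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived, scope-bound token for an AI agent."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def authorize(token: str, required_scope: str) -> bool:
    """Verify signature, expiry and scope before granting access."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and required_scope in claims["scopes"]
```

The point of the design is that access expires and is scope-checked on every call, so a leaked token is useful for minutes, not years, unlike a static service-account credential.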
Approaches such as workload identity federation, seen in models including AWS IAM Roles Anywhere and Azure Managed Identity, anchor identity to runtime context rather than static credentials. Complementary technologies such as mutual TLS, SPIFFE and dynamic secret rotation ensure secure authentication without human intervention. “Frictionless doesn’t mean insecure,” Athaide said. “The goal is to shift from interactive friction to automated, policy-bound trust.”
Experts also bet on behavior analytics and identity threat detection, continuously evaluating whether a bot’s activity aligns with expected patterns.
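A behavior-analytics check of this kind can be as simple as comparing a bot's actions and activity rate against a learned baseline. The sketch below uses hypothetical action names and an assumed per-minute rate limit purely for illustration:

```python
class BotBehaviorMonitor:
    """Flag bot activity that deviates from an expected baseline."""

    def __init__(self, baseline_actions: set[str], max_rate_per_min: int):
        self.baseline = baseline_actions     # actions the bot is known to perform
        self.max_rate = max_rate_per_min     # expected activity ceiling
        self.window: list[float] = []        # timestamps in the last 60 seconds

    def check(self, action: str, ts: float) -> list[str]:
        """Return alerts for out-of-pattern actions or activity bursts."""
        alerts = []
        if action not in self.baseline:
            alerts.append(f"unexpected action: {action}")
        self.window = [t for t in self.window if ts - t < 60.0] + [ts]
        if len(self.window) > self.max_rate:
            alerts.append("rate anomaly: burst exceeds baseline")
        return alerts
```

In production this would feed an identity threat detection pipeline, so a drifting or compromised bot loses access rather than merely generating a log line.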
Dev Wijewardane, field CTO at WSO2, warned that the fight is not only humans vs. bots but also good bots vs. bad bots, and normal bot behavior vs. anomalous bot behavior.
“For shared bots, it is vital to ensure role isolation is maintained and a bot acting for Department A isn’t accidentally or maliciously performing actions for Department B,” Wijewardane said. Maintaining strict role isolation is critical along with having unique identifiers per bot instance, strict credential rotation and logging every action, he said.
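Wijewardane's three controls for shared bots, role isolation, unique per-instance identifiers and logging every action, can be sketched in a single authorization gate. The dictionary fields and department names below are hypothetical:

```python
AUDIT_LOG: list[str] = []  # every decision is recorded, allow or deny

def is_allowed(bot_instance: dict, department: str, action: str) -> bool:
    """Deny any action outside this bot instance's own department and scope."""
    permitted = (bot_instance["department"] == department
                 and action in bot_instance["allowed_actions"])
    # Log against the unique per-instance identifier, not the shared bot name.
    AUDIT_LOG.append(
        f"{bot_instance['instance_id']} dept={department} "
        f"action={action} {'ALLOW' if permitted else 'DENY'}"
    )
    return permitted
```

Because the check keys on the instance identifier and department, a bot provisioned for Department A cannot act for Department B even if both run the same shared code.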
Looking Ahead: Multi-Assertion Authentication
Experts say multi-assertion authentication – granting trust through cryptographic attestation, behavioral analytics and real-time policy decisions – is the future for managing non-human identities. Under this approach, bots will have to prove every time that they deserve the access they have.
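The multi-assertion model reduces to a conjunction: no single factor grants trust, and the decision is re-evaluated on every request. A minimal sketch, with an assumed behavior-score threshold of 0.8, makes the logic explicit:

```python
def grant_access(attestation_valid: bool,
                 behavior_score: float,
                 policy_allows: bool,
                 score_threshold: float = 0.8) -> bool:
    """Grant access only when every assertion holds for this request:
    cryptographic attestation, behavioral fit and real-time policy."""
    return (attestation_valid
            and behavior_score >= score_threshold
            and policy_allows)
```

Failing any one assertion, a revoked attestation, anomalous behavior or a policy change, denies the request, which is how a bot "proves every time" that it deserves its access.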
As enterprises scale AI and automation, clinging to human-centric identity models will only deepen risk exposure. The future lies in zero trust frameworks where bots are treated not as artifacts, but as governed identities, Wijewardane said.
“Bots must be governed like privileged human identities with complete audit trails, automated de-provisioning and granular access controls,” Khan said.