Lowering Machine Identity Risks in AI, ML and Bot Workflows

Imagine a city in the near future where self-driving cars navigate seamlessly, smart buildings adjust climate controls based on weather conditions and occupancy, and various robots autonomously perform countless tasks. The city runs smoothly because these machines, each with its own unique digital identity, constantly communicate with one another.
Now, picture the same city without adequate security measures: machine identities hijacked, rogue vehicles causing chaos, buildings locking out workers and robots wandering into restricted areas. This scenario makes apparent the double-edged sword of artificial intelligence and machine learning.
Part 1 of this blog highlighted the importance of proactively managing machine identities across organizations. In Part 2, we will focus on the machine identities used in AI, ML and bot workflows. While these workflows help deliver efficiencies across organizations, they can also introduce identity security vulnerabilities that organizations should address long before futuristic smart cities become a reality.
AI’s Paradoxical Situation
AI and ML workflows are already transforming business operations. From inventory management and order fulfillment to workflow management and software development, AI and ML increasingly enhance efficiency – sometimes to the point of complete automation.
But as AI adoption increases, so does the attack surface. Every machine, whether a robot assembling products in a warehouse or a software-driven service on the internet, requires a unique identity. Machine identities are multiplying exponentially as organizations deploy new applications, devices, tokens and scripts to automate tasks. Unfortunately, many commonly used AI and ML tools have proven vulnerable, and cybercriminals can quickly exploit those vulnerabilities. Even AI tools that manage machine identities may leave organizations at risk if they do not enforce the right policies.
Security Risks in AI, ML and Bot Workflows
AI, ML and bot workflows can introduce three common security risks that many organizations today are not yet prepared to mitigate.
- Over-permissioned identities: For decades, excessive user privileges have posed security risks, making it easier for threat actors to move through a network once an identity is compromised. The same issue can also impact machine identities automatically created for AI and ML workflows.
- Exploitable vulnerabilities: Popular online AI tools used to build AI and ML models can be at risk if appropriate security controls are not implemented or if malicious actors exploit a known or unknown vulnerability. The result can be data integrity issues, lost efficiency and data leaks, any of which can cause financial and reputational damage.
- AI misuse: Cybercriminals are leveraging AI and ML to crack passwords, impersonate users, detect vulnerabilities and automate attacks against unsuspecting organizations. This creates security challenges for organizations already struggling to stay ahead of evolving threats.
Leveraging AI to Strengthen Security
Despite the risks, AI-driven automation can play a pivotal role in enhancing machine identity security in bot workflows, even when ownership records and access inventories are incomplete or out of date. For example, AI models can detect anomalies in machine identity usage, a crucial capability considering that machine identities in use today outnumber human identities by 45:1.
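As a concrete illustration of that kind of anomaly detection, the short Python sketch below fits scikit-learn's IsolationForest to a baseline of machine identity usage and flags a day that deviates sharply from it. The features, numbers and account name are invented for the example; a production system would draw on far richer telemetry.

```python
# Minimal sketch: flagging anomalous machine identity usage with an
# isolation forest. All feature values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one day of a service account's usage profile:
# [requests per hour, fraction of off-hours activity, distinct resources touched]
baseline = np.array([
    [120, 0.05, 4],
    [115, 0.04, 5],
    [130, 0.06, 4],
    [118, 0.05, 3],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# The same account suddenly active at night across dozens of resources.
today = np.array([[480, 0.70, 37]])
if model.predict(today)[0] == -1:  # -1 marks an outlier
    print("Anomalous machine identity usage detected; escalate for review")
```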
AI tools help organizations detect unauthorized access, identify unusual behavior and find other patterns that may indicate a breach. They can also help organizations implement privilege-reduction strategies and enforce least privilege security controls both on-premises and in the cloud. Just-in-time access and zero standing privileges help reduce the attack surface of automated workflows by granting machine identities access to integral systems only for the duration required to complete an approved task. Once the task is complete, access is revoked, closing an important loophole in bot workflows that otherwise may be easily exploited.
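To make the just-in-time pattern concrete, here is a minimal Python sketch of a grant that exists only while an approved task runs and is revoked the instant it finishes. The names (just_in_time_access, ACTIVE_GRANTS, inventory-bot) are hypothetical illustrations, not any particular vendor's API.

```python
# Minimal sketch of just-in-time access with zero standing privileges.
# A machine identity gets a scoped, time-limited grant for one approved
# task; the grant is removed as soon as the task completes or fails.
import time
import uuid
from contextlib import contextmanager

ACTIVE_GRANTS = {}  # grant_id -> (identity, scope, expires_at)

@contextmanager
def just_in_time_access(identity: str, scope: str, ttl_seconds: int):
    grant_id = str(uuid.uuid4())
    ACTIVE_GRANTS[grant_id] = (identity, scope, time.time() + ttl_seconds)
    try:
        yield grant_id  # the bot performs its approved task here
    finally:
        ACTIVE_GRANTS.pop(grant_id, None)  # revoke: back to zero standing privileges

with just_in_time_access("inventory-bot", "read:warehouse-db", ttl_seconds=300) as grant:
    print(f"Task running under grant {grant}; active grants: {len(ACTIVE_GRANTS)}")
print(f"Task finished; active grants: {len(ACTIVE_GRANTS)}")
```

The `finally` block is the point of the pattern: revocation happens even if the task errors out, so no standing privilege survives the workflow.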
Industry-Leading Practices for AI and ML Security
Deploying automated tools to monitor and manage machine identities provides a significant security advantage. Frequent automated scans can review and adjust permissions based on actual use, far more quickly than manual reviews by IT teams. This is particularly crucial for detecting identities generated by bot workflows operating behind the scenes or in a remote cloud environment, a common scenario given how pervasive machine identities have become.
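A usage-based permission review of the sort described above can be sketched in a few lines of Python; the identities and permission strings below are hypothetical:

```python
# Minimal sketch of an automated permission review: compare what each
# machine identity was granted against what it actually used during the
# review window, and flag unused grants as candidates for removal.
granted = {
    "build-bot":  {"repo:read", "repo:write", "secrets:read", "deploy:prod"},
    "report-bot": {"db:read", "db:write", "mail:send"},
}
observed_usage = {
    "build-bot":  {"repo:read", "repo:write"},
    "report-bot": {"db:read", "mail:send"},
}

for identity, perms in sorted(granted.items()):
    unused = perms - observed_usage.get(identity, set())
    if unused:
        print(f"{identity}: unused permissions to revoke -> {sorted(unused)}")
```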
Solutions such as CyberArk’s Identity Security Intelligence enable organizations to detect and respond to suspicious activities automatically. This can help further mitigate risks associated with managing machine identities, as organizations can shift from constantly reacting to threats to developing proactive strategies to secure machine identities.
CyberArk is an identity management company that focuses on securing on-premises and cloud environments by automating the life cycle of digital identities and enforcing least privilege access. PwC collaborates with CyberArk to provide an array of professional services that help a wide range of organizations solve problems faster and maximize value. Together, CyberArk and PwC help organizations manage machine identities across on-premises and cloud environments, while also strengthening their defenses against rising cyberthreats.