API Security
,
Fraud Management & Cybercrime
,
Fraud Risk Management

LLM-powered applications are rapidly expanding the enterprise attack surface — but not in entirely new ways. At their core, these systems still rely on APIs.
What’s changed is how those APIs are used. LLMs and autonomous agents operate through chained API calls, creating high-volume, non-deterministic execution paths across cloud environments. This reduces visibility and weakens traditional control points.
As a result, AI security remains an API security problem — but with added complexity. Risks now include prompt injection, model misuse, shadow AI, and supply chain exposure, alongside challenges in managing data access, agent identity, and system behavior. Most legacy application security tools were not designed for this model.
In this session, you will learn:
- How threats such as prompt injection, model misuse, shadow AI, and supply chain attacks affect LLM-powered applications;
- Why limited visibility across AI-driven API interactions increases risk;
- How to apply API security principles to AI-native architectures;
- Approaches to improve discovery, testing, and protection of AI-enabled systems
