Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Security First Framework Approach Focuses on Isolating Untrusted Inputs

Chatbots’ popularity has been tempered from the start by the prospect of prompt injection attacks – malicious actors hiding commands inside user inputs or documents, tricking models into carrying out unintended actions. Early demonstrations showed how easy it was to slip rogue directives into a language model’s context window – or its short term memory – causing it to drop guard rails and obey hidden orders. This insecurity is one of the reasons holding back the integration of LLMs into critical workflows such as email management, banking and scheduling, where a single misinterpreted phrase can become an existential threat (see: Meta Prompt Guard Is Vulnerable to Prompt Injection Attacks).
See Also: Securing Data in the AI Era
Efforts to head off prompt injection attacks have followed a familiar pattern: train another AI to spot and filter out injections, or wrap more layers of machine learning oversight around the core model. But probabilistic detection always leaves gaps.
A framework from Google’s DeepMind team aims to give AI systems the software engineering armor they’ve lacked. CaMeL, short for Capabilities for Machine learning, aims to take an alternative approach. Instead of granting an LLM full trust over every token it ingests, DeepMind fractures inputs into distinct, sandboxed components. It borrows from decades of software security doctrine, such as control flow integrity, access control and information flow control, and layers these principles atop the AI stack.
CaMeL splits responsibilities across two language models: Privileged LLM, or P LLM, a “planner” that only processes direct user instructions. It outputs code written in a locked down subset of Python, defining precise steps, such as call a function to fetch an email, parse content or send a message. Crucially, it never sees raw user data. Then there is the Quarantined LLM, or Q LLM, a “reader” that operates in isolation. It ingests unstructured content, such as email bodies or webpage text and converts it into structured values like extracting an email address. The Q LLM cannot invoke tools, write code or retain state. It toggles a simple Boolean flag to signal whether it has gathered enough information, preventing inadvertent data leakage back to the planner.
By chaining these modules through a secure Python interpreter, CaMeL tracks the provenance of every variable. If the system attempts to use data flagged as untrusted – like injecting a parsed email address directly into a send mail function – the interpreter’s data flow policies can block the action or prompt for explicit confirmation.
Independent AI researcher Simon Willison called CaMeL the “first credible prompt injection mitigation I’ve seen that doesn’t just throw more AI at the problem and instead leans on tried and proven concepts from security engineering, like capabilities and data flow analysis.”
In application security, “99% detection is a failing grade,” Willison said. “The job of an adversarial attacker is to find the 1% of attacks that get through. If we protected against SQL injection or XSS using methods that fail 1% of the time our systems would be hacked to pieces in moments.”
Web developers once battled SQL injection attacks by adding more detection layers. They ultimately won by changing the architecture. Prepared statements and parameterized queries rendered injection tactics obsolete. CaMeL aims to apply that same lesson to LLMs. It doesn’t rely on sniffing out every malicious snippet; it segregates untrusted inputs so they cannot act until they pass through clearly defined security checkpoints. This capability based architecture enforces the principle of least privilege, with each component gaining only the narrow access it needs.
DeepMind evaluated CaMeL against AgentDojo, a benchmark suite that simulates real world AI agent tasks alongside adversarial attacks. The results reportedly show high utility in routine operations, such as parsing emails and scheduling reminders, while resiliently fending off injection exploits that have flummoxed earlier defenses.
CaMeL “effectively solves the AgentDojo benchmark while providing strong guarantees against unintended actions and data exfiltration,” the DeepMind researchers said.
Beyond injection mitigation, the team argues CaMeL’s approach can bolster defenses against insider threats and malicious automation. By treating security as a data flow problem rather than a cat and mouse detection game, the framework could prevent unauthorized exports of sensitive files or stop rogue scripts from exfiltrating private data.
CaMeL marks a significant conceptual advance, but it comes with some tradeoffs.
It shifts some complexity onto users and administrators, who must codify security policies and maintain them over time. Too many confirmation prompts risk habituating users to click “yes” reflexively, eroding the very safeguards they’re meant to enforce. As Willison has discussed since coining the term “prompt injection” in September 2022, the core vulnerability stems from mixing trusted and untrusted text in one processing stream, a design flaw still unresolved by monolithic LLMs.