Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Deal Targets GenAI Risks, Prompt Injection Attacks and Autonomous Agents

Proofpoint purchased an artificial intelligence security startup led by a former Palo Alto Networks vice president to better understand the intent behind prompts and AI responses.
See Also: Proof of Concept: Bot or Buyer? Identity Crisis in Retail
The Silicon Valley-based cybersecurity and compliance vendor said its proposed acquisition of Silicon Valley-based Acuvity will help organizations classify AI intent effectively, including adversarial behavior and oversharing of data, said Proofpoint Chief Strategy Officer Ryan Kalember. He said Acuvity offers protection across architectural control points, including browsers, endpoints and agents.
“There are so many startups that are trying to solve this problem, and we wanted to look basically at who had built the control points correctly and who had truly built differentiation and understanding of intent,” Kalember told Information Security Media Group. “Because intent-based access, I think, is inevitably going to be the future here.”
Acuvity, founded in 2023, employs 24 people and emerged from stealth in September 2024 with a $9 million seed funding round led by Foundation Capital. The company has been led since its inception by Satyam Sinha, who previously co-founded microsegmentation startup Aporeto, sold it to Palo Alto Networks for $144.1 million and spent nearly four years as Prisma Cloud’s vice president of engineering (see: AI Agents Are Set to Redefine Security Operations).
How to Understand What a Prompt Is Trying to Accomplish
Kalember said the core AI security challenge is understanding what a prompt is trying to accomplish, whether a response aligns with user behavior and policy, and whether an action deviates from expected norms. Acuvity stood out because it was not merely labeling prompts but instead monitoring prompts, responses and agent behaviors in a fundamentally new way, Kalember said.
“Once we realized that Acuvity had built something that would fit with our approach to protect people working with AI, as well as in the future extend to AI working autonomously, that really heated things up,” Kalember said.
Rather than relying solely on browser extensions or network telemetry, Kalember said Acuvity built visibility into endpoint-based AI usage and created mechanisms to protect agents directly. The most significant architectural innovation, in his view, is the ability to deploy a daemon alongside autonomous agents that effectively lives with the agent and communicates with a centralized control plane, he said.
“They had a future-proofed architecture that wasn’t beholden to any of the conventions of the past,” Kalember said. “It’s not SASE, it’s not EDR-based. It’s something completely new.”
Proofpoint tested Acuvity to validate whether it could understand when a prompt exceeded what a user should be requesting, including testing for prompt injection attempts, adversarial instructions, oversharing of sensitive data and behavioral deviations from normal usage patterns. Acuvity’s technical investment and ability to adapt as AI threats inevitably morph and evolve impressed Kalember.
“We really, really dug into, ‘Does it actually understand when a prompt exceeds what that user should be asking of that AI service?'” Kalember said. “Can it identify adversarial intent, like if somebody is saying, ‘Ignore all previous instructions and do something malicious?’ Does it understand the intent behind things like that?'”
Why Controls Must Be Deployed Alongside the Agent
Agents don’t sit on a corporate laptop and may operate in cloud environments orchestrating multiple tools, meaning that controls must be deployed alongside the agent itself, Kalember said. Production agents can sometimes be more predictable than humans since many are single-purpose and operate within narrow behavioral parameters, making it easier for intent models to establish baselines, he said.
“You don’t necessarily see them doing a million different things, at least for the ones that we’ve seen actually in production and not experimental,” Kalember said. “It actually gives us a better baseline of context and behavior so that the intent models can almost narrow in a little bit. People are more complex.”
Acuvity positions itself between prompts and responses, recording and analyzing both directions of communication and providing visibility into how users and agents interact with those models, Kalember said. As regulated industries begin deploying AI, firms will need records explaining what AI systems did and why, with retrospective forensic analysis needed for compliance, legal or incident response.
“It’s not built into AI tools by default,” Kalember said. “That’s just not how they work. Because they can write code, but they’re a very different kind of software than what came before. A community has to basically sit in between the prompts and their responses and be able to record all of that.”
While Acuity brings novel AI-specific capabilities, Kalember said its strengths complement Proofpoint’s existing investments in data loss prevention, autonomous data classification and data governance. Acuity wasn’t focused on preventative data governance such as classifying sensitive data or enforcing least privilege access to repositories like SharePoint or Databricks, aligning with Proofpoint’s strengths.
“We think AI security is just a different thing than what came before, and we’d be doing it a disservice if we tried to collapse it into our current thinking,” Kalember said.
