CEO Nitay Milner Says Large Language Models Cut False Positives and Policy Sprawl

A data security startup led by Cisco's former cloud observability product leader raised $32 million to rethink DLP by leveraging large language models rather than traditional policy-based systems.
The Norwest-led Series A funding round will help New York-based Orion Security apply artificial intelligence to understand organizational context, user behavior and data sensitivity in real time, said co-founder and CEO Nitay Milner. He said Orion’s road map centers on protecting both humans and AI agents, expanding into data security posture management over time and helping companies adopt AI securely.
“We closed very large deals recently with a couple of Fortune 500 companies, which made us very visible and very interesting for VCs,” Milner told Information Security Media Group. “And once we got attention from customers because of our product and because of technology, we actually got a few proposals from VCs that want to lead the next round.”
Orion, founded in 2024, employs 34 people and emerged from stealth in March 2025 with $6 million of seed funding led by Pico Partners and FXP. The company has been led since its inception by Milner, who was a product manager at application monitoring startup Epsagon. After Cisco bought Epsagon for $500 million in October 2021, he became Cisco’s product leader for cloud observability until 2023.
Why a Policy-Based Approach to DLP No Longer Works
The company’s Series A round will support sustained growth across R&D, product innovation and go-to-market execution, Milner said, ensuring Orion can meet rising demand without compromising product quality or customer experience. Lead investor Norwest was chosen not just for capital, but for strategic alignment, network strength and operational support, Milner said.
“This is what we need to support expansion, both in R&D, in product, everything that relates to the technology that we’re building and also go-to-market,” Milner said. “So, anything around marketing and selling and supporting a better service for our customers.”
Policies must be written, tuned, updated, retired and rewritten as the business evolves, and in large enterprises, this maintenance effort alone can require a dedicated team of several people, Milner said. Policies also lack situational awareness and cannot reason about intent, context or business relevance, meaning they generate massive volumes of false positives, he said.
“Policies are deterministic, meaning it’s a one or a zero,” Milner said. “You give it a use case, it doesn’t have any context. It will block it or allow it, and the content doesn’t have context. And since it’s deterministic, it creates a lot of false positives.”
Instead of forcing customers to define thousands of granular policies, Milner said Orion uses AI to understand how data is actually used within an organization. Orion assumes that context and intent matter more than static rules, and the company’s platform is designed to behave like a human security analyst who understands the organization, its data and its workflows at machine scale, Milner said.
“There’s a couple of things that customers really don’t like,” Milner said. “One is that they have to create all these policies, and these policies can be in the thousands. The second thing is you need to refine them, because things change, compliance changes. And the third problem with it is that it creates a lot of false positives. And it’s known that in DLP tools today, around 90% of the alerts are false positives.”
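The determinism Milner describes can be illustrated with a minimal sketch (hypothetical rule and data, not Orion's product): a pattern-based DLP check returns a binary verdict on content alone, so a genuine leak and a harmless test fixture are indistinguishable.

```python
import re

# Hypothetical deterministic DLP rule: flag anything that looks like a
# payment card number, with no awareness of who is sending it or why.
CARD_PATTERN = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def policy_verdict(text: str) -> str:
    """Binary decision: block on pattern match, allow otherwise."""
    return "block" if CARD_PATTERN.search(text) else "allow"

# A real leak and a harmless QA fixture get the identical verdict --
# the rule cannot reason about intent, so the second is a false positive.
leak = "Customer card: 4111 1111 1111 1111"
fixture = "Use dummy card 4111-1111-1111-1111 in the staging test suite"

print(policy_verdict(leak))     # block
print(policy_verdict(fixture))  # block (false positive)
```

Context-aware approaches aim to break exactly this one-or-zero coupling between a content match and a decision.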
How Organizational Context Helps Determine Risky Behavior
Orion combined open-source components with internally developed models designed specifically for data security to understand organizational context rather than just recognize data patterns, Milner said. By observing how data is used internally – who accesses it, how it moves and in what contexts – Orion’s model develops an understanding of what constitutes normal versus risky behavior, Milner said.
“What this LLM is doing is basically learning the context of the company,” Milner said. “You can feed it and explain to it what is important for the company, what type of data and it learns how people use sensitive data in the organization. And without needing to define any policies, you can start to protect your data on day one.”
One immediate and widespread issue is employees unintentionally exfiltrating sensitive data into AI tools, often cutting and pasting proprietary information, customer data or unreleased content into tools such as ChatGPT. Looking ahead, AI agents embedded inside enterprises will send emails, manipulate data, generate documents and communicate with external systems at a scale no human can match, he said.
“If you’re a reporter working for news media, and you have an unpublished article that is super-secret and you’re uploading it to ChatGPT and asking to rewrite it, this is super risky for the company,” Milner said. “It’s a third-party, unmanaged AI application that now has your most sensitive data. And it’s not only files; copy-paste and prompts can contain customer data, PII, PCI. There has to be new solutions.”
Orion’s long-term vision involves analyzing indicators of data loss such as destination, recipient identity, competitive context and data sensitivity to determine whether an action should be allowed, he said. For example, if an AI agent attempts to send a customer list to a rival, Orion’s system would recognize both the nature of the data and the business relationship involved, and block the action in real time, he said.
“The only thing that can hold the scale of an LLM is another LLM,” Milner said. “So, this LLM should look at different indicators of data loss. The more context that it gets, the more understanding of the organization, the better decision it could make. It should get to a verdict in real time by the context and the data that it had, and basically fight AI with AI.”
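The indicator-weighing approach Milner outlines can be sketched as a toy risk scorer (the indicator names, weights and threshold below are illustrative assumptions, not Orion's model; in Orion's description an LLM, not fixed weights, would reason over these signals):

```python
from dataclasses import dataclass

@dataclass
class DataFlowEvent:
    """Hypothetical indicators of data loss: sensitivity, destination,
    and recipient context, loosely modeled on those Milner lists."""
    data_sensitivity: float       # 0.0 (public) .. 1.0 (most sensitive)
    destination_managed: bool     # is the destination a sanctioned system?
    recipient_is_competitor: bool

def verdict(event: DataFlowEvent, threshold: float = 0.7) -> str:
    """Combine contextual indicators into a risk score and a decision."""
    score = event.data_sensitivity
    if not event.destination_managed:
        score += 0.3  # unmanaged third-party destination raises risk
    if event.recipient_is_competitor:
        score += 0.5  # business relationship matters, not just content
    return "block" if min(score, 1.0) >= threshold else "allow"

# An AI agent emailing a customer list to a rival scores high on every axis.
risky = DataFlowEvent(data_sensitivity=0.9,
                      destination_managed=False,
                      recipient_is_competitor=True)
print(verdict(risky))  # block
```

The point of the sketch is the shape of the decision: the same customer list sent to a managed internal system would score below the threshold and be allowed.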
