Startup Simulates Offensive and Defensive AI to Test and Thwart AI-Based Threats

An AI security lab led by a former IBM AI researcher raised $80 million to develop test environments that mimic real-world attack and defense scenarios.
The Series A investment will primarily support hiring world-class talent, scaling compute resources and translating its research into deployable tools for model creators and enterprise adopters, said Dan Lahav, CEO of San Francisco-based Irregular. The company's simulations are used by top-tier AI labs such as OpenAI, Anthropic and DeepMind, and are also being adapted into enterprise-ready products.
“We have a high fidelity research platform that is being used by the top AI companies in the world that allows them to enter any model into our platform in order to run high-fidelity simulations on the model, both attacking the model and using the models to attack other actors,” Lahav said. “For example, can they evade detection by EDRs? For a lot of actors, gaming the visibility of model capabilities matters.”
Irregular – formerly Pattern Labs – was founded in 2023, employs 20 people, and tapped Sequoia Capital and Redpoint Ventures to lead its Series A round. The company has been led since its inception by Lahav, who spent five years as a researcher in IBM's AI Research division, served as a Tel Aviv University lecturer, and was the chief adjudicator of the 2021 World Universities Debating Championships in South Korea (see: Vega Secures $65M to Scale SecOps, Take On Traditional SIEMs).
From Research to Product
The company has built simulation environments where models can be tested both as attackers and as potential victims, replicating real-world scenarios such as lateral movement, EDR evasion and ransomware-like behavior. AI labs use Irregular's platform to assess their own models before deployment, while Irregular takes learnings from these simulations and turns them into next-generation defenses.
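To make the idea concrete, the sketch below shows roughly what a harness that lets a model act as the attacker inside a sandboxed environment could look like. It is purely illustrative: the class and function names are invented, the model call is stubbed out, and nothing here describes Irregular's actual platform.

```python
# Hypothetical sketch of an attack-simulation harness: a model proposes
# attacker actions against fake hosts, and the harness records what it reaches.
from dataclasses import dataclass, field

@dataclass
class SandboxHost:
    """A simulated network host the model-driven attacker can probe."""
    name: str
    open_ports: set = field(default_factory=set)
    compromised: bool = False

def model_propose_action(observation: str) -> str:
    """Stand-in for a call to the model under test; returns the next attacker step."""
    # A real harness would query the evaluated model here.
    return "scan" if "unknown" in observation else "move_lateral"

def run_simulation(hosts: list[SandboxHost], max_steps: int = 5) -> dict:
    """Let the model act as the attacker and log which hosts it compromises."""
    log = []
    for step in range(max_steps):
        frontier = next((h for h in hosts if not h.compromised), None)
        if frontier is None:
            break
        observation = f"unknown host {frontier.name}" if not frontier.open_ports else frontier.name
        action = model_propose_action(observation)
        if action == "scan":
            frontier.open_ports.add(22)      # simulated reconnaissance result
        elif action == "move_lateral":
            frontier.compromised = True      # simulated lateral movement
        log.append((step, frontier.name, action))
    return {"compromised": [h.name for h in hosts if h.compromised], "log": log}

if __name__ == "__main__":
    print(run_simulation([SandboxHost("web-1"), SandboxHost("db-1")]))
```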
“We need to now defend in a new paradigm that requires a research-led effort,” Lahav said. “Research-led efforts are expensive. It requires the best research minds in the field, both on the AI security side and computer science side. The pace of changes in AI now are so rapid and are happening across so many different points across the stack that it requires a very proactive and research-led approach.”
Part of the funding will be used to take Irregular’s internal research and translate it into deployable, scalable products, balancing the spirit of a lab with the product-oriented rigor required to support enterprise customers. The company seeks to build systems that can not only identify vulnerabilities but create the next generation of AI-native defenses, which requires a corresponding scale of investment.
“We actually have a few of the best cryptographers in the world and AI researchers in the world, and we want to have many more of these in order to ensure that we can work at the frontier all of the time,” Lahav said. “We’re intending with the money to build implementations of what we’ve done so far at the frontier, and greater versions of these that are going to be relevant to any deployer of AI in the world.”
On the offensive side, AI models are increasingly capable of performing sub-tasks in real-world attacks, but still struggle with more complex, multi-step operations that require persistence over time, CTO Omer Nevo said. Basic jailbreaks and prompt injection attacks on models also remain relatively trivial to execute, meaning models are improving as attackers faster than they are being secured, Nevo said.
How the Needs of Model Creators, Deployers Differ
Irregular’s simulations start with known cyberattack vectors such as lateral movement across a network or ransomware payloads, but instead of a human attacker, the simulations use an AI model or AI-assisted actor as the threat vector, Nevo said. These simulations reveal previously unknown behaviors and vulnerabilities that can then inform both offensive threat modeling and defensive design, he said.
“Attacking models currently, things are open and easy, and even things like finding new jailbreaks or prompt injections or techniques to get around guardrails is still something that is, to be honest, not very hard for non-experts to be able to do,” Nevo told Information Security Media Group.
Model creators need tools that can test the full spectrum of a model’s capabilities, since their concerns can include whether a model can solve math problems, generate creative text or evade antivirus, Lahav said. Model deployers are focused on narrower applications such as automating compliance workflows or summarizing patient records. These use cases are limited in scope, but the depth of scrutiny is greater.
“If you’re a model creator, you’re pushing models to the extreme, testing if they can leak data and evade AV detection, you need very robust monitoring software,” he said. “But that version is highly relevant to banks or hospitals adopting these models. Because if now you have AI agents getting more autonomy and are stochastic, then you need base versions that allow you to monitor what these models are doing.”
Irregular is preparing to commercialize its technology for broader use, with Lahav envisioning a version of the platform that can be adopted by any enterprise deploying AI, from hospitals to banks. It could help detect whether an internal AI agent is leaking data or violating protocol. The commercial versions of Irregular’s tools will offer monitoring and detection for users who aren’t building models from scratch.
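As a loose illustration of what such deployer-side monitoring could look like, the sketch below screens an agent's outgoing text against simple leak rules before it leaves the organization. The rule names and patterns are invented assumptions for the example, not a description of Irregular's product.

```python
# Illustrative runtime check: flag an AI agent's output if it appears to
# contain sensitive data, before the message is sent onward.
import re

LEAK_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US Social Security number format
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),   # generic secret-looking tokens
}

def check_agent_output(message: str) -> list[str]:
    """Return the names of any leak rules the agent's output trips."""
    return [name for name, pattern in LEAK_PATTERNS.items() if pattern.search(message)]

if __name__ == "__main__":
    draft = "Forwarding the record for patient 123-45-6789 to the vendor."
    violations = check_agent_output(draft)
    if violations:
        print(f"Blocked agent message; matched rules: {violations}")
    else:
        print("Message cleared.")
```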
“The same environments that allow you to assess whether models are capable of doing something which is problematic are the same environments that allow you to understand what the next generation of defenses should look like,” Lahav told Information Security Media Group. “We’re creating versions of these that are going to be relevant to any deployer in the world.”
