Trag’s Developer-Centric Tools Help Aikido Slash Time to Market by 12 Months

Aikido Security purchased a code quality startup led by an Armenian serial entrepreneur to enable more natural and intuitive methods for code analysis and review.
The Ghent, Belgium-based company’s acquisition of Trag was driven by the realization that AI has revolutionized code quality and made traditional pattern-matching systems obsolete, according to CEO Willem Delbare. Trag’s approach allows rules to be written in plain language and evaluates code the way a developer would, assessing logic, readability, and performance beyond just security flaws, Delbare said.
“A lot of companies are being rebuilt from the ground up in a way more AI-native way,” Delbare said. “I think code security is one of the places where AI has the most devastating impact. Our competitors essentially spent 20 years making hundreds of rules per language, resulting in 20,000 patterns [that] have to be checked to see if a piece of code is not great. But those 20,000 rules do not catch everything.”
Trag, founded in 2024, employs four people and has been led since inception by Khachatur Virabyan, who served as a front-end engineer at Teamable prior to its acquisition by Talkdesk, co-founded design system startup Stylebit, and taught introductory computer science at the American University of Armenia in Yerevan (see: AutoCodeRover Deal to Drive Sonar’s AI-Powered DevOps Growth).
How Trag Accelerated Aikido’s Time to Market by 1 Year
Code review tools traditionally relied on massive libraries of pattern-matching rules to flag potential vulnerabilities or bad practices in code, but Delbare said these systems were rigid, hard to maintain, and plagued by false positives and negatives. AI replaces those thousands of rules with a drastically smaller set defined in plain language, written by non-experts and requiring no technical encoding.
“Right now with AI, it’s essentially reduced to maybe 50 rules instead of 20,000, and those rules are written in English, instead of these hard-to-read or hard-to-write patterns,” Delbare told Information Security Media Group. “So it is one of those areas where we had to adopt very, very fast.”
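As a rough illustration of the shift Delbare describes, and not Aikido’s or Trag’s actual code, the two approaches might be contrasted in Python as follows; ask_llm() is a hypothetical stand-in for whichever model provider is plugged in.

    import re

    # Traditional approach: one of thousands of hand-maintained patterns.
    # A regex only flags what it literally matches, so variants slip through
    # (false negatives) and look-alike safe code gets flagged (false positives).
    SQLI_PATTERN = re.compile(r'execute\(\s*["\'].*["\']\s*\+\s*\w+')

    def pattern_rule(source: str) -> list[str]:
        if SQLI_PATTERN.search(source):
            return ["possible SQL injection via string concatenation"]
        return []

    # AI-native approach: the rules are plain English and evaluated by a model.
    PLAIN_LANGUAGE_RULES = [
        "Flag any database query built by concatenating user input into SQL.",
        "Flag functions whose names do not describe what they actually do.",
        "Flag dead or unused test code left behind in the change.",
    ]

    def ask_llm(prompt: str) -> str:
        # Hypothetical placeholder: swap in whichever model provider is in use.
        raise NotImplementedError

    def ai_rule_review(diff: str) -> str:
        prompt = (
            "Review this diff like a senior developer. Apply these rules and "
            "report only violations worth fixing:\n- "
            + "\n- ".join(PLAIN_LANGUAGE_RULES)
            + "\n\nDiff:\n" + diff
        )
        return ask_llm(prompt)

The point Delbare makes is that the second list can be extended by someone writing a sentence in English, while the first approach needs someone able to read and maintain the pattern.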
Rather than build its own solution from scratch, Delbare said, Aikido opted to acquire Trag, which had already spent a year building AI-native code review capabilities. What made Trag stand out was its sophisticated use of repository-wide knowledge and agentic behavior, with AI mimicking how a human reviewer would navigate and understand code across multiple files, not just the ones that changed.
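In broad strokes, the repository-wide, agentic behavior Delbare points to could resemble a reviewer that follows references out of the changed files before forming an opinion. The sketch below is purely illustrative, with a naive import-following heuristic standing in for that cross-file navigation.

    from pathlib import Path

    def gather_context(repo_root: Path, changed_files: list[Path],
                       max_files: int = 10) -> dict[str, str]:
        """Collect the changed files plus files they import, roughly the way a
        human reviewer follows references across a repository before judging a
        change. Illustrative only: real resolution would use language tooling."""
        context: dict[str, str] = {}
        queue = list(changed_files)
        while queue and len(context) < max_files:
            path = queue.pop(0)
            if str(path) in context or not path.exists():
                continue
            source = path.read_text(encoding="utf-8", errors="ignore")
            context[str(path)] = source
            # Naive import-following for Python files, a stand-in for the
            # cross-file navigation described above.
            for line in source.splitlines():
                if line.startswith(("import ", "from ")):
                    module = line.split()[1].split(".")[0]
                    candidate = repo_root / f"{module}.py"
                    if candidate.exists():
                        queue.append(candidate)
        return context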
“It accelerated our time to market probably by 12 months,” Delbare said. “So that means we can essentially go after deals that are in our sales pipeline. We can compete with them right now – instead of next year – with a product that is both cheaper and better.”
Buying Trag means Aikido’s customers can now activate base code quality checks with a single click and even define their own rules in natural language, he said. Rather than building its own large language models, Aikido chose to create robust benchmarks to evaluate model behavior, which Delbare said allows the company to switch from OpenAI to Anthropic to another provider with minimal disruption.
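A benchmark harness along those lines might, in simplified form, score any candidate provider against a fixed set of review cases. Everything below, from the case format to the scoring, is a hypothetical sketch rather than Aikido’s actual tooling.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class BenchmarkCase:
        code: str                     # input snippet handed to the reviewer
        expected_findings: list[str]  # phrases a good review should surface

    def score_provider(review: Callable[[str], str],
                       cases: list[BenchmarkCase]) -> float:
        """Fraction of expected findings a provider's review output mentions.
        Hypothetical harness: the cases and thresholds would be Aikido's own."""
        hits = 0
        total = 0
        for case in cases:
            output = review(case.code).lower()
            for finding in case.expected_findings:
                total += 1
                hits += finding.lower() in output
        return hits / total if total else 0.0

    # Switching providers then means passing a different `review` callable,
    # say one backed by OpenAI and one by Anthropic, and comparing scores.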
“The next goalpost is probably scaling it, because the main bottleneck in these kind of AI applications is literally asking Amazon for access to more GPUs so you can serve all your customers,” Delbare said.
Why AI-Native Code Reviews Go Beyond Security Issues
Delbare said AI-native code review goes beyond identifying security issues like SQL injection to offer feedback on logic bugs, readability, naming conventions, and even subjective elements like code comments. This shift broadens the scope of code review beyond security into overall code quality, he said, since AI can now critique stylistic and structural choices that influence how easy code is to understand and modify.
“It sort of behaves like a senior developer would, where they would coach a more junior developer to say, ‘Hey, maybe you should name this function a little more different, so that in the future, it will be more clear what it was supposed to do,’” Delbare said. “I think that’s a good example.”
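One way to picture that broader scope is a single finding type in which security is just one category among several. The structure below is illustrative, not a description of Trag’s output format.

    from dataclasses import dataclass
    from enum import Enum

    class FindingCategory(Enum):
        SECURITY = "security"        # e.g. SQL injection
        LOGIC = "logic"              # e.g. off-by-one or unreachable branch
        READABILITY = "readability"  # e.g. unclear structure or comments
        NAMING = "naming"            # e.g. function name that hides intent

    @dataclass
    class ReviewFinding:
        category: FindingCategory
        file: str
        line: int
        message: str  # the plain-language comment shown to the developer

    # A rename suggestion and an injection warning share one shape, which is
    # what lets a single review pass cover quality and security together.
    example = ReviewFinding(
        FindingCategory.NAMING, "billing.py", 42,
        "Consider renaming process() so future readers know what it does.")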
Delbare said Aikido differentiates itself not through proprietary AI models, but through its ability to ship new features rapidly and respond closely to customer needs. Aikido’s edge lies in its size and speed, Delbare said, with lean product development processes similar to those of Stripe, and far removed from what he describes as the slower, more bureaucratic workflows of enterprise giants like Visa.
“At the end of the day, everyone can probably use the same models from Anthropic or OpenAI,” Delbare said. “So in theory, everyone has the same tools. So it’s about speed of iterating on your product, talking to your users, seeing what they actually want, and then reacting to that.”
If a developer receives a comment on their code and then modifies it in response, that is evidence the comment was helpful, relevant, and actionable; if the feedback is ignored, the tool has failed to add value, Delbare said. Trag’s AI-native capabilities expand the range and relevance of comments to flag unused test code, logic risks, and bad naming conventions, making them more likely to be acted on, he said.
“The key metric that you want is the amount of comments where developers agree that it was actually worth fixing,” Delbare said. “So it’s a comment where you see that the next action of the developer is to make a little change that makes the comment go away. So if there is a comment that we place on a piece of code and they don’t care and they move on, then that’s kind of a failure.”
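Measured concretely, that metric reduces to a simple ratio: of all comments placed, how many were followed by a code change that resolved them. A minimal sketch, assuming each comment record carries a flag derived from later diffs:

    def actioned_comment_rate(comments: list[dict]) -> float:
        """Share of review comments the developer acted on, i.e. the flagged
        code changed afterward so the comment 'goes away'. The
        resolved_by_code_change flag is hypothetical; real tooling would
        derive it from the repository's later history."""
        if not comments:
            return 0.0
        acted_on = sum(1 for c in comments if c.get("resolved_by_code_change"))
        return acted_on / len(comments)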
