More Code, More Problems – and More Testing

When Anthropic unveiled Claude Code Security late last month, investors were quick to punish traditional cybersecurity vendors. But the victims of that upset, like Palo Alto Networks and CrowdStrike, have since seen their share prices largely recover. And analysts say the impact of Anthropic’s new service will likely be more nuanced than indicated by early reactions.
Claude Code Security scans code for vulnerabilities and suggests patches, which the human operator can then choose to implement or not. It is not a standalone product but a feature of Claude Code, a tool that functions both as a coding assistant – widely seen as leading that field, particularly over the last few months – and as an artificial intelligence agent that can run locally. For now, it remains in early preview (see: Why Claude Code Security Has Shaken the Cybersecurity Market).
Anthropic is pitching Claude Code Security as a superior form of automated security testing, compared with rules-based static analysis. “Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would,” it claimed in its Feb. 20 announcement.
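To illustrate the distinction Anthropic is drawing, consider a toy sketch (not any vendor's actual engine): a rules-based scanner keys on a surface pattern, so the very same SQL-injection flaw slips past it once the tainted value is routed through a helper function – the kind of cross-function data flow that requires reasoning about the code rather than matching it. The snippets and the regex rule below are hypothetical, for illustration only.

```python
import re

# Case 1: tainted input concatenated directly into the query.
DIRECT = '''
def find_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
'''

# Case 2: the identical flaw, one function call removed.
INDIRECT = '''
def build_query(name):
    return "SELECT * FROM users WHERE name = '" + name + "'"

def find_user(cursor, name):
    cursor.execute(build_query(name))
'''

# A crude hypothetical rule: flag any execute() call whose argument
# involves string concatenation on the same line.
RULE = re.compile(r"execute\([^)\n]*\+")

def rules_based_scan(source: str) -> bool:
    return bool(RULE.search(source))

print(rules_based_scan(DIRECT))    # True  - the surface pattern fires
print(rules_based_scan(INDIRECT))  # False - the flaw survives, undetected
```

Catching the second case means tracing the tainted `name` parameter through `build_query` into `execute` – data-flow reasoning rather than pattern lookup, which is the capability Anthropic claims for its tool.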
Human bug-finders shouldn’t fear for their jobs just yet, the analysts suggest. “CISOs would much rather reallocate headcount rather than eliminate it, so if it does cause disruption it will be in an attempt to reallocate staff to higher level responsibilities and less tactical work,” said Jeff Pollard, principal analyst at Forrester.
Duncan Brown, an IDC group vice president and European security research head, argued that services like Claude Code Security could have a positive impact on the industry by tackling rising numbers of vulnerabilities, without hitting either jobs or cybersecurity companies very hard.
“There is a role for better testing. It is undervalued massively, it is dull and there are very few specialists – the vendors that make software-testing applications and services companies that would do it as a third-party service,” he said. “So at face value, there is absolutely a case for automating this as much as possible. And companies have been trying to do this for a long time, but it does seem to be a good use case for AI.”
“Tools like Claude Code Security come along and they potentially change the game because they’re able to accelerate the rate at which you are able to test, and that gets you closer to that point where you think ‘okay, I trust the code,'” Brown said. “That’s the real test, if we see those [vulnerability numbers] declining over the next two to three years.”
Because the testing market isn’t “big enough or mature enough,” it is unlikely that the advent of Claude Code Security will do away with pure-play testing vendors because “there are always going to be use cases where you say, ‘I just need to check that what Claude has done is what I think it’s done,'” Brown added.
The cybersecurity companies that took a temporary hit following Claude Code Security’s announcement were largely publicly traded players that have made big investments in application testing over recent years. As for the more pure-play vendors, the release triggered a flurry of blog posts that attempted to play down the threat of Anthropic’s tool.
Veracode said in a post that Claude Code Security represented “a meaningful advance in how developers can get security insights earlier in the development process,” but was “not a replacement for a comprehensive application security program” because it doesn’t offer continuous scanning and governance, could not operate as “an enterprise level security policy-enforcement platform,” and could not produce deterministic, “compliance-ready results.”
Checkmarx, also an application security testing firm, argued that its Developer Assist tool remained worthwhile because it offers features that Claude does not, such as spotting infrastructure-as-code misconfigurations and providing validation that “security fixes don’t introduce regressions, break dependencies, or disrupt the build.”
Brown thinks it is unlikely that companies will now spend more on the market; if anything, they will see whether the new tool – being just a feature of Claude Code, which generally costs developers around $100 to $200 a month – can cut costs a little. In Pollard’s view, too, “the near-term disruption is budgetary,” though it is unlikely to have a major impact.
“If an enterprise has a tool that only performs static analysis, this is a threat to that tool. But few tools only perform static analysis anymore. Instead, these tools incorporate developer workflows, pipelines, governance and more,” Pollard said. “Claude Code Security does not encroach upon those areas in any capacity – at least so far.”
As such, Pollard said, Anthropic’s new offering won’t have any impact on SOC workflows that deal more with operational tasks around detection, investigation and response. “The market reacted to the economics and bundling threat, not because most SOCs can deploy this and change operations next week,” he said.
Pollard suggested the new capability was largely intended to “overcome adoption hurdles related to secure code generation and trust in code generation capabilities for Claude Code,” rather than being a signal that Anthropic was “greedily eyeing” a static application security testing market that is, after all, “relatively small compared to the addressable market for Claude Code and generative AI in general.”
This tracks with the argument put forth by Checkmarx portfolio marketing chief Eran Kinsbruner in that company’s take on the rollout of Claude Code Security. “In an era where code is increasingly written by AI assistants,” he wrote, “velocity and scale have increased, risk has compounded and exposure scales faster than remediation. Claude’s announcement acknowledges this reality. And that’s a positive step forward.”
