EU AI Regulation May Hold Implications for Powerful New Anthropic Model

Anthropic jolted the tech and policy worlds this week with its announcement of Claude Mythos Preview, an artificial intelligence model it plans to release only to technology vendors, so they can apply its powerful bug-finding and exploitation capabilities to their own products before attackers get the chance.
This limited-exposure program, called Project Glasswing, so far includes companies such as Apple, Microsoft and Cisco, plus 40 other organizations that “build or maintain critical software infrastructure,” Anthropic said, adding that it had also talked to the U.S. government about the model. But Europe’s leaders – who recently passed legislation that affects Anthropic’s strategy for risky systems such as this – are also taking a keen interest.
“We are currently assessing possible implications in light of EU policies and legislation,” European Commission spokesman Thomas Regnier told ISMG in an emailed statement. “We are also monitoring the security implications of this rapidly evolving technology – for both increasing our cyber defenses and possible misuse.”
Mythos Preview was Anthropic’s first model announcement since the company overhauled its “responsible scaling policy” in February, dropping a pre-existing pledge to stop training and avoid releasing models whose risks it cannot reliably mitigate. At the time, chief science officer Jared Kaplan told Time that it no longer made sense to hold back unilaterally “if competitors are blazing ahead.”
Even with that policy shift, and with no federal AI regulation to fear in the United States, Europe’s new AI rules have plenty to say on the matter.
There are two particular documents that Anthropic and other “general purpose AI” vendors need to pay attention to when developing and releasing risky models. One is the AI Act, the relevant parts of which went into effect last August. The other is the AI code of practice, published in July, giving the industry a steer as to AI Act compliance. Pledging adherence is voluntary, and Anthropic is one of the companies that did so.
Anthropic may say in its system card for Mythos Preview that “current risks remain low” – a judgment based largely on the model’s limited ability to aid chemical and biological weapons production or to dramatically automate research and development. But the model seems likely to pose a “systemic risk” under the wording of the AI Act, which says that label can apply where there is a risk of disruption to critical sectors, or of “reasonably foreseeable negative effects on… public and economic security.”
Per the code of practice, that likely means Anthropic could not legally give Mythos Preview a full European release without first implementing safety and security mitigations sufficient to reduce the risk to an acceptable level.
“AI and cybersecurity are closely intertwined,” said Regnier. “And whilst it is clear that AI provides groundbreaking solutions for cybersecurity, such models need solid research and testing before they are placed in the market so as to ensure adequate checks and balances and avoid other potential security risks they may generate or misuse by malicious actors.”
The commission spokesman also pointed out that the AI Act and the soon-to-be-implemented Cyber Resilience Act require Anthropic to have a “strong level of cybersecurity protection” for the models themselves (see: Europe Girds for Looming IoT Security Regulations).
Europe’s AI code of practice obliges signatories to draw up a safety and security framework for the models they are developing, using or making available, and to give the European AI Office – a new department of the European Commission – unredacted access within five working days of the framework being confirmed. The commission has not given any details about Anthropic’s compliance on this front.
At least one European government agency has also been talking to Anthropic about Mythos Preview and seems to have come away with more questions than answers.
“We are in active dialogue with Anthropic, the makers of Claude Mythos,” said Claudia Plattner, president of Germany’s Federal Office for Information Security or BSI, in an emailed statement. “While we have not yet had the opportunity to test the tool directly, our conversations with the developers have given us meaningful insight into how it works. In short: we take these announcements very seriously and anticipate significant disruption – both in how security vulnerabilities are handled and in the broader threat landscape.
“Taken to its logical conclusion, we may reach a point in the medium term where unknown, classical software vulnerabilities simply cease to exist. This would trigger a fundamental shift in attack vectors and represent a paradigm change in the nature of cyberthreats. It also raises a pressing question: Whether – and if so, for how long – tools of such extraordinary power will remain available on the open market? That question, in turn, has profound implications for national and European security and sovereignty.”
In its Glasswing announcement, Anthropic said it was having “ongoing discussions with U.S. government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities.” Multiple reports on Friday stated that the U.S. government had convened urgent meetings with Wall Street leaders this week over the Mythos threat.
Anthropic’s announcement also noted that “securing critical infrastructure is a top national security priority for democratic countries,” adding that governments have “an essential role to play” in “both assessing and mitigating the national security risks associated with AI models.” Beyond that, it did not say anything concrete about its discussions with non-U.S. governments.
Sven Herpig, cybersecurity lead at the European tech policy think tank Interface, told ISMG on Friday that most European governments would likely reach out to Anthropic to better understand how powerful Mythos Preview is, and to verify the company’s claims. He said they were unlikely to ask to use it to test the security of their own systems at this point, as “governments are not really producers of source code” – and the biggest software makers whose products they use are already testing those products under the auspices of Project Glasswing.
