Mythos a Turning Point, Say Lawmakers in Missive to European Commission

Dozens of European lawmakers are pressing the European Commission to act quickly to protect the continent’s cybersecurity, citing the advent of new artificial intelligence models with considerable hacking prowess.
AI models such as Anthropic’s Mythos – which the company is holding back from general release – “are fundamentally altering the cyberattack landscape,” members of the European Parliament wrote in a Monday missive. Their letter, addressed to Henna Virkkunen, the European Commission executive vice-president with responsibility for tech sovereignty and security, carried signatures from members across the political spectrum, with the exception of the far right.
“A race against time has begun, and Europe is not prepared,” the members wrote. Reports of unauthorized parties gaining access to Mythos “underscore that this threat is no longer hypothetical.” They complained that no EU institution had gained access to Mythos, which Anthropic is restricting to hand-picked organizations gathered under the rubric of “Project Glasswing.”
The commission had not responded to a request for comment at the time of publication. At a Monday press briefing, however, spokesman Thomas Regnier confirmed that the EU institution still doesn’t have access to Project Glasswing. “There are clearly some cybersecurity concerns that have to be addressed, and I can tell you that the company is engaging in good faith with the commission, but I will not speculate about potential future access or not,” he said.
European preparedness is not just about Mythos, the lawmakers wrote. “Many other capable AI models are emerging and open-source equivalents such as Kimi K2.6, when combined with agentic systems, are poised to lower the barrier to sophisticated attacks even further. Public services and critical infrastructure across Europe face risks of a scale and speed we have not previously encountered.”
Markéta Gregorová, a Czech Pirate Party representative who co-authored the letter and is shepherding the European Parliament’s work on the current Cybersecurity Act revision, told ISMG that it is essential for the EU to avoid falling behind.
“We urgently need a European plan to mitigate AI-driven cybersecurity threats and to bring together our companies, institutions and the European cybersecurity agency ENISA to develop effective defensive solutions,” she said Tuesday.
The letter calls on Virkkunen to ensure that Europe participates in Project Glasswing, to “accelerate adoption of zero trust architectures, assume-breach principles and AI-assisted defensive tools, with concrete guidance for both public institutions and private enterprises,” to reform vulnerability disclosure policies and patching frameworks to “reflect compressed AI-driven timescales,” and to “promote immediate reduction of attack surfaces, further network segmentation and prioritizing the protection of crown jewels.”
The letter also stresses that it is not a “call for restrictions on the use of AI-powered cybersecurity measures in Europe as they are equally indispensable to our defense.”
Anthropic has not responded to a request for comment on why the EU is not yet participating in Project Glasswing.
According to a Bloomberg report, Kyriakos Pierrakakis – president of the Eurogroup, which comprises finance ministers from the EU countries that use the euro currency – warned Tuesday that frontier AI models “may soon present challenges of a potentially systemic nature.”
The advent of Mythos has rattled governments around the world – including in the U.S., where the second Trump administration has so far taken an entirely hands-off approach to AI regulation.
The New York Times reported Monday that Trump is now considering the reintroduction of government oversight, with “potential plans” including “a formal government review process for new AI models,” perhaps based on the British model of tasking government bodies with ensuring that AI models meet safety standards.
The administration announced that Google DeepMind, Microsoft and xAI had agreed to give the Center for AI Standards and Innovation – part of the Department of Commerce’s National Institute of Standards and Technology – access to models before they become publicly available, to allow for “pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”
The U.K.’s AI Security Institute did get early access to Mythos Preview, allowing it to perform cybersecurity evaluations. It said in mid-April that the model was “at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained,” but expressed uncertainty about whether Mythos could “attack well-defended systems.”
On Tuesday, it emerged that England’s National Health Service had locked down its open-source projects in response to the advent of Mythos and AI models like it. As first reported by The Register, the NHS has issued guidance to its tech managers, telling them to set their GitHub repositories to private by the start of next week.
“We are temporarily restricting access to some NHS England source code to further strengthen cybersecurity while we assess the impact of rapid developments in AI models,” a spokesperson told ISMG in an emailed statement. “We will continue to publish source code where there is a clear need.”
