European Commission Rejects Business and AI Leaders’ Call for Two-Year Enforcement Moratorium

A European official said all existing timelines pertaining to new rules governing artificial intelligence remain in effect.
The comments from a European Commission spokesperson came after a group of 50 business and technology leaders this week urged European officials to “stop the clock” on enforcing the EU’s AI Act, warning the regulation could put “Europe’s AI ambitions at risk.”
Officials say they have no plans for any such pause. “There is no stop the clock, there is no grace period, there is no pause,” European Commission spokesperson Thomas Regnier told reporters at a Friday press briefing. “Why? We have legal deadlines established in a legal text.”
Calls to delay enforcement of the AI Act are intensifying just weeks before key provisions are due to take effect, including rules governing general-purpose AI, or GPAI, models.
Last week, the Washington-based Computer & Communications Industry Association, a trade body representing IT and communications technology industries, urged the EU to rethink its timeline, especially because “essential guidance” for some requirements, such as the new GPAI rules, has yet to be published.
“Europe cannot lead on AI with one foot on the brake. With critical parts of the AI Act still missing just weeks before rules kick in, we need a pause to get the act right or risk stalling innovation altogether,” said Daniel Friedlaender, who heads CCIA’s European operations.
This week, 50 European business and technology leaders sent an open letter to Ursula von der Leyen, president of the European Commission, and other officials, urging them to “stop the clock” on AI Act enforcement for two years, until the rules strike the right balance between regulation and innovation.
“Unfortunately, this balance is currently being disrupted by unclear, overlapping and increasingly complex EU regulations,” wrote the officials, hailing from such organizations as Airbus, BNP Paribas, Carrefour, Dassault Systèmes, Lufthansa, Mercedes-Benz and TomTom.
“This puts Europe’s AI ambitions at risk, as it jeopardizes not only the development of European champions but also the ability of all industries to deploy AI at the scale required by global competition,” they said.
The AI Act is the first comprehensive law of its kind and bans artificial intelligence practices deemed to pose unacceptable risk, such as AI-driven emotion recognition in the workplace and schools. Violations can trigger fines of up to 35 million euros or 7% of an organization’s annual revenue (see: EU AI Act Enters Into Force).
Europe’s AI Act was published in the EU’s Official Journal on July 12, 2024, and is due to come into full effect, backed by enforcement and the potential for penalties, on Aug. 2, 2026.
In the phased implementation occurring before then, member states have also been instructed to designate a national authority to oversee the AI Act and to notify the European Commission by Aug. 2 of the specific penalties and fines they have established, as well as a system for ensuring they are properly enforced.
Rules covering GPAI also come into effect the same day. Key compliance requirements include demonstrating that systemic risks have been mitigated, performing model evaluations and complying with European copyright and privacy rules. Rules pertaining to high-risk AI systems don’t come into effect until August 2026.
Beyond warning that the law threatens to stifle innovation, some large firms, including Meta and Apple, have delayed the rollout of AI capabilities in the EU, citing the uncertainty posed by heightened regulatory scrutiny (see: Apple to Delay AI Rollout in Europe).
European data regulators currently have open probes into OpenAI’s ChatGPT, Meta and Grok AI to review the models’ compliance with privacy rules.
The Trump administration has signaled its dissatisfaction with the EU’s law. During the Paris AI Action Summit in February, technology leaders and White House officials called on the EU to relax the regulation, and argued that it disproportionately affected American technology firms (see: US VP Vance Calls for Less Regulation at AI Action Summit).
“We want to embark on the AI revolution with the spirit of openness and collaboration,” U.S. Vice President JD Vance said at the event. “But to create that kind of trust, we need international regulatory regimes that foster the creation of AI technology rather than strangle it, and we need our European friends, in particular, to look to this new frontier with optimism rather than trepidation.”
Brussels has been sending mixed signals over whether it might pause any parts of the AI Act. Last month, Henna Virkkunen, the EU’s technology chief, said that if the commission didn’t furnish standards guidelines by agreed-upon deadlines, enforcement might need to be delayed.