Government Opts for Voluntary Frameworks Over Enforceable Safeguards

Three months after proposing mandatory artificial intelligence guardrails with regulatory teeth, Australia’s government earlier today released a national plan that asks companies to consider safety measures instead.
The contrast between the proposal and the final plan reveals the scale of the government’s backtracking. In September, officials outlined 10 mandatory guardrails covering accountability, risk management, data governance, testing protocols, human oversight, transparency, contestability, supply chain visibility, record keeping and conformity assessments. The requirements would have applied both to high-risk AI deployments in sectors such as healthcare and law enforcement and to all general-purpose AI models, regardless of their eventual application.
Industry and Innovation Minister Tim Ayres and Assistant Minister for Science, Technology and the Digital Economy Andrew Charlton reportedly described the revised approach as “a whole of government framework that ensures technology works for people, not the other way around.” Rather than introducing AI-specific legislation, the government will rely on adapting existing laws in privacy, copyright and healthcare as needed.
The technical implications are substantial. The September framework would have required organizations to conduct documented testing before deployment and implement continuous monitoring to verify that the systems were fit for purpose. It established clear supply chain accountability, holding both developers and deployers responsible for compliance. The voluntary approach advises these practices but creates no mechanism to verify implementation or penalize failures.
Business groups welcomed the government’s decision. Sunita Bose, managing director of DIGI, which represents Meta, TikTok and Google, said the plan brings clarity and that realizing AI’s economic opportunities requires thoughtful regulation supporting responsible innovation. Bran Black, chief executive of the Business Council of Australia, said the plan charts a clear direction for using AI to boost productivity and competitiveness.
Damian Kassabgi, CEO of the Tech Council of Australia, supported the regulatory balance but identified a persistent weakness: he singled out commercialization as Australia’s biggest challenge and called for specific actions, including changes to superannuation investment rules and clarity on copyright settings.
The academic research community responded with sharper criticism.
Professor Toby Walsh of the University of New South Wales’ AI Institute contrasted Australia’s $29.9 million AI Safety Institute with the United Kingdom’s recent announcement of multi-billion-pound investments in public and private AI. Walsh questioned the regulatory reversal directly. “Why did the government decide to backtrack on new AI regulation that the Minister Husic had vocally supported?” he said. “There will be fresh harms that AI introduces, outside of existing regulation. If it is good enough for Europe, why is new AI regulation not needed here?”
Sue Keay, director of UNSW’s AI Institute, expressed frustration with the timing. After years of waiting for a national AI strategy, she said it’s “beyond frustrating” to discover the government is only now beginning to assess available compute infrastructure when other countries have been building capacity at breakneck speed since at least 2018. “While it’s nice to see all the right ingredients listed, once again, we’re stuck with a recipe that forgets the actual cooking,” Keay said, explaining that the plan “rightly lists everything we should be doing, but fails to commit to any real investment or any sense of urgency.”
Australia’s stance now sharply departs from regional peers. Singapore introduced the Model AI Governance Framework for Generative AI and the AI Verify Foundation to standardize testing against global norms. Japan advanced unified principles through its AI Guidelines for Business and is preparing a Basic Law for Promoting Responsible AI, targeting frontier models with safety and capability disclosures. South Korea enacted strict notification rules for high-risk, rights-impacting uses. The EU AI Act adopts a risk-based regime with mandatory conformity assessments for high-risk systems.
Australia’s voluntary framework places it at the permissive end of this regulatory spectrum. Greens senator David Shoebridge reportedly called the plan another toothless tiger and said it betrays Australians by abandoning guardrails under the guise of delay while choosing corporate profits over community rights.
