Opinion: Australia's AI Regulation Is Moving Too Slowly and It's Starting to Cost Us


I’m going to be blunt. Australia’s approach to AI regulation is falling behind, and the cost of that delay is no longer theoretical.

The EU has its AI Act. The UK has its pro-innovation, principles-based framework. Singapore has its Model AI Governance Framework. Canada has tabled its Artificial Intelligence and Data Act. And Australia? We’re still consulting.

The Consultation Loop

The Department of Industry, Science and Resources released its voluntary AI Ethics Principles in 2019. Six years later, we’re still operating primarily on voluntary principles.

There’s been a Safe and Responsible AI discussion paper. There’s been a consultation process. There’s been talk of mandatory guardrails for high-risk AI. But as of late 2025, Australian businesses building or deploying AI systems are operating in a regulatory grey zone.

That grey zone isn’t freedom. It’s uncertainty. And uncertainty is expensive.

What Uncertainty Actually Costs

I’ve spoken with four Australian AI startups in the past month that are delaying product launches because they don’t know what regulatory requirements they’ll face in six to twelve months. One is building a healthcare triage tool. Another is developing AI for financial advice. Both operate in areas where a compliance misstep could carry existential penalties.

These aren’t companies avoiding regulation. They want clear rules. They’d happily comply with specific requirements. What they can’t do is build products against moving goalposts.

The enterprise side is equally frustrated. Large Australian banks and insurers want to deploy AI in customer-facing roles but are stuck in internal compliance reviews that can’t conclude because there’s no regulatory baseline to assess against. APRA has issued guidance, but guidance isn’t legislation, and boards of directors know the difference.

Meanwhile, their international competitors operating under clearer regulatory frameworks are deploying AI faster because they know exactly what the boundaries are.

The “Innovation Friendly” Argument Is Wearing Thin

The standard justification for Australia’s deliberate approach is that we don’t want to stifle innovation with premature regulation. I used to find that argument persuasive. I don’t anymore.

Here’s why: the absence of regulation isn’t the absence of constraint. It’s the presence of maximum uncertainty. Businesses are imposing constraints on themselves that are often stricter than any regulation would require, because they’re hedging against unknown future requirements.

Clear regulation, even strict regulation, actually enables faster adoption by removing uncertainty. The EU AI Act is demanding, but European companies now know exactly what’s required for high-risk AI systems. They can plan, budget, and build accordingly.

Australian companies are stuck guessing.

What We Actually Need

Three things would transform the situation.

A risk-based classification system. Not every AI application needs the same level of oversight. Customer service chatbots are different from medical diagnosis tools. Australia needs a clear taxonomy of AI risk levels with proportionate requirements for each. The EU model isn’t perfect, but it’s a reasonable starting point; the sketch after these three recommendations shows roughly what such a taxonomy could look like.

Mandatory transparency requirements for high-risk AI. If an AI system is making decisions that significantly affect people’s lives, finances, or legal rights, there should be disclosure obligations. People should know when AI is involved in decisions about their loan applications, insurance claims, or medical treatment.

A regulatory sandbox for AI startups. Let innovative companies test AI products under supervised conditions with temporary regulatory relief. This is how fintech regulation evolved in Australia, and it worked. Apply the same thinking to AI.
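To make the first recommendation concrete, here is a purely illustrative sketch in Python of a risk taxonomy with proportionate obligations. The tier names and duties are hypothetical, loosely modelled on the EU AI Act’s structure; nothing here is drawn from any actual Australian proposal.

```python
from enum import Enum

# Hypothetical risk tiers, loosely modelled on the EU AI Act's structure.
# These names and obligations are illustrative only, not an actual or
# proposed Australian classification.
class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters
    LIMITED = "limited"            # e.g. customer service chatbots
    HIGH = "high"                  # e.g. medical triage, credit decisions
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring; prohibited outright

# Proportionate obligations per tier: lighter tiers carry fewer duties.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI involvement to users"],
    RiskTier.HIGH: [
        "disclose AI involvement to users",
        "human oversight of significant decisions",
        "pre-deployment impact assessment",
        "incident reporting",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited from deployment"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # A healthcare triage tool would sit in the high-risk tier.
    for duty in obligations_for(RiskTier.HIGH):
        print(duty)
```

The content is deliberately toy; the point is the shape. Once tiers and their obligations are written down explicitly, a company can read off exactly which duties apply to it, which is precisely the certainty the startups described above are asking for.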

The Political Problem

The honest truth is that AI regulation is politically complicated. Regulate too aggressively and you’ll be accused of killing innovation. Regulate too lightly and you’ll be blamed when something goes wrong. That political calculus is why successive ministers have preferred consultation to legislation.

But consultation has a shelf life. We’ve been consulting for years. The major risks are well understood. The international models are available. The domestic industry is begging for clarity. At some point, continued consultation becomes avoidance.

Where to From Here

The government has signalled that mandatory guardrails for high-risk AI are coming. The question is whether “coming” means 2026 or 2028. Given that the next federal election could reshuffle ministerial portfolios and priorities, the window for action is narrower than people assume.

Australian businesses need regulatory clarity on AI. They needed it last year. Every month of delay costs innovation, competitiveness, and the trust of Australians who interact with AI systems that currently operate under no AI-specific oversight.

It’s time to stop consulting and start legislating. The perfect regulatory framework doesn’t exist. But any clear framework is better than the ambiguous status quo.