OpenClaw's Explosive Growth Brings a Security Nightmare to Australian Enterprises
OpenClaw has become the poster child for AI agent platforms, and Australian enterprises are deploying it at a rate that should worry every CIO in the country.
The numbers are impressive. Over 192,000 GitHub stars make it one of the most popular open-source projects in the AI space. The ClawHub marketplace offers 3,984 pre-built skills that let businesses deploy AI agents across Slack, Microsoft Teams, WhatsApp, Telegram, and Discord. Australian companies in finance, retail, and manufacturing are using it to automate customer service, internal workflows, and data analysis.
But here’s what the growth articles aren’t telling you: OpenClaw’s security posture is a disaster, and most Australian businesses deploying it have no idea what risks they’re taking.
The Security Research That Should Scare You
Recent independent security research into the ClawHub marketplace revealed problems that go beyond normal open-source vulnerabilities.
36.82% of available skills contain security flaws. That’s more than one in three. The issues range from insufficient input validation to direct exposure of API credentials. Some of these flaws are accidental. Many aren’t.
341 skills have been confirmed as actively malicious, and researchers traced them to a single coordinated campaign. These skills were designed to exfiltrate data, establish persistence in enterprise environments, and create backdoors. They weren’t discovered because someone found suspicious code. They were discovered because someone did systematic analysis across the entire marketplace.
How many Australian companies deployed those skills before they were identified? We don’t know. ClawHub doesn’t have mandatory telemetry, and most companies aren’t monitoring which skills their teams install.
Default Configs Are an Invitation to Attack
OpenClaw’s default configuration is internet-facing. That means, out of the box, your OpenClaw instance is accessible from anywhere unless you explicitly lock it down.
Current scans show over 30,000 OpenClaw instances exposed on the public internet. A significant portion of those are in Australia, based on IP geolocation data. Many are running in AWS Sydney or Google Cloud Australia, deployed by teams that assumed the platform had enterprise-grade security by default.
It doesn’t. OpenClaw was built for speed and flexibility, not for hardened deployment. That’s not a criticism of the project. It’s how most open-source tools are built. But it means the responsibility for securing it falls entirely on the organisation deploying it, and many Australian organisations aren’t equipped for that.
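If you’re not sure where your own deployment stands, a quick check from the host itself is a reasonable first step. The sketch below is a minimal, illustrative example, not a hardening tool: the port number (8080) is a placeholder rather than OpenClaw’s documented default, and on some Linux distributions hostname resolution returns a loopback address, which makes the second check inconclusive.

```python
# exposure_check.py — quick sanity check: is the agent platform answering on a
# routable interface, or only on loopback? Run it on the host where the
# instance is deployed. PORT is a placeholder, not OpenClaw's documented
# default; substitute whatever port your deployment actually uses.
import socket
import sys

PORT = 8080  # placeholder port — check your own deployment's config


def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def main() -> None:
    loopback = is_listening("127.0.0.1", PORT)

    # Hostname resolution can return a loopback address on some distros,
    # in which case this check is inconclusive and you should inspect the
    # service's bind address in its config directly.
    lan_ip = socket.gethostbyname(socket.gethostname())
    routable = not lan_ip.startswith("127.") and is_listening(lan_ip, PORT)

    print(f"127.0.0.1:{PORT} -> {'open' if loopback else 'closed'}")
    print(f"{lan_ip}:{PORT} -> {'open' if routable else 'closed or inconclusive'}")

    if routable:
        print("WARNING: the service answers on a routable interface. "
              "Put it behind a VPN, private subnet, or firewall rule.")
        sys.exit(1)


if __name__ == "__main__":
    main()
```

A loopback-only bind isn’t sufficient on its own — cloud security groups, VPNs, and reverse proxies still matter — but it’s a cheap first signal that your instance isn’t one of the 30,000-plus showing up in public scans.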
Why Australian Companies Keep Getting This Wrong
The pattern is consistent. A development team or business unit discovers OpenClaw, loves the capabilities, spins up an instance, and starts building agents. The security review happens later, if it happens at all.
Part of the problem is that AI agent platforms are new enough that security teams don’t have established playbooks. Traditional application security practices don’t fully translate. An AI agent that can query your internal databases, send messages on behalf of employees, and access third-party APIs has a fundamentally different risk profile from a standard SaaS tool.
The other problem is that skills from ClawHub look legitimate. They have descriptions, ratings, and download counts. There’s no obvious visual indicator that a skill is malicious or poorly secured. The assumption is that the marketplace vets submissions, but it doesn’t. Anyone can publish.
What Australian Companies Actually Need
I’ve spoken to an AI consultancy about how they’re handling OpenClaw deployments, and their view is unambiguous: you can’t deploy the open-source version as-is and assume it’s production-ready.
At minimum, you need:
Proper infrastructure isolation. Your OpenClaw instance should not be internet-facing unless there’s a specific, documented reason. It should sit behind a VPN or within a private network segment.
Skill vetting. Every skill from ClawHub needs to be reviewed before deployment. That means code review, not just checking ratings. A first-pass triage script (see the sketch after this list) can help prioritise what gets read first, but if your team doesn’t have the capacity to review Python code, you shouldn’t be installing third-party skills.
Monitoring and logging. You need to know which skills are running, what data they’re accessing, and what external calls they’re making. That requires instrumentation beyond what OpenClaw provides by default.
Australian data residency. If you’re deploying OpenClaw for customer-facing use cases, you need to ensure data stays in Australia. The default OpenClaw setup doesn’t enforce geographic constraints.
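For the skill-vetting step, a small triage script can make the review queue manageable. The sketch below is illustrative, not a vetting tool you can rely on: the pattern list is my own assumption about what risky skill code tends to look like (hardcoded credentials, outbound network calls, shell execution, dynamic evaluation, obfuscation), and a clean run proves nothing. It only tells a human reviewer where to look first.

```python
# skill_triage.py — first-pass triage of a downloaded ClawHub skill before
# human review. This does NOT replace code review; it only flags lines worth
# a closer look. The pattern list is illustrative, not exhaustive.
import re
import sys
from pathlib import Path

RISK_PATTERNS = {
    "possible hardcoded credential": re.compile(
        r"""(api[_-]?key|secret|token|password)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]""",
        re.IGNORECASE,
    ),
    "outbound network call": re.compile(
        r"\b(requests\.(get|post)|urllib\.request|socket\.create_connection)\b"
    ),
    "shell / process execution": re.compile(r"\b(subprocess\.|os\.system|os\.popen)\b"),
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "obfuscation hint": re.compile(r"\b(base64\.b64decode|codecs\.decode)\b"),
}


def triage(skill_dir: Path) -> int:
    """Scan every .py file under skill_dir and print flagged lines."""
    findings = 0
    for path in sorted(skill_dir.rglob("*.py")):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            for label, pattern in RISK_PATTERNS.items():
                if pattern.search(line):
                    findings += 1
                    print(f"{path}:{lineno}: {label}: {line.strip()[:120]}")
    return findings


if __name__ == "__main__":
    skill_dir = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    hits = triage(skill_dir)
    print(f"\n{hits} line(s) flagged. A clean run is not a clean bill of health; review the code.")
```

Run it against the unpacked skill directory before anyone installs it (for example, python skill_triage.py ./downloaded-skill). Anything it flags goes to a human reviewer; anything it doesn’t flag still does.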
Some Australian firms are building internal capabilities to handle this. Others are using managed services from Team400.ai that provide pre-vetted skills, security hardening, and Australian hosting. Either approach works; what isn’t optional is choosing one of them.
The Regulatory Attention That’s Coming
ASIC and the ACCC have both flagged AI systems as priority areas for 2026. When an Australian financial services company has a data breach caused by a malicious OpenClaw skill, regulators will ask what due diligence was performed before deployment.
“We assumed the marketplace vetted skills” isn’t going to be an acceptable answer. Under the Privacy Act and Australian Consumer Law, businesses are responsible for third-party code they deploy that processes customer data. That includes AI agent platforms and their skill ecosystems.
Companies that can’t demonstrate proper vetting processes, security controls, and incident response capabilities are going to face serious scrutiny. The regulatory precedent from data breach cases is clear: negligence in deployment carries consequences.
What We Should Learn From This
OpenClaw is a powerful platform, and Australian companies are right to explore it. But power without security is liability, and the current deployment patterns suggest too many organisations are prioritising speed over safety.
If your company is using OpenClaw or considering it, do the work to deploy it properly. If you don’t have the internal capability, bring in external help. The cost of doing it right is significantly less than the cost of a breach that exposes customer data or shuts down operations.
And if you’re a security vendor or consultant working with Australian businesses, now is the time to develop OpenClaw expertise. Demand is going to spike as companies realise they need help, and the firms with established capabilities will have a significant advantage.
The platform isn’t going away. The risks aren’t going away either. How Australian enterprises handle the gap between those two facts will determine who succeeds and who ends up in a case study about what not to do.