Agentic AI
,
Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Open-Source Tool Security ‘Dumpster Fire,’ Experts Warn

An open-source artificial intelligence assistant that exploded in popularity over the past month is exposing users to data theft, malicious code and runaway costs.
See Also: Proof of Concept: Bot or Buyer? Identity Crisis in Retail
OpenClaw, previously known as Clawdbot and Moltbot, launched in November as a platform allowing users to create AI assistants that perform tasks like managing calendars, sending emails and booking flights by connecting to messaging apps like WhatsApp and iMessage. Rapid adoption has exposed vulnerabilities that security specialists say make the tool dangerous for enterprise and personal use. Over three days, the project issued three high-impact security advisories covering a one-click remote code execution vulnerability and two command injection vulnerabilities.
Users can add functions called “skills” that connect assistants with different services – and hackers have been quick to add malicious examples. Researchers at networking giant Cisco who built a tool to scan OpenClaw skills for security risks found a function that exfiltrated data by running a curl command to an external server without the user’s knowledge. The skill also used a direct prompt injection to bypass the assistant’s safety controls and execute the command without prompting the user.
Security firm Koi Security identified 341 malicious skills on ClawHub, a repository for OpenClaw extensions. Community-run threat database OpenSourceMalware spotted a skill that stole cryptocurrency. The vulnerabilities stem from OpenClaw’s design, which grants AI agents system access to execute shell commands, read and write files and run scripts on user machines.
The platform stores credentials in plaintext and ships without authentication enforced by default, said Gartner, which recommends businesses immediately block OpenClaw downloads and traffic. The analyst firm also recommends rotating any credentials OpenClaw has touched.
Laurie Voss, head of developer relations at Arize and the founding CTO of npm, called OpenClaw a security “dumpster fire.” OpenAI co-founder Andrej Karpathy, who initially promoted the project, later said he now does not recommend that people run OpenClaw on their computers.
Users are discovering unexpected costs alongside security risks. Benjamin De Kraker, an AI specialist who formerly worked on Grok, said OpenClaw burned through $20 worth of Anthropic API tokens overnight by checking the time inefficiently. The potential monthly cost to run reminders could reach $750, he said.
Chris Boyd, a software engineer, told Bloomberg he gave OpenClaw access to iMessage to create a daily news digest. The assistant went rogue, bombarding Boyd and his wife with more than 500 messages and spamming random contacts.
Major cloud providers have rushed to offer OpenClaw as a service despite the warnings. Tencent Cloud offered a one-click install tool last week, DigitalOcean, which delivered similar instructions a couple of days later, and Alibaba Cloud launched its offering in 19 regions starting at $4 per month.
The rollouts came even as China’s Ministry of Industry and Information Technology published a security alert warning that improper deployment of OpenClaw could expose systems to cyberattacks and data leaks, reported Reuters. The ministry said monitoring found that some OpenClaw deployments carry high security risks when left under default or poorly configured settings. The warning stops short of an outright ban but cautioned that organizations deploying OpenClaw should conduct audits of public network exposure, implement robust identity authentication and access controls.
The danger extends beyond individual vulnerabilities, with security experts warning that agents with broad access can be manipulated through prompt injection, where hidden or crafted instructions trick a model into taking actions a user did not intend, such as leaking data or posting content. The risk increases when an agent connects to email, chat, browsers and cloud dashboards.
