Mass Exposure of OpenClaw AI Agents Puts 220,000 Systems at Risk

CyberSecureFox 🦊

SecurityScorecard has identified a critical exposure affecting the rapidly growing OpenClaw ecosystem: more than 220,000 OpenClaw instances are reachable directly from the public internet. Given the deep system integration typical of such AI agents, this exposure creates a significant new attack vector for both individual users and organizations.

What OpenClaw Is and Why It Became So Popular

OpenClaw is an open‑source, locally deployed AI assistant designed to integrate with popular messaging platforms including WhatsApp, Telegram, Slack, and Discord. The agent can run scheduled tasks, automatically process incoming requests, coordinate with other agents, and control various services on the host machine.

Since its launch in November 2025, OpenClaw has gained momentum at an exceptional pace: the GitHub repository has attracted nearly 200,000 stars, and a full ecosystem has emerged around it. The project’s creator, Peter Steinberger, developed OpenClaw at high speed, reportedly without a formal secure development lifecycle or comprehensive security testing.

Two related projects further expand the ecosystem: Moltbook, a “social network” where OpenClaw agents can post and interact, and ClawHub, a skills marketplace that adds new capabilities to the assistant through third‑party modules.

Security Weaknesses in the OpenClaw Ecosystem

Even before the latest SecurityScorecard report, OpenClaw had already drawn the attention of security researchers. In the official ClawHub skills repository, hundreds of potentially malicious or compromised skills were identified. Researchers demonstrated that such skills could coerce an AI agent into leaking API keys, payment card details, and users’ personal data.

In addition, at least three serious vulnerabilities have been reported in the OpenClaw core itself, related to code execution safety and access control. Combined with its rapid adoption, these flaws make the OpenClaw ecosystem an attractive target for attackers, similar to how unvetted browser extensions or mobile apps have historically been abused to harvest sensitive data.

SecurityScorecard Findings: Over 220,000 OpenClaw Instances Exposed

According to SecurityScorecard, hundreds of thousands of OpenClaw deployments are freely accessible from the global internet. The company’s live dashboard initially showed around 135,000 exposed instances; shortly after, the number surpassed 220,000 and continues to grow.

In practice, this means that anyone who discovers an exposed OpenClaw instance can attempt to interact with it directly. Because the agent often runs with elevated privileges and broad access to local resources, compromising a single instance can effectively compromise the entire environment it can “see”.

Root Cause: Insecure Default Binding and Weak Access Control

Default Network Settings as a Systemic Risk

The core issue behind this mass exposure lies in OpenClaw’s default configuration. By default, the service binds to 0.0.0.0:18789, meaning it listens on all available network interfaces, including any public‑facing ones. For such a powerful automation tool, a more secure default would be binding to 127.0.0.1 (localhost), which restricts access to the local machine only.
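The difference between the two bind addresses can be seen in a minimal sketch (plain Python sockets, not OpenClaw’s actual code; port 0 is used here so the OS picks a free ephemeral port for illustration):

```python
import socket

# The insecure default pattern: bind to 0.0.0.0, i.e. every network interface,
# including any public-facing ones.
wide_open = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wide_open.bind(("0.0.0.0", 0))          # port 0 = any free ephemeral port
wide_addr = wide_open.getsockname()[0]  # "0.0.0.0" -- reachable from outside

# The safer default: bind to the loopback interface only, so the service is
# reachable solely from the local machine.
local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 0))
local_addr = local_only.getsockname()[0]  # "127.0.0.1" -- local access only

print(wide_addr, local_addr)  # 0.0.0.0 127.0.0.1

wide_open.close()
local_only.close()
```

Any service that defaults to the first pattern inherits this exposure risk unless a firewall or reverse proxy sits in front of it.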

Combined with a streamlined deployment process, weak or absent authentication, and insufficient access control, this creates a large pool of unintentionally exposed AI services. Essentially, a tool designed as a personal assistant becomes a convenient remote entry point for attackers.

Risk Comparable to a Remote Logged‑In User

Researchers liken the situation to giving a remote stranger logged‑in access to a workstation to “help with tasks.” As long as the owner closely supervises the agent, it may be useful. Once that control weakens, the agent may start receiving and executing commands from any source, including hostile actors.

Because OpenClaw instances frequently access password managers, file systems, messaging apps, browsers, and caches containing personal or corporate data, a successful compromise may give an attacker near‑complete visibility into a user’s or organization’s digital environment and facilitate further compromise.

Enterprise Impact: From Personal Helper to Corporate Threat

SecurityScorecard notes that a significant subset of exposed OpenClaw instances is associated with corporate IP address ranges. This indicates that OpenClaw is being used not only by hobbyists but also in production or semi‑production environments—often without formal risk assessment or oversight by information security teams.

In such cases, a compromised AI agent can serve as a foothold for lateral movement across the network, theft of sensitive corporate data, deployment of ransomware, or targeted phishing operations impersonating an internal user. This pattern is consistent with broader industry reporting, such as the Verizon Data Breach Investigations Report, which has repeatedly highlighted misconfiguration and exposed services as leading causes of breaches.

Security Best Practices for OpenClaw and AI Agents

SecurityScorecard strongly recommends that all OpenClaw users immediately reconfigure the service to bind only to localhost, restrict external connectivity, and implement additional protective controls such as authentication, IP filtering, VPN access, and network segmentation.
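Whether a reconfiguration actually took effect can be verified with a quick TCP probe. The sketch below is illustrative: the helper name is ours, and 18789 is simply the default port cited above. Run it locally first, then from a second machine against the host’s public address to confirm the service is no longer reachable externally:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# 18789 is the port OpenClaw reportedly binds by default; adjust as needed.
if is_port_open("127.0.0.1", 18789):
    print("something is listening on 18789 locally")
else:
    print("nothing listening on 18789 locally")
```

The same check repeated from outside the network perimeter is a cheap, continuous control: a probe that succeeds remotely means the localhost-only binding or firewall rule has regressed.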

However, the issue is not limited to configuration errors. By design, OpenClaw is built to change system settings and expose services externally. As a result, organizations should treat such AI agents as semi‑trusted infrastructure components, not harmless productivity tools. Recommended measures include:

  • Deploying OpenClaw on dedicated hosts or virtual machines with strict access controls and isolation.
  • Applying the principle of least privilege to limit the agent’s access to only what is strictly necessary.
  • Regularly auditing open ports and exposed services and monitoring for anomalous activity originating from AI agents.
  • Carefully reviewing, testing, and sandboxing skills from ClawHub and similar marketplaces before using them in any production or corporate environment.

The large‑scale exposure of OpenClaw demonstrates that AI agents are rapidly becoming a major attack surface. Developers must embed security by default into AI platforms, and organizations should assess AI assistants as full‑fledged infrastructure components with uncertain trust levels. Users and security teams alike should continuously revisit configurations, harden deployments, and ensure that a “smart assistant” does not silently turn into a powerful tool in the hands of an attacker.
