Moltbot (formerly Clawdbot) has rapidly become one of the most visible open-source AI projects of 2026, gaining more than 93,000 GitHub stars in just a few weeks. Created by Austrian engineer Peter Steinberger, the self-hosted AI assistant integrates with WhatsApp, Telegram, Slack, Discord, email and local resources, branding itself as a “personal AI running on your own hardware.” This explosive growth has also turned Moltbot into an attractive target for both security researchers and threat actors.
Moltbot as a High-Privilege AI Agent: Deep Access to User Data
Unlike classic chatbots that respond only when prompted, Moltbot operates as a proactive AI agent. It can automatically remind users about tasks, manage calendars, maintain long-term memory in Markdown and SQLite, and control a browser, email clients and local files in the background. In practice, users entrust it with sensitive work correspondence, personal conversations and credentials for multiple cloud services.
To support this level of autonomy, Moltbot typically needs API keys for commercial LLMs (such as Claude Opus 4.5), access tokens for messengers and email, OAuth tokens, third‑party service accounts and, in some setups, the ability to execute shell commands. This concentration of privileges creates a classic “one compromise = full access” scenario, comparable to compromising an administrator workstation or a password manager vault.
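To make the blast radius concrete, the sketch below shows what the aggregated credentials of such a deployment might look like. The field names and layout are purely illustrative and do not reflect Moltbot’s actual configuration schema; the point is that a single readable file or process environment can hold every secret the agent has been trusted with.

```python
# Hypothetical configuration for a high-privilege AI agent (illustrative only,
# not Moltbot's actual schema). Whoever can read this one structure holds
# every credential the agent has been entrusted with.
AGENT_CONFIG = {
    "llm":      {"provider": "anthropic", "api_key": "sk-ant-..."},   # paid LLM access
    "telegram": {"bot_token": "..."},                                 # read/send all chats
    "email":    {"imap_password": "...", "smtp_password": "..."},     # full mailbox access
    "calendar": {"oauth_refresh_token": "..."},                       # long-lived OAuth grant
    "shell":    {"allow_exec": True},                                 # arbitrary local commands
}
```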
Malicious VS Code Extension Masquerading as Moltbot Tooling
Fake “ClawdBot Agent” and Abuse of Remote Access Software
Security company Aikido reported a malicious Visual Studio Code extension called “ClawdBot Agent — AI Coding Assistant” on the official Marketplace. The extension impersonated an official Moltbot/Clawdbot integration, even though the project does not provide a legitimate VS Code plugin.
Once installed, the extension executed at every IDE startup, fetched a remote config.json, and launched a binary named Code.exe. That binary silently deployed ScreenConnect, a legitimate remote access solution that was abused to establish persistent remote control over the developer’s machine. The malware also implemented fallback mechanisms, downloading DLLs from Dropbox or alternate domains if the primary command-and-control server was unavailable. Microsoft removed the extension, but the number of affected developers who trusted the Moltbot brand remains unknown.
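Developers who want to check their own machines can start with a rough triage of installed extensions. The sketch below is a minimal example, not detection of this specific extension: the extensions path is the default for Linux and macOS, and the heuristic (flagging extensions that activate at every startup) will produce benign hits that need manual review.

```python
"""Rough triage of installed VS Code extensions: flag ones that run at every startup."""
import json
from pathlib import Path

EXT_DIR = Path.home() / ".vscode" / "extensions"  # default location on Linux/macOS

def runs_at_startup(manifest: dict) -> bool:
    # Extensions activating on "*" or "onStartupFinished" execute code at every
    # IDE launch; that alone is not malicious, but unknown publishers deserve a look.
    events = manifest.get("activationEvents", [])
    return "*" in events or "onStartupFinished" in events

for pkg in EXT_DIR.glob("*/package.json"):
    try:
        manifest = json.loads(pkg.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        continue
    if runs_at_startup(manifest):
        print(f"{manifest.get('publisher', '?')}.{manifest.get('name', '?')}  ({pkg.parent.name})")
```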
Misconfigurations Exposing Moltbot Instances and Secrets
Unsafe Reverse Proxy Settings and Publicly Accessible Control Panels
Security researcher Jameson O’Reilly (Dvuln) identified hundreds of Moltbot instances exposed to the public internet without authentication. The root cause was a common configuration error: deployments behind reverse proxies implicitly trusted “local” traffic, and because the proxy forwards every external connection from its own local address, all internet traffic was mislabeled as internal and therefore trusted.
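The flawed logic can be reduced to a few lines. The sketch below is illustrative, not Moltbot’s actual code: behind a reverse proxy the TCP peer address is the proxy itself, so any check that derives trust from a “local-looking” source address waves every internet request through.

```python
from ipaddress import ip_address

def is_trusted_broken(peer_addr: str) -> bool:
    """BROKEN: behind a reverse proxy the peer is the proxy (often 127.0.0.1),
    so every external request it forwards passes this check."""
    return ip_address(peer_addr).is_loopback or ip_address(peer_addr).is_private

def is_trusted_fixed(peer_addr: str, authenticated: bool) -> bool:
    """Safer rule: never infer trust from the network path; always require authentication."""
    return authenticated

if __name__ == "__main__":
    print(is_trusted_broken("127.0.0.1"))        # True  -> control panel exposed
    print(is_trusted_fixed("127.0.0.1", False))  # False -> login still required
```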
Through these open web panels, an attacker could view and exfiltrate API keys, OAuth tokens and chat histories, execute actions on behalf of the user and harvest credentials. In a manual review of several dozen instances, O’Reilly found at least eight with completely unprotected access, including a deployment with Signal integration that exposed full message access and active URI/QR codes for enrolling new devices.
MoltHub Skills Ecosystem and Supply Chain Attack Surface
O’Reilly also demonstrated a proof-of-concept supply chain attack targeting MoltHub, the skills (plugin) repository for Moltbot. By publishing a seemingly harmless module that only performed a “ping” and artificially inflating its download count beyond 4,000, he observed real-world adoption by developers in seven countries.
In a real attack, such a skill could covertly exfiltrate SSH keys, cloud credentials (for example, AWS access keys) or proprietary source code. This mirrors broader software supply chain incidents like the SolarWinds and event-stream compromises and underlines that plugin ecosystems for AI agents represent a critical attack vector.
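Until plugin ecosystems offer signing and sandboxing, the least a user can do is review a skill before installing it. The sketch below is a crude, assumption-laden triage pass (the skill layout and patterns are hypothetical); it simply greps a downloaded skill for credential paths and execution primitives that a “ping”-style utility has no reason to touch.

```python
"""Crude pre-install review of a downloaded skill: flag references to credentials
and command execution. Triage only, not a security boundary."""
import re
from pathlib import Path

RED_FLAGS = [
    r"\.ssh/",                        # SSH keys
    r"\.aws/credentials",             # cloud credentials
    r"AKIA[0-9A-Z]{16}",              # AWS access key IDs
    r"subprocess|os\.system|eval\(",  # unexpected command execution
]

def review_skill(skill_dir: str) -> list[tuple[str, str]]:
    hits = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".js", ".ts", ".sh", ".json"}:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        hits.extend((str(path), pattern) for pattern in RED_FLAGS if re.search(pattern, text))
    return hits

if __name__ == "__main__":
    for path, pattern in review_skill("./downloaded-skill"):
        print(f"{path}: matches {pattern}")
```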
Plaintext Storage, Infostealers and Turning an AI Agent into a Backdoor
Analysis by Hudson Rock indicates that Moltbot stores some secrets in plaintext on the local machine, including in Markdown and JSON files. On a host already infected by commodity infostealers such as RedLine, Lumma or Vidar, attackers can easily harvest these files and gain direct access to API keys, OAuth tokens and sensitive interaction histories.
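Operators can at least find out whether their own deployment exposes secrets this way. The sketch below assumes a hypothetical data directory (~/.moltbot) and a handful of common token formats; both are assumptions to adjust for the actual deployment.

```python
"""Scan an agent's data directory for strings that look like live secrets in plaintext."""
import re
from pathlib import Path

DATA_DIR = Path.home() / ".moltbot"   # hypothetical location; adjust for your setup
TOKEN_PATTERNS = {
    "anthropic_key":  re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S{16,}"),
}

if DATA_DIR.exists():
    for path in DATA_DIR.rglob("*"):
        if not path.is_file() or path.suffix not in {".md", ".json", ".txt", ".yaml", ".yml"}:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        for name, pattern in TOKEN_PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name} stored in plaintext")
```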
Researchers also note that popular malware families are already adapting to Moltbot’s directory structure. With write permissions, an attacker can modify configuration files and effectively convert the AI assistant into a backdoor that implicitly trusts malicious sources, automatically exfiltrating data or executing arbitrary commands on behalf of the operator.
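A lightweight countermeasure against silent configuration tampering is a baseline-and-verify check over the agent’s config files. The sketch below uses SHA-256 fingerprints over an assumed directory layout; record the baseline from a known-good state and run the verification from a scheduled job.

```python
"""Baseline-and-verify check: detect unexpected edits to the agent's config files."""
import hashlib
import json
from pathlib import Path

CONFIG_DIR = Path.home() / ".moltbot"              # hypothetical location
BASELINE   = Path.home() / ".moltbot-baseline.json"

def fingerprint() -> dict[str, str]:
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(CONFIG_DIR.rglob("*.json")) if p.is_file()
    }

if __name__ == "__main__":
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(fingerprint(), indent=2))
    else:
        old, new = json.loads(BASELINE.read_text()), fingerprint()
        for path in sorted(set(old) | set(new)):
            if old.get(path) != new.get(path):
                print(f"changed, added or removed: {path}")
```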
Brand Abuse, Rebranding to Moltbot and the $CLAWD Crypto Scam
Brand confusion further amplified the risk surface. Following a request from Anthropic, the project was rebranded from Clawdbot to Moltbot due to similarity with the “Claude” name. During this transition—when GitHub and X (Twitter) identities changed—cryptocurrency scammers briefly hijacked the old branding and aggressively promoted a fake token named $CLAWD.
The pseudo-token reportedly reached a market capitalization of about USD 16 million before collapsing to zero. Peter Steinberger has since publicly clarified that any cryptocurrency project using his name or the former Clawdbot branding is fraudulent, underscoring how quickly successful open-source AI brands can be weaponized in social engineering and investment scams.
AI Agents vs. Traditional Security Models: A Structural Mismatch
Experts from firms such as Salt Security and Intruder highlight a growing gap between user enthusiasm for AI agents and the maturity of security practices around them. Secure deployment of Moltbot requires solid understanding of API security, access control, network segmentation and least-privilege design, yet the default architecture prioritizes ease of deployment over “secure by default” principles.
Mandatory firewalls, strict credential validation, sandboxing for plugins and robust isolation are not enforced out of the box. Some researchers have gone so far as to describe Moltbot as an “infostealer disguised as an AI assistant” when deployed without strong hardening. O’Reilly emphasizes that AI agents, by design, cut across multiple protective layers—file system isolation, process sandboxing, permission models and firewalls—because they are explicitly granted broad, cross-domain access.
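What “sandboxing for plugins” could look like in practice, at its most minimal: running a skill in a child process that inherits none of the agent’s secrets. The sketch below is only an environment-scrubbing wrapper under assumed paths, not real isolation, which would require containers, seccomp profiles or a separate user account.

```python
"""Minimal environment-scrubbing wrapper for running an untrusted skill (not a real sandbox)."""
import os
import subprocess

def run_skill_scrubbed(skill_path: str) -> subprocess.CompletedProcess:
    workdir = "/tmp/skill-home"
    os.makedirs(workdir, exist_ok=True)
    clean_env = {"PATH": "/usr/bin:/bin", "HOME": workdir}  # no API keys, no tokens
    return subprocess.run(
        ["python3", skill_path],
        env=clean_env,            # child sees none of the agent's credentials
        cwd=workdir,
        capture_output=True,
        timeout=30,               # raises subprocess.TimeoutExpired if exceeded
        check=False,
    )
```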
Security-conscious users attempt to mitigate these issues by running Moltbot on dedicated hardware, such as an inexpensive Mac mini acting as a separate AI server, and by giving the agent its own isolated email accounts and password-manager identities, much as they would when onboarding a new employee. While this segmentation reduces the blast radius, it does not eliminate the core issue: any high‑privilege AI agent becomes a concentrated point of risk.
The Moltbot case illustrates how quickly open-source AI agents evolve into prime targets for attackers—from malicious developer tools and misconfigured reverse proxies to plugin-based supply chain attacks and crypto fraud leveraging brand confusion. Organizations and individuals integrating such agents with sensitive personal or corporate data should treat them with the same rigor as administrator accounts or critical infrastructure: dedicated and isolated hosts, minimal privilege, hardened configurations, continuous monitoring, and regular security audits. Investing in a robust security architecture at the outset is almost always less costly than responding to an AI agent compromise after the fact.