Extension ecosystems for AI assistants are rapidly becoming a prime target for attackers. The open‑source AI agent OpenClaw (formerly Moltbot / ClawdBot) is a case in point: security researchers have identified a large wave of malicious skills, forcing the platform to tighten its security model and integrate automated VirusTotal scanning for every skill uploaded to the official ClawHub repository.
Malicious OpenClaw skills: scale, tactics, and impact
According to Koi Security, in just a few days—from 27 January to 1 February—more than 230 malicious OpenClaw skills appeared across ClawHub and GitHub. These packages were disguised as helpful utilities but were primarily designed to steal cryptocurrency and other financial assets from users.
The independent group OpenSourceMalware published a technical analysis of this campaign, describing how attackers abused both user trust in the AI agent and its access to local tools and services. Once installed, a compromised skill could instruct OpenClaw to perform unauthorized actions, exfiltrate sensitive data, or silently interact with wallets and financial APIs.
Additional research from Bitdefender Labs paints an even more concerning picture: around 17% of all OpenClaw skills published and analyzed in February 2026 were classified as malicious. For any extension ecosystem, this is extremely high—comparable to what has historically been observed among unvetted browser extensions or pirated software, which have repeatedly been abused to deliver credential stealers and banking trojans.
These findings are consistent with broader trends. Browser extension stores, mobile app marketplaces, and code repositories such as npm and PyPI have all faced persistent waves of malicious or trojanized packages. AI assistant extensions—skills, tools, and plugins—offer attackers a similarly attractive path: they sit in a trusted channel and can often reach files, APIs, smart‑home devices, and corporate resources.
How VirusTotal integration secures the OpenClaw skills ecosystem
Recognizing that skills can grant OpenClaw control over smart‑home systems, finances, messengers, and business applications, the platform’s maintainers have introduced automated scanning through VirusTotal, including the Code Insight capability, as a mandatory step before publication to ClawHub.
Automated malware scanning and Code Insight analysis
The new security pipeline operates in several stages (a code sketch follows the list):
1. For every uploaded skill, OpenClaw computes the SHA‑256 hash of the package and checks it against VirusTotal’s database. If that hash is already known and flagged as malicious, the skill is blocked immediately and never appears in the catalog.
2. If no prior verdict exists, the skill’s package is submitted to VirusTotal Code Insight for deeper analysis. This module evaluates the logic and behavior of the code, looking for indicators of malware, hidden backdoors, suspicious network communications, and attempts to interact with sensitive resources.
3. Skills classified as “benign” are automatically approved for listing in ClawHub. Suspicious skills remain available but are accompanied by a clear warning so that users understand the risk. Any skill identified as malicious is blocked and not published to the directory.
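To make the flow concrete, here is a minimal sketch of the three stages in Python, using the official vt-py VirusTotal client. It is illustrative only: OpenClaw’s actual pipeline is not public, the plain file submission stands in for the Code Insight step (which this client does not expose as a dedicated call), and the function names and verdict thresholds are assumptions.

```python
import hashlib

import vt  # official VirusTotal client: pip install vt-py


def sha256_of(path: str) -> str:
    """Stage 1: compute the SHA-256 hash of the uploaded skill package."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def vet_skill(path: str, api_key: str) -> str:
    """Return 'blocked', 'warned', or 'approved' for a skill package."""
    client = vt.Client(api_key)
    try:
        try:
            # Stage 1: look for a prior verdict on this exact hash.
            stats = client.get_object(f"/files/{sha256_of(path)}").last_analysis_stats
        except vt.APIError as err:
            if err.code != "NotFoundError":
                raise
            # Stage 2: no prior verdict, so submit the package for analysis
            # (a stand-in here for the deeper Code Insight review).
            with open(path, "rb") as f:
                stats = client.scan_file(f, wait_for_completion=True).stats
        # Stage 3: map engine verdicts onto a listing decision.
        if stats.get("malicious", 0) > 0:
            return "blocked"   # never appears in the catalog
        if stats.get("suspicious", 0) > 0:
            return "warned"    # listed, but with a clear warning
        return "approved"
    finally:
        client.close()
```

In a real pipeline, the “warned” outcome would attach the warning banner described in stage 3 rather than just returning a label.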
In addition, all skills already published in ClawHub will be rescanned daily. This is critical: attackers may update code over time, introduce delayed‑activation mechanisms, or attempt to bypass earlier detections with minor modifications.
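A daily rescan job could then be as simple as re‑querying the stored hash of every published skill and demoting anything whose verdict has changed. The sketch below assumes a hypothetical catalog store with `sha256` and `status` fields and would be driven by a scheduler such as cron:

```python
import vt  # pip install vt-py


def daily_rescan(api_key: str, catalog: dict) -> None:
    """Re-check every published skill's hash and demote changed verdicts."""
    client = vt.Client(api_key)
    try:
        for record in catalog.values():
            try:
                stats = client.get_object(
                    f"/files/{record['sha256']}"
                ).last_analysis_stats
            except vt.APIError:
                continue  # no verdict available; keep the current status
            if stats.get("malicious", 0) > 0 and record["status"] != "blocked":
                record["status"] = "blocked"  # verdict changed since publication
    finally:
        client.close()
```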
Community reporting and crowdsourced moderation
Before the full VirusTotal integration, ClawHub had already implemented a temporary safeguard based on user reports: authenticated users can flag skills as suspicious, capped at 20 active reports per account to reduce abuse.
Once a skill receives more than three unique reports, it is automatically hidden by default, reducing the likelihood that inexperienced users will install it by accident. This approach blends automated scanning with crowdsourced moderation, a model successfully used in many open‑source and app‑store ecosystems to surface emerging threats that static analysis might initially miss.
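The reporting rules described above (at most 20 active reports per account, auto‑hiding after more than three unique reports) amount to a small piece of bookkeeping. The class below is a hypothetical sketch of that logic, not ClawHub’s implementation:

```python
from collections import defaultdict

MAX_ACTIVE_REPORTS_PER_USER = 20  # per-account cap stated by ClawHub
HIDE_THRESHOLD = 3                # hidden once unique reports exceed this


class ReportTracker:
    def __init__(self) -> None:
        self.reports = defaultdict(set)         # skill_id -> {user_id, ...}
        self.active_by_user = defaultdict(int)  # user_id -> active report count

    def flag(self, user_id: str, skill_id: str) -> bool:
        """Record a report; return True if the skill should now be hidden."""
        if user_id in self.reports[skill_id]:
            # Duplicate reports from the same account do not count twice.
            return len(self.reports[skill_id]) > HIDE_THRESHOLD
        if self.active_by_user[user_id] >= MAX_ACTIVE_REPORTS_PER_USER:
            raise ValueError("active report limit reached for this account")
        self.reports[skill_id].add(user_id)
        self.active_by_user[user_id] += 1
        return len(self.reports[skill_id]) > HIDE_THRESHOLD
```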
Residual risks: prompt injection, logic abuse, and enterprise exposure
The OpenClaw developers emphasize that VirusTotal integration is a significant milestone but not a complete solution. Some malicious skills are likely to rely on carefully crafted prompt injections and other logic‑level attacks against the AI model itself. These techniques manipulate the agent’s instructions and context rather than exploiting classic malware behavior, making them harder for traditional antivirus engines and static code analyzers to detect.
A particularly high‑risk scenario arises when OpenClaw is deployed on corporate endpoints without approval or oversight from IT and security teams. In such “shadow IT” cases, the AI agent may gain access to internal systems, confidential documents, customer data, and development tools. Skills installed from ClawHub or third‑party repositories then effectively become a new supply‑chain attack vector through a trusted automation component.
For organizations, this means AI assistant skills must be treated with the same rigor as executables, macros, and browser extensions. Best practices include source allowlisting, code review for critical skills, minimal‑privilege configurations, isolated execution environments, and centralized control over which AI tools may be installed and used on corporate assets.
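As an illustration of what source allowlisting and minimal privilege could look like in practice, the check below gates installation on an approved source and a capped capability set. The manifest fields (`source`, `capabilities`) and the policy values are invented for this sketch:

```python
# Hypothetical policy for corporate endpoints: only allowlisted sources,
# only a minimal set of capabilities (no wallet or finance access).
ALLOWED_SOURCES = {"clawhub/official"}
ALLOWED_CAPABILITIES = {"read_files", "http_get"}


def may_install(manifest: dict) -> bool:
    """Refuse any skill from an unknown source or with excess privileges."""
    return (
        manifest.get("source") in ALLOWED_SOURCES
        and set(manifest.get("capabilities", [])) <= ALLOWED_CAPABILITIES
    )


# A skill from an unknown repo that requests wallet access is refused:
print(may_install({"source": "third-party-repo",
                   "capabilities": ["wallet_send"]}))  # -> False
```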
OpenClaw’s security roadmap: threat modeling and transparency
Beyond the VirusTotal integration, the OpenClaw team has announced plans to strengthen proactive security across the platform. Upcoming steps include:
• Publishing a comprehensive threat model for OpenClaw and its skills ecosystem, clarifying attacker goals, capabilities, and likely attack paths.
• Releasing an open security roadmap with concrete priorities and timelines for new protections.
• Establishing a formal vulnerability disclosure program so researchers can report issues responsibly.
• Presenting the results of an independent security audit of the codebase, covering both the core agent and key ecosystem components.
Such transparency helps both skill developers and enterprise adopters align OpenClaw with their own risk management frameworks, design compensating controls, and make informed decisions about where and how the AI agent can safely operate.
The OpenClaw incident illustrates that AI agent ecosystems have already become viable targets for cybercriminals. Automated VirusTotal scanning, daily rescans, and community reporting are important advances, but users and organizations should not rely on them alone. Implementing strict extension policies, limiting AI agent access to critical resources, regularly reviewing installed skills, and following security advisories from platform maintainers are now essential practices. The sooner AI extensions are treated as high‑risk software components rather than harmless add‑ons, the lower the chances of data theft, financial loss, or infrastructure compromise driven by a seemingly helpful AI skill.