Cybersecurity researchers have documented the first confirmed theft of OpenClaw AI agent configuration files, marking a significant milestone in the evolution of credential-stealing malware. The stolen data included API keys, authentication tokens and cryptographic keys – information that can effectively unlock the “internal world” of a user’s personal AI assistant and its integrated services.
What OpenClaw Is and Why AI Agent Frameworks Are High-Value Targets
OpenClaw (previously known as ClawdBot and MoltBot) is a locally installed framework for running AI agents. It stores persistent configuration and long‑term “memory” on the user’s device, has access to local files, and can integrate with email clients, messaging apps and external cloud services. This deep integration has driven its rapid adoption, with the project reportedly attracting hundreds of thousands of stars on GitHub and strong interest from the wider AI ecosystem.
From a cybersecurity standpoint, this architecture makes OpenClaw a high‑risk asset. The agent often holds access tokens, API keys and personal context in one place, and it can act autonomously across multiple services. Security vendors have long warned that such AI assistants are likely to become priority targets for infostealers – malware families designed to exfiltrate credentials, browser data, cookies and confidential files at scale.
Vidar Infostealer: How the OpenClaw Configuration Was Stolen
According to data breach intelligence company Hudson Rock, an OpenClaw configuration was exfiltrated from a victim’s workstation by the Vidar infostealer on 13 February 2026. This incident is the first publicly documented case where OpenClaw configuration files have been obtained as part of a mass infostealer campaign.
Notably, Vidar did not contain a dedicated module for OpenClaw. Instead, it followed its typical playbook, scanning the filesystem for files whose names or contents matched terms such as “token” or “private key”. The .openclaw directory matched these generic patterns and was automatically added to the exfiltration set.
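This kind of generic pattern matching also works in reverse: defenders can audit their own machines for secret-looking files before an infostealer finds them. The sketch below is illustrative only; the patterns and scan logic are assumptions, not Vidar’s actual signatures.

```python
import os
import re

# Illustrative secret-indicator patterns; real infostealers ship broader,
# regularly updated lists.
SECRET_PATTERNS = re.compile(r"(token|private[_ ]?key|api[_ ]?key|secret)", re.IGNORECASE)

def find_secret_like_files(root: str, max_bytes: int = 65536) -> list[str]:
    """Return paths under `root` whose name or readable content matches a secret-like pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if SECRET_PATTERNS.search(name):
                hits.append(path)
                continue
            try:
                # Read only the first chunk of each file, as a fast sweep would.
                with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                    if SECRET_PATTERNS.search(fh.read(max_bytes)):
                        hits.append(path)
            except OSError:
                continue  # unreadable file: skip it
    return hits
```

Run against a home directory, a sweep like this will flag exactly the kind of material (including a .openclaw directory) that a stealer would collect, which makes it a cheap pre-emptive audit.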
Hudson Rock points out that this illustrates a broader trend: infostealers are evolving from simply harvesting browser passwords and cookies to compromising entire digital identities via personal AI agents. Industry reports from multiple vendors have already highlighted steady growth in Vidar, RedLine and similar malware families used in credential‑theft campaigns worldwide.
Stolen OpenClaw Files: Technical Impact and Abuse Scenarios
openclaw.json: Authentication Token and Base Identity Data
One of the compromised files, openclaw.json, contained an obfuscated user email address, the working directory path and a high‑entropy gateway authentication token. With such a token, an attacker could potentially connect to the victim’s local OpenClaw instance or impersonate a legitimate client when performing authenticated API calls, depending on the surrounding controls and network exposure.
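The report does not publish the file’s exact schema, but a configuration of this shape can be sketched as follows. The field names and values below are invented for illustration; the point is that a single flat JSON file carries everything needed to authenticate as the client, with no second factor inside it.

```python
import json

# Hypothetical openclaw.json contents; field names and values are
# illustrative assumptions, not the real schema.
RAW = """
{
  "email": "v***@example.com",
  "workdir": "/home/victim/.openclaw",
  "gateway_token": "Zk2mHq3J9xUvT7aQ"
}
"""

config = json.loads(RAW)

# Whoever holds gateway_token can present it in authenticated API calls;
# nothing in the file binds it to the original machine.
assert "gateway_token" in config
print(config["email"], "->", config["workdir"])
```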
device.json: Cryptographic Keys for the AI Agent Device
A second critical file, device.json, exposed fields such as publicKeyPem and privateKeyPem. This key pair is used to bind the device to the service and to sign messages. Possession of the private key enables an attacker to:
— cryptographically sign requests as if they originated from the victim’s device;
— bypass device‑integrity checks and “safe device” mechanisms that rely on key trust;
— decrypt logs or cloud data that are protected with the compromised key material.
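The first two bullets can be illustrated with a simplified stand-in. The sketch below uses an HMAC over the request body instead of the real PEM key pair (which the article does not publish), but the trust property is the same: any party holding the key material produces signatures that a server relying on key possession alone cannot distinguish from the device’s own.

```python
import hashlib
import hmac

# Stand-in for the exfiltrated key material; a real deployment would use the
# asymmetric private key from device.json, but the failure mode is identical.
STOLEN_DEVICE_KEY = b"hypothetical-key-material-from-device.json"

def sign_request(body: bytes, key: bytes) -> str:
    """Produce a signature the server accepts as coming from the bound device."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

# The victim's device and the attacker produce byte-identical signatures,
# so device-integrity checks based purely on key trust are bypassed.
victim_sig = sign_request(b'{"action":"read_mail"}', STOLEN_DEVICE_KEY)
attacker_sig = sign_request(b'{"action":"read_mail"}', STOLEN_DEVICE_KEY)
assert victim_sig == attacker_sig
```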
Soul.md and Memory Files: Long-Term Behaviour and Personal Context
Additional files, including Soul.md, AGENTS.md and MEMORY.md, were also exfiltrated. These documents define the agent’s behaviour and store persistent context, such as interaction history, fragments of conversations, work notes, calendar events and other personal information. In effect, they form a structured psychological and operational profile of the user.
Combined, these assets are sufficient for full‑spectrum identity compromise: from impersonating the user in integrated services to crafting highly tailored phishing and social‑engineering attacks that exploit their habits, relationships and current projects.
Why Cyber Attacks on AI Assistants Will Escalate
OpenClaw is only one example of a rapidly growing class of local AI agent frameworks entering both enterprise and consumer workflows. As adoption increases, threat actors are expected to:
— add explicit detection signatures for popular AI agents into infostealer search patterns;
— systematically target configuration and memory directories (e.g. .openclaw and equivalents);
— leverage stolen tokens and keys for stealthy long‑term access, lateral movement, email compromise and intrusion into corporate chat, storage and ticketing systems.
For cybercriminals, an AI agent is a powerful data hub: it “sees” more than any single browser or messaging client and stores this information in a format that is easy to parse and analyse.
Security Best Practices for Protecting OpenClaw and Other AI Agents
To reduce the risk of configuration theft and AI agent compromise, organisations and individual users should implement a combination of technical and organisational controls:
— Strengthen endpoint protection: deploy and maintain up‑to‑date antivirus and EDR/XDR solutions, enforce application control, and patch operating systems and software regularly to block infostealers such as Vidar before they execute or exfiltrate data.
— Minimise and segment secrets: store only the tokens and keys that are strictly necessary, separate personal and business configurations, and consider running AI agents inside dedicated user profiles, containers or virtual machines.
— Encrypt and harden configuration storage: where possible, keep configuration files in encrypted containers, restrict filesystem permissions on .openclaw and similar directories, and avoid placing secrets in plain‑text documentation files.
— Rotate keys and tokens frequently: implement automated key and token rotation. If a configuration leak occurs, the shortened lifetime significantly limits an attacker’s window of opportunity.
— Control integrations and privileges: review which email, cloud and corporate systems the AI agent can access and enforce the principle of least privilege, granting only the minimum scopes and roles required.
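Two of the controls above, permission hardening and token rotation, can be sketched in a few lines. The .openclaw path and the token length are assumptions based on the directory named in the report; revoking the old token server-side is still required and is outside this snippet.

```python
import os
import secrets
import stat

def harden_config_dir(path: str) -> None:
    """Restrict an agent configuration directory to the owning user only (POSIX)."""
    os.chmod(path, stat.S_IRWXU)  # drwx------ on the directory itself
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            os.chmod(full, stat.S_IRUSR | stat.S_IWUSR)  # -rw------- per file

def rotate_token() -> str:
    """Generate a fresh high-entropy replacement for a leaked gateway token."""
    return secrets.token_urlsafe(32)  # 32 random bytes, URL-safe encoded

# Example usage (path is an assumption based on the report):
#   harden_config_dir(os.path.expanduser("~/.openclaw"))
#   new_token = rotate_token()  # write into config, revoke old token server-side
```

Tight permissions do not stop malware running as the same user, but they do block other local accounts and careless sharing, and frequent rotation caps the value of anything that is exfiltrated.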
The theft of OpenClaw configuration files underscores that AI assistants have already become primary, not peripheral, targets for cybercriminals. As intelligent agents become more deeply embedded in business processes and everyday life, their infrastructure, keys and long‑term memory must be protected with the same rigour as email, browsers and mobile devices. Organisations and users who proactively harden their AI environments, monitor for infostealer activity and treat agent configurations as sensitive assets will be far better positioned to withstand this emerging class of attacks.