China’s National Computer Network Emergency Response Technical Team/Coordination Center (CNCERT/CC) has published an official advisory warning that the OpenClaw agentic AI platform poses a “high security risk” in its default configuration. According to the bulletin, the way OpenClaw is typically deployed creates favorable conditions for large‑scale incidents, including credential theft and unauthorized access to critical corporate systems.
Why OpenClaw Is Classified as a High-Risk AI Agent
CNCERT’s analysis, released via its official WeChat account, stresses that the default OpenClaw configuration is “extremely weak” from a security perspective. As an agentic AI system, OpenClaw can be granted access to external web resources, local system tools, and, in some deployments, internal enterprise infrastructure. This combination makes it an attractive and powerful target for attackers.
A central concern is the risk of prompt injection attacks. In this scenario, an attacker embeds hidden instructions into websites, documents, APIs, or other data sources that the AI agent automatically consumes. When OpenClaw processes this poisoned content, it may be tricked into actions that directly contradict the user’s intent, such as exfiltrating sensitive data, modifying files, or executing commands inside corporate infrastructure.
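The injection scenario described above can be illustrated with a minimal, hypothetical pre‑filter that screens fetched content for instruction‑like phrasing before it reaches the agent's context. The pattern list and function names are illustrative assumptions, not part of any real OpenClaw API, and keyword matching alone is nowhere near a complete defense:

```python
import re

# Illustrative (and deliberately incomplete) list of instruction-like
# phrasings an attacker might plant in a web page or document.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any |previous )*instructions",
    r"you are now",
    r"send .* to http",
    r"run the following (command|shell)",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the text matches any known injection phrasing."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

In practice, such a filter would only be one layer: architectural controls (treating all fetched content as untrusted data, never as instructions) matter far more than pattern matching.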
CNCERT also highlights the danger of malicious or compromised skills and plugins that extend OpenClaw’s functionality. These components can introduce a classic supply chain risk into the AI ecosystem. If skills or plugins are created or modified by attackers, they can silently intercept data, harvest access tokens, or create backdoors, enabling multi‑stage breaches across otherwise well‑protected networks.
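One common mitigation for this class of supply‑chain risk is integrity pinning: refusing to load any skill or plugin whose file hash does not match a locally maintained allowlist. The sketch below assumes a hypothetical file‑based plugin layout and allowlist format; nothing here is a real OpenClaw interface:

```python
import hashlib

# Hypothetical allowlist mapping plugin names to pinned SHA-256 digests.
# (The digest below is sha256(b"hello"), used purely for illustration.)
PINNED_HASHES = {
    "web_search.py": "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def plugin_is_trusted(name: str, content: bytes) -> bool:
    """Return True only if the plugin bytes match the pinned hash."""
    digest = hashlib.sha256(content).hexdigest()
    return PINNED_HASHES.get(name) == digest
```

Signature verification with a maintained key infrastructure would be stronger still, but even simple hash pinning blocks silent, in‑place tampering of an already‑vetted extension.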
Known Vulnerabilities and the Human Factor in AI Agent Security
The advisory notes that several serious vulnerabilities have already been discovered in OpenClaw, enabling credential theft and privilege escalation under certain conditions. Such flaws become particularly dangerous when the agent is bound to high‑value accounts: cloud management consoles, DevOps pipelines, CI/CD systems, or IT infrastructure orchestration tools.
CNCERT further points to the role of human error when using agentic AI. Because OpenClaw can execute file operations, call APIs, or control services on behalf of the user, overly broad permissions combined with vague or incorrect user instructions can lead to unintended damage. Misconfigured policies may allow a single mistaken prompt to delete critical data, disrupt business processes, or halt key production systems.
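The permission problem above is often addressed with a deny‑by‑default policy gate that additionally requires explicit confirmation for destructive operations. The roles and action names below are hypothetical, a minimal sketch of the pattern rather than any real OpenClaw policy mechanism:

```python
# Deny-by-default authorization: an action must be granted to the role,
# and destructive actions additionally need explicit confirmation.
DESTRUCTIVE = {"delete_file", "drop_table", "stop_service"}
ALLOWED = {
    "viewer": {"read_file", "call_api"},
    "operator": {"read_file", "call_api", "stop_service"},
}

def authorize(role: str, action: str, confirmed: bool = False) -> bool:
    """Return True only for granted actions; destructive ones need confirmed=True."""
    if action not in ALLOWED.get(role, set()):
        return False
    if action in DESTRUCTIVE and not confirmed:
        return False
    return True
```

A gate like this means a single vague or mistaken prompt cannot, by itself, trigger a destructive operation: the agent's request still has to pass a role grant and an out‑of‑band confirmation.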
Industry experience reinforces this risk. Studies such as IBM’s “Cost of a Data Breach Report 2023” estimate the average breach cost at over USD 4 million, with misconfiguration and excessive privileges among the recurring root causes. When an AI agent with automation capabilities is misused or compromised, the impact can mirror that of an insider with powerful remote administration tools.
Core CNCERT Recommendations for Securing OpenClaw Deployments
CNCERT recommends that organizations treat OpenClaw as a potentially untrusted automated agent that must never receive unrestricted access to enterprise infrastructure. Key technical and organizational measures include the following.
1. Container isolation and minimal privileges. OpenClaw should run in a hardened containerized environment (e.g., Docker, Podman) with the least privileges necessary and strict access control to host resources and internal services. This limits blast radius if the AI agent or its plugins are compromised.
2. Closing the management port to the public internet. The administrative interface must not be exposed directly to the internet. CNCERT advises restricting access through VPNs, IP allowlists, and additional authentication layers, aligning with standard best practices for remote administration endpoints.
3. Strong authentication and granular access control. Organizations should enforce multi‑factor authentication (MFA), role‑based access control (RBAC), and strict application of the principle of least privilege. Accounts and API keys used by OpenClaw must have tightly scoped permissions and must not be shared with other systems or users.
4. Disabling auto‑updates and tightly controlling plugins. Automatic updates and one‑click plugin installation can become stealthy vectors for malicious code. CNCERT recommends manual review and approval of all updates, only allowing vetted extensions from trusted and verified sources, and maintaining an inventory of all enabled skills and plugins.
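The four recommendations above lend themselves to automated checking. The sketch below is a tiny "linter" for a hypothetical deployment configuration; the keys and defaults are illustrative assumptions, not a real OpenClaw schema:

```python
# Hypothetical config linter covering the four CNCERT recommendations:
# container isolation, no public management port, MFA, manual updates.
def lint_deployment(cfg: dict) -> list[str]:
    issues = []
    if not cfg.get("containerized", False):
        issues.append("run inside a hardened container with minimal privileges")
    if cfg.get("admin_bind", "0.0.0.0") != "127.0.0.1":
        issues.append("management port must not be exposed to the internet")
    if not cfg.get("mfa_enabled", False):
        issues.append("enable multi-factor authentication")
    if cfg.get("auto_update", True):
        issues.append("disable auto-updates; review releases manually")
    return issues
```

Running such a check in CI, or at agent startup, turns the advisory's checklist into an enforceable gate rather than a one‑time review.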
Gartner’s Assessment, Rapid Adoption, and China’s Regulatory Response
The CNCERT warning echoes earlier findings from Gartner. The analyst firm previously classified OpenClaw as an “unacceptable cybersecurity risk” for enterprise environments, recommending that organizations deploy the tool only in strictly isolated environments, with disposable credentials and tightly limited integration to production systems.
Despite these concerns, OpenClaw has seen rapid adoption in China’s booming agentic AI market. Major cloud providers have promoted “one‑click deployment” options, lowering the barrier to experimentation. According to local media reports, a queue of around a thousand people formed outside Tencent’s Shenzhen headquarters earlier this year, where engineers offered free on‑site OpenClaw installation for interested users.
Restrictions in Government Agencies and State-Owned Banks
Following the CNCERT advisory, regulators and large organizations began taking concrete steps. Several government bodies and state‑owned banks reportedly banned the installation of OpenClaw on workstations. Employees in some entities were instructed to report any existing installations so that security teams could review configurations and remove the software if necessary.
In certain high‑security environments, restrictions have even extended to personal devices connected to corporate networks. This approach is consistent with mature cybersecurity practices, where any unapproved AI agent or automation tool is treated as a potential uncontrolled entry point into the organization’s infrastructure.
Agentic AI Systems as a New Class of High-Risk Assets
The situation around OpenClaw underscores a broader trend: agentic AI platforms are emerging as a new category of high‑risk digital assets. Functionally, they resemble powerful remote administration tools or RPA bots, but because they act on natural‑language instructions and external data, misuse and manipulation are both easier to carry out and harder to detect.
For organizations already experimenting with OpenClaw or similar AI agents, it is advisable to inventory all deployments, limit privileges, enforce containerization and network segmentation, and define formal policies for who can use these tools and for what purposes. Logging, monitoring, and incident response play a crucial role in detecting anomalous AI‑driven actions early.
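The logging and monitoring step above can be sketched as a structured audit trail with a simple anomaly flag: any agent action outside an approved set is escalated for incident‑response review. The action names and approved set are hypothetical:

```python
import json
import logging
import time

# Structured audit log for agent actions. Anything outside the approved
# set is logged at WARNING level so monitoring can alert on it early.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

APPROVED_ACTIONS = {"read_file", "call_api"}

def audit(action: str, target: str) -> bool:
    """Log the action as JSON; return False (and warn) if it is anomalous."""
    record = json.dumps({"ts": time.time(), "action": action, "target": target})
    if action in APPROVED_ACTIONS:
        log.info(record)
        return True
    log.warning("ANOMALY %s", record)
    return False
```

Feeding such structured records into a SIEM gives security teams the same visibility over AI‑driven actions that they already expect for human administrator sessions.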
Enterprises planning to adopt agentic AI should embed cyber risk assessment and secure architecture requirements into the earliest project phases. Treating AI agents as production‑grade software—subject to code review, threat modeling, and continuous hardening—allows organizations to harness automation benefits without turning cutting‑edge AI into the weakest link in their security posture.