A critical vulnerability, CVE-2026-25874, has been identified in LeRobot, the open-source robotics platform from Hugging Face, enabling unauthenticated remote code execution (RCE) on both server and client systems. The flaw, rated CVSS 9.3, affects a project with nearly 24,000 GitHub stars that is widely used for AI research, simulation, and robotics prototyping, significantly amplifying its impact on the AI and robotics ecosystem.
Unsafe deserialization in LeRobot’s asynchronous inference pipeline
The root cause of CVE-2026-25874 is unsafe deserialization of untrusted data within LeRobot’s asynchronous inference pipeline. The platform’s policy server and robot client components use Python’s pickle.loads() function to deserialize messages received over gRPC channels that lack authentication and TLS encryption. In practice, this means the service blindly trusts binary data coming from the network as if it were safe and controlled.
According to the official security advisory, an unauthenticated remote attacker with network access to the relevant gRPC interfaces can send a specially crafted pickle payload via the SendPolicyInstructions, SendObservations, or GetActions calls. Once deserialized, this payload can trigger arbitrary Python code execution within the context of the vulnerable process.
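The anti-pattern the advisory describes can be sketched roughly as follows; the servicer class, method body, and request field below are illustrative stand-ins, not LeRobot's actual implementation:

```python
import pickle

import grpc  # assumes grpcio is installed


# Hypothetical sketch of a policy server handler that trusts client bytes.
class PolicyServicer:  # would normally inherit from a generated gRPC servicer
    def SendObservations(self, request, context):
        # The request carries an opaque bytes field produced by the client.
        # Feeding it straight to pickle.loads() runs attacker-controlled
        # reconstruction logic before any validation can happen.
        observation = pickle.loads(request.data)  # unsafe on untrusted input
        return self._run_policy(observation)

    def _run_policy(self, observation):
        ...  # inference logic
```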
Why pickle is dangerous with untrusted input
Python’s pickle module is known to be inherently unsafe for untrusted data. As OWASP and multiple language security guides emphasize, insecure deserialization is a high‑impact vulnerability class because many serialization formats, including pickle, can execute code during object reconstruction. Pickle was designed for trusted, local object persistence—not for processing data from users, networks, or external clients.
In LeRobot’s case, the PolicyServer component accepts serialized data from external clients and feeds it directly to pickle.loads(). An attacker can craft a pickle object that executes system commands on deserialization, leading to full compromise of the host running LeRobot. Since these hosts often include GPUs or TPUs and reside in sensitive network segments, successful exploitation can provide an attacker with high‑value access.
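A minimal, self-contained illustration of that behavior (using a deliberately harmless command) shows how a pickled object can run arbitrary code the moment it is deserialized:

```python
import os
import pickle


# The __reduce__ hook lets an object dictate which callable runs when it is
# rebuilt. Here the "payload" merely echoes a string, but an attacker can
# substitute any system command.
class Exploit:
    def __reduce__(self):
        return (os.system, ("echo pickle code execution",))


payload = pickle.dumps(Exploit())

# On the receiving side, simply calling pickle.loads() invokes os.system()
# before the caller ever sees the resulting object.
pickle.loads(payload)
```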
Security impact on AI inference and robotics environments
Resecurity analysts highlight that AI inference services and robotics control systems are often deployed with elevated privileges and placed in trusted infrastructure zones. Exploiting CVE-2026-25874 can therefore give an attacker more than just access to a single container or process.
Once the vulnerability is exploited, an attacker may gain:
- Complete control over the inference node, including OS‑level command execution.
- Pivot access into internal services and network segments reachable from the compromised host.
- Theft, tampering, or poisoning of training and production datasets, with direct impact on model integrity.
- Abuse of expensive compute resources (GPU/CPU) for cryptomining, password cracking, or further attacks.
- Manipulation of robot behavior by altering control policies, observations, or actions delivered to robots.
In robotics, the risk is not limited to data or infrastructure. Compromised robots in industrial, logistics, or medical environments introduce the possibility of physical safety incidents if an attacker can influence movement, force, or task execution.
Discovery timeline and LeRobot project response
The vulnerability was discovered by security researcher Valentin Lobstein of VulnCheck, who published a technical analysis and confirmed exploitation against LeRobot 0.4.3. At the time of reporting, a fix had not yet been released; maintainers indicated that remediation is planned for the 0.6.0 branch, requiring substantial refactoring of the affected components.
A similar defect was independently reported earlier, in December 2025, by a researcher using the pseudonym “chenpinji”. The LeRobot team acknowledged the severity in January, noting that the responsible code was originally experimental and would need to be almost completely redesigned to meet production‑grade security requirements.
LeRobot’s technical lead, Steven Palma, emphasized that the framework has historically been treated as a research and prototyping tool, not as a hardened production platform. As adoption in real‑world deployments grows, the project plans to increase its focus on secure development practices and leverage the open‑source community to identify and remediate vulnerabilities more quickly.
Pickle under scrutiny and the Safetensors irony
CVE-2026-25874 reinforces a long‑standing lesson in application security: pickle must never be used with untrusted or network-supplied data. Any file, message, or stream that is deserialized without strict validation and isolation can become a direct RCE vector.
The case is particularly notable because Hugging Face previously introduced Safetensors, a format explicitly designed as a safe alternative to pickle for storing machine learning models and tensors. Yet in the LeRobot robotics framework, network-facing code paths still rely on pickle.loads(), reportedly even suppressing warnings from static analysis tools in the process.
Mitigation guidance for AI and robotics deployments
For developers and organizations deploying LeRobot or similar AI/robotics frameworks, several security practices are essential:
- Avoid pickle for untrusted input. Use formats that do not support arbitrary code execution, such as JSON, Protocol Buffers, CBOR, or Safetensors (see the sketch after this list).
- Enforce authentication and encryption (TLS or mTLS) on gRPC and other control channels to prevent unauthenticated access and eavesdropping.
- Restrict network exposure of PolicyServer and inference endpoints using firewalls, service meshes, or private networks.
- Apply least privilege to service accounts, containers, and hosts so that a compromise has minimal blast radius.
- Monitor for anomalous activity, including unusual outbound connections, elevated resource usage, and unexpected process execution on inference nodes.
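As a rough sketch of the first two points, assuming a hypothetical servicer and locally provisioned certificate files, a hardened endpoint might exchange JSON payloads and accept only mutually authenticated TLS connections:

```python
import json
from concurrent import futures

import grpc  # assumes grpcio is installed


def decode_observation(raw: bytes) -> dict:
    # json.loads() cannot execute code during parsing; validate the fields
    # explicitly after decoding instead of trusting the wire format.
    obs = json.loads(raw.decode("utf-8"))
    if not isinstance(obs, dict) or "joint_positions" not in obs:
        raise ValueError("malformed observation")
    return obs


def serve() -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    # add_PolicyServicer_to_server(PolicyServicer(), server)  # generated stub wiring

    # server.key/server.crt/client_ca.crt are placeholders for your own PKI.
    with open("server.key", "rb") as key, open("server.crt", "rb") as crt, \
         open("client_ca.crt", "rb") as ca:
        creds = grpc.ssl_server_credentials(
            [(key.read(), crt.read())],
            root_certificates=ca.read(),
            require_client_auth=True,  # mTLS: only known clients may connect
        )
    server.add_secure_port("0.0.0.0:50051", creds)
    server.start()
    server.wait_for_termination()
```

The key design choice is that deserialization and transport security are independent controls: even with mTLS in place, the payload format should still be one that cannot execute code during parsing.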
As AI and robotics move from labs into production, security must be treated as a first‑class requirement, not an afterthought. Organizations relying on LeRobot should closely track the upcoming 0.6.0 fixes, limit exposure of vulnerable components, and review their entire AI stack for unsafe deserialization and weak network controls. Building secure AI‑driven robotics now will significantly reduce the likelihood that the next critical CVE turns into a real‑world security or safety incident.