
Critical Unpatched RCE Flaw in Hugging Face LeRobot Threatens Physical AI Systems Worldwide

Researchers have disclosed CVE-2026-25874, a CVSS 9.8 critical vulnerability in Hugging Face's LeRobot open-source robotics platform. The flaw enables unauthenticated remote code execution through unsafe Python pickle deserialization over exposed gRPC endpoints, and it remains unpatched for a project with more than 21,500 GitHub stars.


A critical security vulnerability in Hugging Face’s open-source robotics platform LeRobot has been publicly disclosed, carrying a CVSS severity score of 9.8 out of 10 — and it remains unpatched. The flaw, tracked as CVE-2026-25874, allows an unauthenticated attacker to execute arbitrary commands on any vulnerable LeRobot deployment by exploiting unsafe Python pickle deserialization across exposed gRPC network endpoints.

The disclosure arrives at a fraught moment for the physical AI ecosystem. LeRobot has become the dominant open-source framework for training and deploying robotic manipulation policies, with more than 21,500 GitHub stars and active deployment in research institutions, university robotics labs, and an increasing number of early-stage production environments. The vulnerability’s combination of maximum network exploitability — no credentials required, no prior access needed — and the nature of the systems it exposes (GPU clusters, robotics hardware, internal research networks) makes it among the most consequential security disclosures in the AI infrastructure space to date.

How the Attack Works

The vulnerability lives in LeRobot’s async inference architecture, specifically in the PolicyServer component, which offloads compute-intensive policy inference to a GPU-backed server via gRPC remote procedure calls.

The core problem is a single line: the PolicyServer uses Python’s built-in pickle.loads() function to deserialize incoming data on all RPC endpoints. Pickle deserialization is a well-documented dangerous operation: a crafted payload can embed arbitrary Python code that pickle executes during the deserialization process itself, before any application-level validation runs. An attacker who can reach the gRPC endpoint only needs to send a malicious pickle payload, and the server will run the code embedded in it.
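The danger is easy to demonstrate in isolation. The sketch below is illustrative, not taken from LeRobot’s source: `__reduce__` tells pickle which callable to invoke while the payload is being loaded, and a real exploit would name `os.system` or `subprocess.call` rather than the harmless `eval` used here.

```python
import pickle

class Payload:
    """Stand-in for an attacker-crafted object."""
    def __reduce__(self):
        # pickle serializes this as "call eval('6 * 7') on load".
        # A real attacker would substitute os.system and a shell command.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # the bytes an attacker would send over gRPC
result = pickle.loads(blob)      # eval() runs here, inside deserialization
print(result)                    # 42
```

No validation the server performs afterward can help, because the attacker’s code has already run inside `loads()` before the application ever sees the result.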

Compounding the problem significantly: the gRPC service is configured with add_insecure_port(), meaning all communications occur without Transport Layer Security (TLS) encryption and without any authentication controls whatsoever. There are no tokens to steal, no credentials to brute-force — the endpoint is simply open to anyone on the network.

In research and university lab environments, where LeRobot servers are commonly configured to be accessible across an internal network to allow multiple researchers to share GPU inference capacity, “anyone on the network” can be a very large attack surface.

What Attackers Can Reach

The severity of this vulnerability is amplified by what LeRobot servers typically have access to.

Because LeRobot is designed for GPU-backed inference, servers running the platform frequently operate with elevated system privileges, sometimes root, to manage CUDA drivers and GPU hardware. A successful RCE exploit therefore often yields a shell with elevated permissions on the underlying host.

From there, an attacker gains access to:

  • Robotics hardware interfaces: Physical robots connected to the server can be issued commands. In a research setting, this could mean causing a robotic arm to move in dangerous, uncontrolled ways while researchers or staff are nearby.
  • Training datasets and model weights: LeRobot research environments typically store large proprietary datasets representing months or years of data collection, along with trained policy weights. Both are high-value targets for IP theft or destruction.
  • Internal network access: An RCE foothold on a GPU server that sits inside a research network’s trusted perimeter allows lateral movement to other systems on the same network segment — workstations, file servers, data repositories.
  • Cloud compute credentials: Many LeRobot deployments run on cloud-provisioned GPU instances, where instance metadata services expose cloud provider credentials. A compromised server is frequently a stepping stone to the wider cloud account.

The Disclosure Timeline and Patch Status

Security researchers submitted the vulnerability report to Hugging Face through responsible disclosure channels. As of publication, no patch has been released for LeRobot. The Hugging Face team has acknowledged the report, but the fix requires architectural changes to the inference pipeline, not a simple single-line patch, because the root issue involves replacing pickle-based serialization throughout the PolicyServer with a safer alternative.

Researchers and security teams have confirmed that CVE-2026-25874 affects all versions of LeRobot up to and including the most recent release.

Immediate Mitigations

While waiting for an official patch, organizations running LeRobot in any networked environment should take the following steps immediately:

Network isolation. The most impactful immediate measure: ensure the gRPC PolicyServer endpoint is not accessible from untrusted network segments. If LeRobot inference servers must be shared across a team, use a VPN or network firewall rules to restrict access to specific, known IP addresses. If the server is public-facing in any way, take it offline until the patch is available.

Replace pickle serialization. Developers who can modify their LeRobot deployment should replace pickle.loads() in the PolicyServer with safer serialization alternatives. The researchers who disclosed CVE-2026-25874 recommend JSON serialization for simple data types, native protobuf field encoding for structured RPC data, or Hugging Face’s own safetensors format for model weight transfer. None of these alternatives allow arbitrary code execution during deserialization.
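For plain-data payloads, the swap can be mechanical. Below is a minimal sketch under stated assumptions (the function name and message shape are hypothetical; LeRobot’s actual schema will differ): `json.loads` has no mechanism for invoking callables during parsing, so a crafted payload can at worst be malformed, never executable.

```python
import json

def safe_loads(raw: bytes) -> dict:
    """Drop-in replacement for pickle.loads for simple control messages.
    JSON parsing can only yield dicts, lists, strings, numbers, booleans,
    and None -- it cannot trigger code execution during deserialization."""
    obj = json.loads(raw.decode("utf-8"))
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object payload")
    return obj

# Example round-trip for a hypothetical action message.
payload = json.dumps({"action": [0.1, -0.2, 0.3]}).encode("utf-8")
decoded = safe_loads(payload)
print(decoded)   # {'action': [0.1, -0.2, 0.3]}
```

JSON covers simple control messages; as the researchers note, tensor data such as model weights belongs in safetensors or native protobuf fields rather than stringified JSON.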

Enable TLS and authentication. Switch from add_insecure_port() to add_secure_port() with a valid TLS certificate, and implement gRPC interceptors that enforce token-based access control on all incoming requests. These changes prevent network-level interception and require attackers to obtain valid credentials before reaching the deserializer — raising the bar significantly even before the pickle issue is resolved.
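The token check at the heart of such an interceptor is a few lines of standard-library Python. The sketch below uses hypothetical names and omits the surrounding `grpc.ServerInterceptor` wiring and `ssl_server_credentials` setup; it uses a constant-time comparison so the check itself does not leak timing information.

```python
import hmac

# Assumption: a per-deployment secret distributed out of band
# (e.g. via an environment variable), never hard-coded in production.
EXPECTED = "Bearer replace-with-a-per-deployment-secret"

def authorized(metadata: dict) -> bool:
    """The check a gRPC server interceptor would run on every incoming
    call, before the request body ever reaches the deserializer."""
    supplied = metadata.get("authorization", "")
    return hmac.compare_digest(supplied, EXPECTED)

print(authorized({"authorization": EXPECTED}))  # True
print(authorized({"authorization": "Bearer wrong-token"}))  # False
```

Rejecting unauthenticated calls at the interceptor means a would-be attacker never gets their payload in front of `pickle.loads()` in the first place, which is why this mitigation helps even before the serialization fix lands.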

Audit access logs. Review any existing gRPC server access logs for unusual connection patterns. The pickle attack leaves traces in server logs if logging is enabled; unusual sources or malformed payloads that triggered error responses may indicate prior exploitation attempts.

The Broader Pattern

CVE-2026-25874 is not an isolated incident. It reflects a structural tension that has followed the rapid growth of open-source AI tooling: the teams building these frameworks are typically research engineers optimizing for capability, flexibility, and developer experience, not security engineers designing for adversarial network environments.

LeRobot was created to democratize robotic learning research — to give university labs and independent researchers access to the same kind of policy training pipelines that large, well-resourced robotics companies build internally. That mission has been extraordinarily successful, as the 21,500 GitHub stars attest. But the framework was not designed with the assumption that it would be running on network-exposed servers in environments where adversaries might be present.

As robotics AI moves from research settings into industrial and commercial deployments — where these same frameworks are increasingly used as starting points — the security posture of the foundational open-source tooling becomes critical infrastructure. The robotics security community has been warning for several years that the path from “open-source training framework” to “factory floor deployment” is narrowing, and that the security practices appropriate for an academic lab are not adequate for industrial environments.

CVE-2026-25874 is a proof of concept for that warning. Hugging Face has built a remarkable platform that has accelerated robotics research globally. The patch cannot come fast enough.
