Claude Code Vulnerabilities Expose New AI Developer Tool Supply Chain Risks

CyberSecureFox 🦊

Security researchers at Check Point have identified three serious vulnerabilities in Anthropic’s Claude Code AI developer assistant. These flaws allowed attackers to execute arbitrary code on a developer’s machine and silently steal API keys simply by getting the victim to open a malicious repository in Claude Code—no manual script execution required.

AI assistant configuration as a software supply chain attack vector

Claude Code stores its project configuration in a dedicated file, .claude/settings.json, located inside the source repository. This design simplifies collaboration by sharing common settings—access to external tools, hooks, MCP servers, and other integration parameters—across a development team.

However, this same mechanism effectively introduces a new layer in the software supply chain. Any contributor with commit rights can covertly alter the configuration and embed malicious settings. As soon as other team members clone and open that repository in Claude Code, the altered configuration is automatically applied.

Repository-based settings expand the developer attack surface

Traditional threats for developers were primarily associated with running untrusted code, installing third-party dependencies, or importing compromised packages. With Claude Code and similar AI coding assistants, the mere act of opening a project can become dangerous.

The assistant assumes that configuration stored alongside the code is trustworthy. Check Point’s proof‑of‑concept attacks show that this trust model can be abused to trigger code execution, manipulate external integrations, and exfiltrate secrets without explicit user interaction. This trend mirrors broader software supply chain incidents seen in recent years, such as compromised build systems and malicious library updates.

Vulnerability 1: arbitrary code execution via hooks (CVSS 8.7)

The first vulnerability (no public CVE, CVSS 8.7) involved the hooks mechanism—custom shell commands that Claude Code can run automatically at different stages of working with a project. These hooks are defined in the project configuration file.

Check Point found that Claude Code executed these hooks without asking the user for confirmation. That meant a malicious repository only had to define a harmful hook; once opened in Claude Code, arbitrary commands would run on the developer’s machine.

In the demonstration, the researchers launched a benign external application. In a real attack, the hook could start a reverse shell, enabling remote access, install backdoors, or tamper with local source code and credentials. Anthropic addressed this issue in Claude Code version 1.0.87 (August 2025), tightening validation and user prompts around hook execution.

Vulnerability 2: bypassing MCP server trust dialogs (CVE-2025-59536, CVSS 8.7)

The second flaw, CVE-2025-59536 with a CVSS score of 8.7, targeted Claude Code’s integration with Model Context Protocol (MCP) servers. MCP servers allow the assistant to interact with external tools and services, often with broad access to project data and the local environment.

After the first patch, Check Point discovered that two configuration options could be abused to automatically approve any MCP server, effectively bypassing the interactive trust dialogs intended to protect users from untrusted integrations.

Critically, a malicious MCP server could be activated as soon as Claude Code started, before the user saw any project trust prompt. This opened the door to silent command execution, data harvesting from the workspace, or exfiltration of source code to attacker‑controlled infrastructure. Anthropic fixed this vulnerability in version 1.0.111, released in September 2025.

Vulnerability 3: API key theft via ANTHROPIC_BASE_URL override (CVE-2026-21852)

The third issue, CVE-2026-21852 with a CVSS score of 5.3, centered on the environment variable ANTHROPIC_BASE_URL. This parameter defines the API endpoint used by Claude Code to communicate with Anthropic’s services.

Researchers demonstrated that this endpoint could be overridden through the project configuration, transparently routing all assistant traffic through an attacker‑controlled proxy server. Traffic inspection showed that Claude Code transmitted the API key in plaintext with every request.

Moreover, according to Anthropic, these requests were issued before any trust dialog was presented to the user, making API key leakage fully automatic when a malicious project was opened. A compromised key granted access to all files within the Claude Code workspace: the attacker could upload, delete, or modify files via API.

This vulnerability was remediated in version 2.0.65 (January 2026), which changed how endpoints and secrets are handled and strengthened controls around outbound requests.

How Claude Code vulnerabilities reshape the AI development threat model

Together, these vulnerabilities underline a structural shift: AI coding assistants are becoming independent attack surfaces. Unlike classic IDEs, they typically hold simultaneous access to source code, local files, network resources, and sensitive secrets such as API keys and tokens.

Industry guidance on secure software development, including frameworks such as NIST’s Secure Software Development Framework (SSDF), already emphasizes supply chain and toolchain hardening. AI assistants now clearly belong in the category of privileged development tools that must be governed with the same rigor as CI/CD pipelines, artifact repositories, and build servers.

Practical security recommendations for development and security teams

Based on the disclosed Claude Code vulnerabilities and broader supply chain experience, organizations should treat AI coding assistants as high‑value assets and implement additional controls:

First, rigorously review configuration files like .claude/settings.json before use, especially in third‑party, public, or forked repositories.
Second, limit the privileges of API keys, apply the principle of least privilege, and use separate, rate‑limited, easily revocable keys for development environments.
Third, run AI assistants in isolated environments—such as sandboxes, separate user profiles, or containers—to minimize impact if compromise occurs.
Fourth, keep Claude Code fully updated (at least version 2.0.65 or later) and monitor the vendor’s security advisories for new patches and hardening guidance.
Fifth, train developers to recognize the risks of opening untrusted projects and to scrutinize automatic configuration, hooks, and external integrations before granting trust.

The Claude Code case illustrates that as AI systems become deeply embedded in software development workflows, configuration files, trust mechanisms, and API secrets are now as critical as the source code itself. Organizations that proactively update their threat models, extend security controls to AI tools, and build security awareness into developer culture will be better positioned to withstand the next generation of supply chain attacks targeting the development process.

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.