Cybersecurity researchers have uncovered multiple critical vulnerabilities in Cursor AI, a widely-used artificial intelligence-powered code editor, that could allow attackers to execute arbitrary code on developer systems without user knowledge. These security flaws, centered around the Model Context Protocol (MCP), represent a new class of threats targeting AI-enhanced development tools and highlight significant risks in the modern software development ecosystem.
Model Context Protocol Creates New Attack Surfaces
The Model Context Protocol, introduced by Anthropic in November 2024, was designed as an open standard to connect AI systems with external data sources. While MCP streamlines integrations between AI tools and various data repositories, this simplified connectivity has inadvertently created additional attack vectors that malicious actors can exploit.
Security researchers from Check Point, Aim Labs, BackSlash, and HiddenLayer identified that MCP configurations can contain executable commands that run automatically when projects are opened in the editor. This functionality creates opportunities for threat actors to inject malicious code into developer workflows without detection, potentially compromising entire development environments.
MCPoison Attack Exploits Supply Chain Weaknesses
Check Point researchers discovered a critical remote code execution vulnerability designated CVE-2025-54136 with a CVSS score of 7.2. This flaw, dubbed “MCPoison,” exploits weaknesses in MCP configuration validation processes and poses significant risks to software supply chains.
The attack mechanism leverages a one-time approval system for MCP configurations. Once a user initially approves a configuration, Cursor stops requesting validation for subsequent modifications. Attackers can introduce seemingly benign MCP configurations to repositories, wait for approval, then stealthily replace the content with malicious code.
As proof-of-concept, researchers demonstrated how an approved command could be replaced with a reverse shell, providing persistent remote access to victim systems each time the project launches. This attack vector proves particularly dangerous in collaborative development environments where configuration changes occur frequently and may go unnoticed.
CurXecute Vulnerability Enables Prompt Injection Attacks
Aim Labs security specialists identified an even more severe vulnerability, CVE-2025-54135 with a CVSS score of 8.6, known as “CurXecute.” This critical flaw allows attackers to exploit indirect prompt injections to create and execute MCP files without requiring user confirmation.
The CurXecute attack operates by creating dotfiles (such as .cursor/mcp.json) through carefully crafted prompts. The vulnerability’s severity was amplified by the fact that proposed changes were written to disk and executed before users could approve or reject them, effectively bypassing security controls entirely.
Auto-Run Protection Bypass Techniques
A third vulnerability discovered by BackSlash and HiddenLayer teams targeted Cursor’s Auto-Run protection mechanism. Despite the availability of configuration options to specify commands requiring user confirmation, researchers found methods to circumvent these safeguards through prompt injection in git repository README file comments.
When developers cloned compromised repositories, Cursor would automatically read and execute malicious instructions embedded in the comments. The research teams identified at least four distinct methods to bypass the denylist and execute unauthorized commands on target systems.
Rapid Response and Security Updates
Cursor developers responded promptly to these security discoveries, releasing version 1.3 on July 29, 2025, with comprehensive security improvements. The update implements mandatory confirmation requirements for all MCP configuration changes, effectively neutralizing MCPoison-style attacks.
Additional security enhancements include strengthened validation for MCP file creation, restricted automatic command execution capabilities in Auto-Run mode, and improved user consent mechanisms for potentially dangerous operations. These changes represent a significant step forward in securing AI-powered development environments.
The vulnerabilities discovered in Cursor AI underscore the evolving security challenges within AI-enhanced development tools. While the Model Context Protocol offers valuable functionality for AI integrations, it requires careful implementation of trust models and validation mechanisms. Organizations using AI-powered development tools should prioritize regular security updates, implement strict code review processes for configuration changes, and maintain awareness of emerging threat vectors in AI-assisted development environments. As these tools become increasingly prevalent, the cybersecurity community continues monitoring and researching potential risks to ensure safe adoption of AI technologies in software development workflows.