Flowise Vulnerability CVE-2025-59528: Critical RCE Threat to AI Infrastructure

CyberSecureFox

The open‑source AI orchestration platform Flowise has been hit by a critical security vulnerability, tracked as CVE-2025-59528 with the maximum CVSS score of 10.0. According to researchers at VulnCheck, the flaw is already being actively exploited, allowing attackers to execute arbitrary code on Flowise servers and potentially compromise connected corporate AI workflows and data stores.

How CVE-2025-59528 Works: Code Injection in Flowise CustomMCP

The vulnerability resides in the CustomMCP component of Flowise, which is used to connect the platform to external Model Context Protocol (MCP) servers. Administrators configure this node using a parameter named mcpServerConfig, which defines how the MCP server should be set up.

The core issue is that Flowise processes this configuration string in a way that evaluates embedded JavaScript code without robust security validation. In practice, this introduces a classic code injection scenario: malicious JavaScript supplied in the configuration can be executed directly within the Node.js runtime that powers Flowise.
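The vulnerability class can be illustrated with a minimal sketch. This is not Flowise's actual code, only a demonstration of the difference between evaluating a configuration string as JavaScript and parsing it strictly as data:

```javascript
// Illustrative sketch only -- NOT Flowise's actual implementation.
// An eval-style parser executes whatever JavaScript the string contains:
function unsafeParseConfig(configString) {
  // Anything embedded in the string runs inside the Node.js process.
  return new Function(`return (${configString})`)();
}

// A safe parser treats the configuration strictly as data:
function safeParseConfig(configString) {
  const config = JSON.parse(configString); // throws on anything but plain JSON
  if (typeof config.command !== "string") {
    throw new Error("invalid MCP server config");
  }
  return config;
}

// The unsafe path evaluates expressions -- proof that code, not data, runs:
console.log(unsafeParseConfig("({ pwned: 6 * 7 })").pwned); // prints: 42

const benign = '{"command": "node", "args": ["server.js"]}';
console.log(safeParseConfig(benign).command); // prints: node
```

The safe variant rejects any input that is not plain JSON, which is exactly the property an eval-based parser lacks.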

Once the flaw is triggered, the injected code can load sensitive Node.js modules such as child_process (for running operating system commands) and fs (for file system operations). Because the payload runs with the privileges of the Flowise process, a successful attack enables an adversary to:

Achieve full remote code execution (RCE) on the underlying host;
Read, modify, or delete files on the server;
Extract credentials, API keys, configuration files, and tokens;
Use the compromised Flowise instance as a pivot point to move laterally across the organization’s network.

Why CVE-2025-59528 Is Especially Dangerous for Businesses

Exploitation of CVE-2025-59528 typically requires access to a valid Flowise API token. In many organizations, these tokens are embedded in automation scripts, CI/CD pipelines, shared developer tools, or demo environments that may not be governed by strict access controls. If an attacker obtains such a token, the barrier to exploiting the vulnerability becomes very low.

The potential impact extends beyond the Flowise platform itself. AI pipelines often process high‑value, sensitive data, including customer queries, internal documentation, proprietary source code, and regulated personal data. Compromise of a Flowise node can therefore lead to:

Exposure of trade secrets and intellectual property;
Regulatory violations in areas such as data protection and privacy;
Manipulation or sabotage of automated decision‑making processes.

From an operational risk perspective, a critical vulnerability in a widely deployed AI platform can trigger outages, disrupt business‑critical automations, and generate direct financial losses. As VulnCheck’s head of security research, Caitlin Condon, has emphasized, this is a severe flaw in a widely used platform that has been publicly known for more than six months, meaning organizations have had ample time to patch—but many have not yet done so.

Active Exploitation and Other Flowise Security Issues

VulnCheck has observed real‑world exploitation of CVE-2025-59528 originating from at least one IP address associated with the Starlink network. Although initial activity appears limited, the exposure surface is significant: internet scans show more than 12,000 publicly accessible Flowise instances. Any automated attack campaign targeting these systems can rapidly identify and compromise unpatched deployments.

Previously Exploited Flowise Vulnerabilities

CVE-2025-59528 is the third Flowise vulnerability known to be exploited in the wild. Earlier security issues include:

CVE-2025-8943 (CVSS 9.8) – an OS command injection flaw that allowed arbitrary operating system commands to be executed;
CVE-2025-26319 (CVSS 8.9) – an arbitrary file upload vulnerability that could be abused to upload and run malicious components.

Together, these vulnerabilities highlight a broader trend: AI platforms are rapidly becoming high‑value targets. They are often deployed quickly—sometimes as experimental or pilot systems—and may not undergo the same rigorous security testing and hardening as traditional business applications.

Mitigation: How to Secure Flowise Against CVE-2025-59528

The Flowise team has released a fix for the vulnerability in the Flowise npm package version 3.0.6, crediting security researcher Kim SooHyun for responsible disclosure. Organizations running Flowise should take the following steps without delay:

1. Update Flowise immediately. Verify the currently installed version and upgrade to at least 3.0.6, or a newer stable release if available from the official project repository.

2. Rotate and restrict API tokens. After patching, revoke existing Flowise API tokens, issue new ones, and align their scope with the principle of least privilege. Ensure tokens are not stored in openly shared scripts or repositories.

3. Review logs and investigate anomalies. Examine Flowise access logs for suspicious activity, with particular attention to interactions involving the CustomMCP node and any unusual command execution patterns that may indicate prior compromise.

4. Isolate AI infrastructure. Avoid exposing Flowise directly to the public internet. Place it behind a reverse proxy, WAF, or VPN, and restrict network access only to systems that explicitly require it. Consider segmenting AI workloads in dedicated network zones.

5. Integrate AI platforms into vulnerability management. Treat Flowise and other AI components as first‑class assets in vulnerability scanning, patch management, and security monitoring programs, rather than experimental tools outside normal governance.

The case of CVE-2025-59528 in Flowise illustrates that AI platforms must be protected with the same rigor as core business systems. As organizations deepen their reliance on AI for automation, analytics, and customer interaction, the security of orchestration layers like Flowise becomes directly tied to business resilience. Prioritizing timely patching, strict API access control, network isolation, and continuous security review of AI architectures is essential to prevent today’s experimental tools from becoming tomorrow’s weakest link.
