Security researchers have disclosed three vulnerabilities in the popular AI development frameworks LangChain and LangGraph that can lead to leakage of filesystem data, environment secrets and user conversation history. These weaknesses affect enterprise deployments of large language model (LLM) applications and demonstrate that AI infrastructure is subject to the same classic security flaws as any other software stack.
LangChain, LangGraph and Their Role in LLM Application Security
LangChain and LangGraph are widely used open-source frameworks for building LLM-powered applications. LangChain provides the core primitives for integrating language models with external data sources, tools and APIs, while LangGraph builds on top of LangChain to orchestrate more complex, branching and agentic workflows.
According to Python Package Index (PyPI) download statistics cited by researchers, LangChain was downloaded more than 52 million times in a single week, LangChain-Core over 23 million times, and LangGraph around 9 million times. This scale of adoption means that any vulnerability in these core components can propagate across thousands of applications and integrations worldwide, turning a single bug into a broad supply-chain risk.
Three Critical LangChain and LangGraph Vulnerabilities
Cyera security researcher Vladimir Tokarev reports that each of the three vulnerabilities exposes a different class of sensitive corporate data: filesystem contents, environment secrets and conversation history. In practice, this creates three independent paths for an attacker to exfiltrate information from any environment where LangChain-based solutions are deployed.
Arbitrary Filesystem Access and Container Configuration Exposure
The first issue allows an attacker to read arbitrary files on the host system, including Docker configuration files and other sensitive operational artifacts. In modern cloud-native deployments, LLM applications are frequently run in containers or Kubernetes clusters, where configuration files often contain details about images, volumes, credentials or cluster endpoints.
Access to these files significantly simplifies follow-on attacks such as credential theft, privilege escalation and lateral movement across the cluster. This pattern mirrors long-known risks in traditional web applications where directory traversal or insecure file handling vulnerabilities expose configuration data that accelerates compromise.
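The classic defense against this pattern is to confine any file-reading capability to an allow-listed base directory and resolve paths before checking them. The sketch below is purely illustrative (the names `ALLOWED_BASE` and `read_file_tool` are hypothetical, not LangChain APIs) and shows the canonical guard against traversal via `..` segments or symlinks:

```python
import os

# Hypothetical sketch of a path-traversal guard for an LLM file-reading tool.
# ALLOWED_BASE and read_file_tool are illustrative names, not LangChain APIs.
ALLOWED_BASE = "/app/data"

def read_file_tool(requested_path: str) -> str:
    # Resolve ".." segments and symlinks BEFORE checking the prefix;
    # a naive string check on the raw input is easily bypassed.
    resolved = os.path.realpath(os.path.join(ALLOWED_BASE, requested_path))
    if os.path.commonpath([resolved, ALLOWED_BASE]) != ALLOWED_BASE:
        raise PermissionError(f"path escapes allowed base: {requested_path}")
    with open(resolved, "r", encoding="utf-8") as f:
        return f.read()
```

Note that the check runs on the fully resolved path, so both relative tricks (`../../etc/passwd`) and absolute paths (`/etc/passwd`) are rejected before any file is opened.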
Prompt Injection and Exfiltration of Secrets
The second vulnerability centers on prompt injection—the insertion of malicious instructions into user inputs or external data processed by an LLM. If LangChain chains and tools are misconfigured, an attacker can coax the model into revealing environment variables, API keys and other secrets that the application has access to through its connectors.
This risk is particularly acute in enterprise scenarios where LLM agents are wired into cloud providers, databases and internal services. As seen in prior high-profile incidents with misconfigured cloud keys, leaking a single long-lived credential can enable data theft, unauthorized infrastructure access or abuse of paid APIs. Prompt injection is effectively a new attack surface that combines social engineering with technical misconfiguration.
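One common mitigation layer is to scan model and tool output for secret-shaped strings before it reaches the user, so that even a successful injection yields redacted text. The following is a minimal sketch with illustrative regular expressions; real deployments would use a maintained secret-detection ruleset rather than these two hand-written patterns:

```python
import re

# Hypothetical output filter: redact secret-like strings from model/tool
# output before returning it. The patterns are illustrative examples only.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),    # generic "api_key=..." pairs
]

def redact_secrets(text: str) -> str:
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Output filtering of this kind is a defense-in-depth measure, not a substitute for keeping secrets out of the model's reach in the first place.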
Exposure of Conversation History and Sensitive Workflows
The third attack vector targets conversation logs and workflow context. Many organizations feed LLMs with support tickets, internal knowledge bases and confidential documents to improve productivity. If an attacker can retrieve historical conversations or internal reasoning traces, they may gain access to personal data, trade secrets or regulated information.
From a regulatory perspective, unauthorized exposure of such data can create obligations under privacy laws and industry standards. This aligns with long-standing guidance from regulators and security bodies: logging and observability must be implemented carefully, especially when logs contain user-generated or business-sensitive content.
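In practice, that guidance usually translates into scrubbing obvious personal data from conversation records before they are persisted. The sketch below, which strips email addresses as a stand-in for a fuller PII-detection pipeline, is illustrative only (`scrub_record` is a hypothetical helper, not part of LangChain or LangGraph):

```python
import re

# Hypothetical log-scrubbing helper: remove email addresses from string
# fields of a conversation record before storage. A real deployment would
# use a dedicated PII-detection pipeline, not a single regex.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub_record(record: dict) -> dict:
    return {
        key: EMAIL_RE.sub("[email removed]", value) if isinstance(value, str) else value
        for key, value in record.items()
    }
```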
CVE-2025-68664 “LangGrinch” and Related Langflow Incidents
One of the vulnerabilities is tracked as CVE-2025-68664 and has been informally dubbed “LangGrinch”. Cyera previously disclosed details of LangGrinch in December 2025. The maintainers have since released patches, and the fixes are included in updated versions of LangChain and LangGraph. Organizations are strongly advised to upgrade to the latest available releases and verify that vulnerable components are no longer in use.
The disclosure comes shortly after another serious LLM tooling issue: a critical vulnerability in Langflow (CVE-2026-33017, CVSS score 9.3) reportedly entered active exploitation less than 20 hours after public disclosure. That bug allowed remote data exfiltration from developer environments, illustrating how quickly attackers now weaponize newly published vulnerabilities, much as mass scanning for Log4Shell appeared within days of its disclosure, according to public reporting.
Horizon3.ai chief architect Navin Sankavalli noted that the root cause of the Langflow vulnerability mirrored that of CVE-2025-3248: unauthenticated HTTP endpoints capable of executing arbitrary code. This class of bug is well known in the security community and, when exposed to the internet, is typically associated with full system compromise.
Systemic Risks for the AI Ecosystem and Supply Chain
Cyera highlights that LangChain sits at the center of a dense dependency graph in the AI ecosystem. Hundreds of libraries wrap or extend it, and many commercial platforms embed it under the hood. As seen in prior open-source supply-chain incidents, a single vulnerability in a widely used core component can cascade across downstream libraries, connectors and products that reuse the same code path.
For security teams, this reinforces a key lesson: LLM and AI application security must be treated as an integral part of the organization’s overall cybersecurity strategy, not as an experimental side project. That includes maintaining an accurate software bill of materials (SBOM), monitoring new CVEs affecting AI frameworks, and ensuring timely patching—practices already recommended by bodies such as NIST and widely adopted in mature DevSecOps programs.
To reduce risk when using LangChain, LangGraph and related tools, organizations should limit container and service account privileges, store secrets in dedicated secret-management systems (such as Vault-type solutions), minimize the scope of data accessible to LLM tools, implement input validation and filtering against prompt injection, and automate patching and vulnerability scanning. Regular architecture reviews of AI solutions and targeted training for development teams on secure LLM design can substantially lower the likelihood of a successful attack and help ensure that powerful AI capabilities do not become a new path to compromise.
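The least-privilege recommendations above can be illustrated with a small sketch: running an LLM-invoked tool in a subprocess that receives only an allow-listed environment, so a hijacked tool cannot read API keys or other secrets from the parent process. All names here (`SAFE_ENV_VARS`, `run_tool`) are hypothetical and not part of any framework:

```python
import os
import subprocess

# Hypothetical least-privilege wrapper: execute an LLM-invoked command with
# a minimal, allow-listed environment so secrets held by the parent process
# (API keys, tokens) are not inherited. Names are illustrative.
SAFE_ENV_VARS = {"PATH", "LANG", "HOME"}

def run_tool(command: list[str]) -> str:
    clean_env = {k: v for k, v in os.environ.items() if k in SAFE_ENV_VARS}
    result = subprocess.run(
        command, env=clean_env, capture_output=True, text=True, timeout=30
    )
    return result.stdout
```

The same principle applies at the container level: dropped capabilities, read-only filesystems and scoped service accounts all narrow what a compromised agent can reach.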