The open source AI gateway LiteLLM by BerriAI is at the center of a serious security incident. A critical vulnerability tracked as CVE-2026-42208 is already being actively exploited to steal sensitive data and LLM provider API keys from exposed deployments, within roughly 36 hours of public disclosure.
LiteLLM CVE-2026-42208: critical SQL injection in API key validation
The flaw CVE-2026-42208 has been assigned a CVSS base score of 9.3 and is classified as a SQL injection vulnerability. The root cause lies in LiteLLM’s proxy API key verification logic, where a user-supplied key was concatenated directly into an SQL query string instead of being passed as a properly parameterized value.
According to the project maintainers, a remote unauthenticated attacker can trigger this bug by sending a crafted Authorization header to any LLM route, such as POST /chat/completions. Through an error-handling code path, the unsanitized value reaches the database layer, enabling malicious SQL to be injected and executed.
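The attack surface is nothing more than an ordinary chat-completions request carrying a hostile Authorization header. A purely illustrative sketch of the request shape (the host, key prefix, and injected fragment below are invented placeholders, not the actual exploit payload):

```python
# Illustrative sketch only: shows the *shape* of a crafted request, not the
# real exploit. The hostname and key prefix are invented placeholders.
# A "key" that embeds an SQL fragment: if the server interpolates this value
# into a query string, the quote breaks out of the intended string literal.
malicious_key = "sk-xxxx' OR '1'='1"

raw_request = (
    "POST /chat/completions HTTP/1.1\r\n"
    "Host: litellm.example.internal:4000\r\n"
    f"Authorization: Bearer {malicious_key}\r\n"
    "Content-Type: application/json\r\n"
    "\r\n"
    '{"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}'
)
print(raw_request)
```

No valid key is needed: the malicious value does its damage while the server is still deciding whether the key is valid at all.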
This design error opens the door not only to reading arbitrary data from the LiteLLM database, but also to modifying records. In practice, that can mean full compromise of the LiteLLM proxy instance and the secrets it manages on behalf of downstream applications.
The vulnerability affects LiteLLM releases prior to version 1.83.7-stable. The SQL injection was fixed on 19 April 2026. The definitive list of impacted versions is documented in the official GitHub security advisory GHSA-r75f-5x8p-qvmc, which administrators should consult when assessing exposure.
How the LiteLLM SQL injection works in practice
SQL injection is a long‑standing web application vulnerability where untrusted input is interpreted as part of an SQL command. OWASP has consistently ranked it among the most dangerous risks because it enables attackers to bypass application logic and interact directly with the database.
In the LiteLLM case, the weak point was the logging and error-handling subsystem. When a request failed during API key validation, the value of the key from the Authorization header was interpolated into an SQL statement used for diagnostics. With no input sanitization or parameterization, an attacker could embed SQL fragments within the “key” and alter the structure of the underlying query.
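A hypothetical minimal reproduction of this bug class makes the mechanism concrete. The table name, column names, and function names below are invented for illustration and are not LiteLLM's actual code; only the pattern (string interpolation versus parameter binding) matches the advisory's description:

```python
# Hypothetical minimal reproduction of the bug *class* -- schema and names
# are invented, not LiteLLM's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE error_logs (api_key TEXT, message TEXT)")
# Pretend an earlier failed request already left a secret in the table.
conn.execute("INSERT INTO error_logs VALUES ('sk-secret-123', 'seed')")

def log_failed_key_vulnerable(key: str) -> None:
    # VULNERABLE: attacker-controlled input becomes part of the SQL text.
    conn.execute(
        f"INSERT INTO error_logs (api_key, message) VALUES ('{key}', 'auth failed')"
    )

def log_failed_key_fixed(key: str) -> None:
    # FIXED: the key is bound as a parameter; it can never alter the query.
    conn.execute(
        "INSERT INTO error_logs (api_key, message) VALUES (?, 'auth failed')",
        (key,),
    )

# A "key" that smuggles a scalar subquery into the INSERT statement.
payload = "x', (SELECT group_concat(api_key) FROM error_logs)) --"

log_failed_key_vulnerable(payload)
leaked = conn.execute(
    "SELECT message FROM error_logs WHERE api_key = 'x'"
).fetchone()[0]
print("vulnerable path leaked:", leaked)  # the subquery ran and copied out the secret

log_failed_key_fixed(payload)
stored = conn.execute(
    "SELECT message FROM error_logs WHERE api_key = ?", (payload,)
).fetchone()[0]
print("fixed path stored the payload literally, message:", stored)
```

With parameter binding, the same payload is stored as an inert string; only the interpolated version lets it rewrite the query's structure.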
As a result, a single unauthenticated HTTP request could be used to enumerate tables, exfiltrate additional columns, or attempt data manipulation operations. Critically, this occurs before any authentication or authorization checks succeed, making publicly reachable LiteLLM endpoints especially exposed.
Exploit timeline and attacker behavior
Telemetry shared by Sysdig indicates that the first observed exploitation attempt occurred on 26 April 2026 at 16:17 UTC, roughly 26 hours after the GitHub advisory was indexed in the GitHub Advisory Database and within 36 hours of the vulnerability’s public disclosure.
The initial wave of malicious traffic originated from IP 65.111.27[.]132. Researcher Michael Clark describes a two‑phase probing pattern: an initial set of SQL injection payloads was fired from this egress address, followed about 20 minutes later by a shift to 65.111.25[.]67, which continued the injection attempts and added unauthenticated checks against key‑management endpoints.
The attacker’s payloads targeted specific LiteLLM tables, notably litellm_credentials.credential_values and litellm_config, where LLM provider secrets and runtime configuration are stored. Notably, there were no attempts against litellm_users or litellm_team, suggesting the operator had prior knowledge of the schema and was focused on harvesting secrets rather than compromising user accounts.
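Defenders can hunt for this activity in their own access logs. A hedged detection sketch using the egress IPs reported above plus generic SQL injection markers; the one-request-per-line log format here is an assumption, so the parsing should be adapted to your actual access-log layout:

```python
# Hedged detection sketch: flags log lines containing the attacker IPs
# reported by Sysdig or common SQL-injection markers, including the
# LiteLLM table names targeted in this campaign. The log format below
# (one request per line) is an assumption.
import re

ATTACKER_IPS = {"65.111.27.132", "65.111.25.67"}  # egress IPs from the report
SQLI_MARKERS = re.compile(
    r"('|%27)\s*(or|and|union|select)\b|--|/\*|litellm_credentials|litellm_config",
    re.IGNORECASE,
)

def suspicious_lines(log_lines):
    """Yield lines matching known attacker IPs or common SQLi markers."""
    for line in log_lines:
        if any(ip in line for ip in ATTACKER_IPS) or SQLI_MARKERS.search(line):
            yield line

sample = [
    '10.0.0.5 "POST /chat/completions" 200',
    '65.111.27.132 "POST /chat/completions" 500',
    '10.0.0.9 "POST /chat/completions" key=\' OR 1=1 500',
]
for hit in suspicious_lines(sample):
    print(hit)
```

The marker regex is deliberately broad and will produce false positives; it is a triage filter for human review, not a blocking rule.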
Why a LiteLLM database compromise is uniquely dangerous
LiteLLM is a widely adopted open source AI gateway with over 45,000 GitHub stars and 7,600 forks. It abstracts multiple LLM providers behind a unified API and centralizes access control, logging, and routing. Consequently, its database often aggregates a high concentration of valuable credentials.
A single row in litellm_credentials may store, for example, an OpenAI organization key with a five‑figure monthly spend limit, an Anthropic console key with workspace administrator privileges, and AWS Bedrock IAM credentials. From a risk standpoint, successful exfiltration of this data can approximate compromise of an entire cloud account, far exceeding the typical impact of a web‑app SQL injection incident.
The situation is aggravated by the fact that, only a month earlier, LiteLLM suffered a supply chain attack attributed to the TeamPCP group, also aimed at stealing secrets from downstream users. Viewed together, these events reinforce a broader trend: AI infrastructure and LLM gateways are becoming high‑priority targets for financially motivated and state‑aligned threat actors.
Mitigation and hardening recommendations for LiteLLM and AI gateways
Organizations running LiteLLM in production should act without delay. The primary mitigation is to upgrade to LiteLLM version 1.83.7-stable or later, which removes the vulnerable SQL pattern associated with CVE-2026-42208.
Where immediate upgrades are not feasible, the maintainers propose a temporary workaround: in the LiteLLM configuration, under general_settings, set disable_error_logs: true. This disables the error‑logging path that injected untrusted data into SQL statements. However, this should be treated strictly as an interim control: it degrades observability and does not resolve the underlying code defect.
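Applied to a typical LiteLLM proxy config file, the workaround looks roughly like this (only the `disable_error_logs` flag is prescribed by the maintainers; everything else in your config stays as-is):

```yaml
# LiteLLM proxy config.yaml -- temporary mitigation only.
# Upgrade to 1.83.7-stable or later as soon as possible.
general_settings:
  disable_error_logs: true   # disables the vulnerable error-logging SQL path
```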
In addition to patching, security and DevOps teams should:
- Review LiteLLM logs for anomalous access to LLM endpoints and indicators of SQL injection attempts.
- Rotate all API keys and credentials stored in LiteLLM, including OpenAI, Anthropic, AWS Bedrock, and other providers, and reduce their privileges to the minimum necessary.
- Restrict network exposure of LiteLLM proxies using IP allow‑listing, VPN access, or a Zero Trust access model rather than open internet access.
- Integrate security testing into AI infrastructure by applying SAST/DAST, dependency scanning, and configuration reviews to AI gateways alongside traditional web services.
- Adopt centralized secrets management (for example, cloud KMS or dedicated vaults) and avoid long‑lived, high‑privilege keys wherever possible.
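As a first triage step, teams can verify which environments still run a vulnerable build. A small hedged helper that compares the locally installed `litellm` package against the first fixed version, stripping the `-stable` suffix used in LiteLLM release tags before comparing numeric components:

```python
# Hedged helper: checks the installed litellm package against the first
# fixed version (1.83.7). Suffixes like "-stable" are stripped before
# comparing numeric components.
from importlib import metadata

FIXED = (1, 83, 7)

def numeric_version(ver: str) -> tuple:
    """Parse '1.83.7-stable' -> (1, 83, 7), ignoring non-numeric suffixes."""
    core = ver.split("-")[0].split("+")[0]
    return tuple(int(part) for part in core.split(".") if part.isdigit())

def is_patched(ver: str) -> bool:
    return numeric_version(ver) >= FIXED

def check_installed() -> None:
    try:
        ver = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        print("litellm is not installed in this environment")
        return
    status = "patched" if is_patched(ver) else "VULNERABLE - upgrade now"
    print(f"litellm {ver}: {status}")

check_installed()
```

Running this across build and deployment environments gives a quick inventory of hosts that still need the upgrade; the authoritative version list remains the GHSA-r75f-5x8p-qvmc advisory.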
The case of GHSA-r75f-5x8p-qvmc illustrates a new reality for AI infrastructure: a critical pre‑authentication vulnerability in a widely trusted project can move from disclosure to exploitation in a matter of hours, not weeks. Organizations relying on LLMs and AI gateways should assume that any publicly announced flaw will be actively targeted almost immediately and shape their patching, monitoring, and secret‑management processes accordingly. Those that invest now in rapid update pipelines, strict key hygiene, and tight network controls will be far better positioned to withstand the next wave of attacks against AI platforms.