Security Researcher Uncovers Severe DDoS Vulnerability in ChatGPT’s API Infrastructure

CyberSecureFox 🦊

A significant security vulnerability has been discovered in ChatGPT’s API infrastructure that enables threat actors to launch powerful DDoS attacks using just a single HTTP request. Security researcher Benjamin Flesch identified this critical flaw, which exploits ChatGPT’s backend attribution system to amplify attack traffic dramatically.

Understanding the Technical Vulnerability

The security flaw resides in the https://chatgpt.com/backend-api/attributions endpoint, which manages web source attributions for ChatGPT responses. The vulnerability stems from inadequate input validation and missing restrictions in the URL parameter processing: the endpoint neither limits how many URLs a single request may contain nor checks whether those URLs point to the same resource.
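
For context, public reports describe the endpoint as accepting a JSON body containing a list of URLs to attribute. The "urls" field name and the exact schema in this minimal sketch are assumptions for illustration, not a confirmed API contract:

    import json

    # Hypothetical shape of an attributions request body. The endpoint URL
    # comes from the public disclosure; the "urls" field is an assumption.
    payload = {
        "urls": [
            "https://example.com/",
            "https://example.com/?v=1",  # trivially modified variants
            "https://example.com/?v=2",  # of the same underlying page
        ]
    }
    print(json.dumps(payload, indent=2))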

Attack Vector Analysis

The exploitation technique involves sending a single POST request containing many slightly modified URLs that all point to the same resource. The system treats each URL variant as unique, triggering the ChatGPT-User crawler to issue a separate request to the target website for every variant. This multiplication produces significant attack amplification, with the traffic arriving from a diverse pool of Cloudflare proxy IP addresses.
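
The minimal sketch below, an illustration rather than a reproduction of the researcher's proof of concept, shows why trivially modified URLs defeat naive string-level comparison: every variant is a distinct string, yet all of them resolve to the same host and path.

    from urllib.parse import urlsplit

    # Variants that differ only in their query string
    variants = [f"https://victim.example/page?x={i}" for i in range(5)]

    # Naive string comparison: every variant looks unique
    print(len(set(variants)))  # 5

    # Comparing host and path: all variants hit the same resource
    print(len({(urlsplit(u).netloc, urlsplit(u).path) for u in variants}))  # 1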

Impact Assessment

Testing has revealed that a single API request can generate between 20 and 5,000 requests per second to the target system. The distributed nature of these requests, combined with their legitimate-looking traffic patterns, makes traditional DDoS protection measures less effective at mitigating such attacks.

Security Control Deficiencies

The vulnerability assessment has identified several critical security control gaps in the ChatGPT API implementation (a defensive sketch follows the list):

  • Lack of URL deduplication mechanisms
  • Absence of URL quantity restrictions per request
  • Insufficient prompt injection protection
  • Inadequate rate limiting for domain-specific requests
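
As a sketch of the first two controls, URL deduplication and a per-request cap, the following server-side filter collapses query-string variants onto a single canonical key and bounds how many crawler fetches one request can trigger. It is a hypothetical design for illustration, not OpenAI's actual implementation, and the cap value is an assumed policy choice.

    from urllib.parse import urlsplit

    MAX_URLS_PER_REQUEST = 10  # assumed policy value for illustration

    def filter_urls(urls: list[str]) -> list[str]:
        """Deduplicate URLs by scheme/host/path and enforce a per-request cap."""
        seen: set[tuple[str, str, str]] = set()
        accepted: list[str] = []
        for url in urls:
            parts = urlsplit(url)
            key = (parts.scheme, parts.netloc.lower(), parts.path)
            if key in seen:
                continue  # drop query-string variants of the same resource
            seen.add(key)
            accepted.append(url)
            if len(accepted) >= MAX_URLS_PER_REQUEST:
                break  # cap how many fetches one request can trigger
        return accepted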

Mitigation Challenges

Mitigation is complicated by the attack's distributed nature and the legitimate appearance of the generated traffic. Organizations should implement advanced application-layer filtering, robust per-domain rate limiting, and stricter request validation to protect against such attacks.
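
One way to realize the per-domain rate limiting mentioned above is a token bucket keyed by target host. The sketch below is illustrative, with assumed rate and burst values, not a drop-in defense:

    import time
    from urllib.parse import urlsplit

    class PerDomainRateLimiter:
        """Token bucket per target domain: at most `rate` fetches per second,
        with short bursts of up to `burst` fetches."""

        def __init__(self, rate: float = 1.0, burst: float = 5.0):
            self.rate = rate
            self.burst = burst
            self.buckets: dict[str, tuple[float, float]] = {}  # host -> (tokens, last_ts)

        def allow(self, url: str) -> bool:
            host = urlsplit(url).netloc.lower()
            now = time.monotonic()
            tokens, last = self.buckets.get(host, (self.burst, now))
            tokens = min(self.burst, tokens + (now - last) * self.rate)
            if tokens < 1.0:
                self.buckets[host] = (tokens, now)
                return False  # over budget: defer or drop this fetch
            self.buckets[host] = (tokens - 1.0, now)
            return True

A crawler front end would call allow() before each outbound fetch and queue or reject URLs whose target domain has exhausted its budget.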

Despite multiple attempts to report this vulnerability through established channels, including BugCrowd, OpenAI's security team, Microsoft, and HackerOne, the issue remains unaddressed. This situation highlights the growing importance of comprehensive security controls in AI systems, particularly those offering public API access. As AI technologies continue to evolve, robust security measures become increasingly critical to preventing their exploitation for malicious purposes.
