Cloudflare Stops Record 11.5 Tbps DDoS Attack: 5.1 Billion PPS UDP Flood Analyzed

CyberSecureFox 🦊

Cloudflare reports it has mitigated the largest distributed denial‑of‑service (DDoS) attack observed to date, peaking at 11.5 Tbps and 5.1 billion packets per second (pps). The burst lasted roughly 35 seconds, a profile consistent with “hit‑and‑run” volumetric assaults designed to overwhelm bandwidth and packet processing capacity before traditional defenses can react.

Record‑setting DDoS metrics: 11.5 Tbps and 5.1 billion pps

According to Cloudflare, the campaign took the form of a UDP flood targeting L3/L4 infrastructure. The 5.1 billion pps rate indicates extreme pressure on routing and network stacks, while the 11.5 Tbps figure underscores a massive attempt to saturate upstream links and transit capacity. Together, these metrics paint a picture of a highly optimized, bandwidth‑and‑packet‑intensive attack.

Why Tbps and pps both matter in DDoS defense

Tbps reflects sheer traffic volume that can choke uplinks and carrier circuits. Pps captures how many packets network devices must inspect and forward, directly stressing CPUs, TCAM/ACL lookups, and state tables. High‑pps attacks can cripple routers and load balancers even when overall bit volume appears moderate. In this incident, both dimensions reached extreme levels, amplifying the risk to availability.
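The two headline figures also imply something about the traffic itself. A quick back‑of‑envelope calculation shows the average packet size consistent with both reported peaks:

```python
# Average packet size implied by the reported peaks (back-of-envelope only).
BITS_PER_SEC = 11.5e12   # 11.5 Tbps
PKTS_PER_SEC = 5.1e9     # 5.1 billion pps

avg_bits_per_pkt = BITS_PER_SEC / PKTS_PER_SEC
avg_bytes_per_pkt = avg_bits_per_pkt / 8
print(f"average packet size ≈ {avg_bytes_per_pkt:.0f} bytes")  # ≈ 282 bytes
```

Roughly 280 bytes per packet on average: small enough to generate an extreme per‑packet processing load, large enough to saturate links at the same time.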

Attack sources: cloud services and IoT infrastructure

Cloudflare attributes the malicious traffic to a mix of public cloud and IoT providers, with Google Cloud among the noted origins. This aligns with a persistent trend: adversaries blend compromised “smart” devices with on‑demand cloud instances and hijacked accounts to spin up capacity quickly, mask preparation time, and scale attacks elastically.

Burst or “hit‑and‑run” DDoS tactics

The ~35‑second duration fits a strategy intended to evade slower manual playbooks and device‑level controls. Short, intense spikes test a target’s SLA thresholds, mitigation automation, and routing failover. If protection is not globally distributed (e.g., Anycast‑fronted) with automatic scrubbing, such blasts can cause visible degradation before countermeasures engage.

Trendline: growth from prior records and shrinking reaction windows

The industry’s previous benchmarks were recorded earlier this year: Cloudflare reported neutralizing 7.3 Tbps in June 2025 and 5.6 Tbps in January 2025. The acceleration suggests larger botnet scale and increased abuse of cloud bandwidth. Similar incidents have moved tens of terabytes in under a minute, underscoring how modern DDoS capacity has outpaced many legacy filtering and on‑premises appliances.
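The "tens of terabytes in under a minute" figure is easy to sanity‑check against this incident's own numbers:

```python
# Sanity check: total data moved during the ~35-second burst.
BITS_PER_SEC = 11.5e12   # 11.5 Tbps peak
DURATION_SEC = 35        # approximate burst length

total_bytes = BITS_PER_SEC * DURATION_SEC / 8
print(f"≈ {total_bytes / 1e12:.1f} TB in {DURATION_SEC} seconds")  # ≈ 50.3 TB
```

Even assuming the peak rate was sustained for the whole burst, that is on the order of 50 TB, squarely in "tens of terabytes in under a minute" territory.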

Defensive takeaways: architecture, automation, and telemetry

Cloudflare indicates it routinely mitigates “hundreds of hyper‑volumetric” attempts, with a detailed post‑incident report to follow. Effective defense against UDP floods combines a globally distributed Anycast edge, automatic traffic scrubbing, adaptive rate limiting, and deep telemetry to detect burst patterns in seconds.
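One common building block behind "adaptive rate limiting" is a token bucket, which permits short bursts up to a configured capacity while capping the sustained rate. The sketch below is illustrative only, not a description of Cloudflare's implementation; the rate and capacity values are placeholders:

```python
import time

class TokenBucket:
    """Token-bucket limiter: sustained `rate` units/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full: allow an initial burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                    # over budget: drop or deprioritize
```

In practice such limiters are applied per source prefix, per destination, or per protocol, and the parameters are adjusted dynamically as telemetry classifies the traffic.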

Organizations should coordinate with carriers to enable diversion to scrubbing centers and prepare BGP announcements for on‑demand rerouting. Where supported, BGP Flowspec can speed policy distribution. At the edge, enforce BCP 38 (RFC 2827) and uRPF to reduce spoofed sources, and default‑deny unnecessary UDP services via port‑/protocol‑specific ACLs.
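The logic behind strict‑mode uRPF (the anti‑spoofing check that complements BCP 38) is simple: accept a packet only if the best route back to its source address points out the interface it arrived on. A minimal sketch, using an illustrative routing table rather than any real device configuration:

```python
import ipaddress

# Hypothetical routing table for illustration: prefix -> egress interface.
ROUTES = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",
    ipaddress.ip_network("198.51.100.0/24"): "eth1",
}

def urpf_strict_pass(src_ip: str, in_iface: str) -> bool:
    """Strict uRPF: pass only if the longest-prefix-match route back to
    the source exits via the interface the packet arrived on."""
    addr = ipaddress.ip_address(src_ip)
    matches = [(net, ifc) for net, ifc in ROUTES.items() if addr in net]
    if not matches:
        return False  # no return route: likely spoofed, drop
    _, best_iface = max(matches, key=lambda m: m[0].prefixlen)
    return best_iface == in_iface
```

Real routers do this in hardware; the sketch just shows why spoofed UDP sources fail the check, since a forged address either has no return route or one pointing out a different interface.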

Capacity planning remains critical: provision headroom, negotiate mitigation‑inclusive SLAs, and tune alerting for both Tbps and pps anomalies. Instrument telemetry to trigger on sub‑minute spikes and rehearse DDoS runbooks with “drills” or chaos‑engineering exercises that measure time‑to‑detect and time‑to‑divert. For cloud and IoT estates, conduct regular account hygiene reviews and apply egress filtering to ensure your own assets are not co‑opted into attacks.
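Triggering on sub‑minute spikes boils down to comparing each fresh pps sample against a short rolling baseline. The detector below is a simplified sketch with illustrative parameters (one sample per second, a 60‑sample window, a 10x trigger factor), not tuned production thresholds:

```python
from collections import deque

class SpikeDetector:
    """Flag a pps spike when a sample exceeds `factor` times the rolling mean."""

    def __init__(self, window: int = 60, factor: float = 10.0):
        self.samples = deque(maxlen=window)  # one pps sample per second
        self.factor = factor

    def observe(self, pps: float) -> bool:
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(pps)
        # No alert until a baseline exists; then alert on a large jump.
        return baseline is not None and pps > baseline * self.factor
```

With per‑second sampling, a burst on the scale described here would trip this kind of check on its first sample, which is what makes automated diversion within seconds feasible.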

Hyper‑volumetric DDoS campaigns continue to reset the ceiling for bandwidth and packet rates. The 11.5 Tbps / 5.1B pps event is a reminder that reaction windows now compress to seconds, making proactive, distributed mitigation essential. Review routing paths to scrubbing capacity, validate BGP diversion readiness, and enable granular pps/Tbps monitoring. The earlier a spike is detected and filtered, the higher the probability of preserving service availability and meeting SLA commitments.
