The Curl project is phasing out its bug bounty program on HackerOne after a sharp rise in low‑quality, often AI‑generated vulnerability reports. Founder and lead developer Daniel Stenberg announced a staged shutdown, arguing that the program’s economics have been undermined by “AI slop” — superficially plausible but technically incorrect submissions that overwhelm a small security team.
Why Curl’s bug bounty decision matters for open source security
Curl and libcurl are critical open source components embedded in everything from firmware and IoT devices to browsers, cloud platforms, and enterprise software. As a result, the integrity of Curl’s vulnerability disclosure process has direct implications for a vast portion of the internet ecosystem. When the maintainers of such a foundational project step back from a major bug bounty platform, it signals structural issues that affect the wider cybersecurity industry.
Stenberg’s decision highlights a growing problem for bug bounty programs: AI‑generated vulnerability reports are cheap and fast to produce, but expensive to review. Without effective filters, this imbalance erodes the value of traditional reward models, especially for lean open source teams that lack dedicated triage staff.
How and when the Curl bug bounty on HackerOne will shut down
The shutdown will not be immediate. Until 31 January 2026, Curl will continue to accept reports via HackerOne, and all open tickets will be processed. Starting from 1 February 2026, however, the project will stop accepting new submissions on HackerOne and move entirely to a direct responsible disclosure model via GitHub.
AI “slop” overwhelms a seven‑person security team
Concerns about AI‑generated “slop” in the Curl bug bounty surfaced publicly in 2024. According to Stenberg, about 20% of all submissions were already being produced with the aid of AI tools, while the overall validity rate collapsed: in the previous year, only around 5% of reports described real, security‑relevant vulnerabilities.
The security team behind Curl consists of just seven people, and each report typically requires three to four reviewers and anywhere from 30 minutes to three hours to validate. By January 2026 the situation had become unsustainable: within the first 16 hours of one week, the project received seven new HackerOne reports, and by mid‑month the team had reviewed more than 20 new submissions, none of which turned out to be a genuine vulnerability. This illustrates the core economic problem: AI makes it trivial to generate large volumes of plausible‑looking reports, while human validation remains slow and costly.
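A rough back‑of‑the‑envelope calculation shows why this does not scale. The sketch below uses only the figures quoted above and midpoint assumptions for the ranges; it is illustrative, not Curl's internal accounting.

```python
# Illustrative estimate of triage load for a small security team.
# Figures come from the article (7 team members, 3-4 reviewers per report,
# 30 minutes to 3 hours per reviewer); midpoints are assumed for the ranges.

submissions = 20                      # reports reviewed in roughly half a month
reviewers_per_report = 3.5            # midpoint of "three to four reviewers"
hours_per_reviewer = (0.5 + 3) / 2    # midpoint of "30 minutes to three hours"
team_size = 7

total_hours = submissions * reviewers_per_report * hours_per_reviewer
per_member = total_hours / team_size

print(f"Total triage effort: {total_hours:.0f} reviewer-hours")   # ~122 hours
print(f"Per team member:     {per_member:.1f} hours")             # ~17.5 hours
# With a validity rate of around 5%, nearly all of that time is spent
# rejecting noise rather than fixing real vulnerabilities.
```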
Removing financial incentives for low‑quality bug bounty reports
Stenberg emphasizes that the primary goal of leaving HackerOne is to remove the financial incentive driving low‑quality bug bounty submissions. From the project’s perspective, it is irrelevant whether a report is written by a human or an AI system if it contributes no actionable insight and merely increases noise, burning limited reviewer time.
He acknowledges that abandoning HackerOne will not eliminate weak or misguided reports entirely. However, for a small open source project, reducing the expectation of cash rewards is seen as a necessary step to protect the project’s sustainability and the mental health of its maintainers.
Updated security.txt and shift to direct disclosure on GitHub
To formalize the change, the Curl team has updated its security.txt file — the standard location where websites and projects publish official security contact information and policies. The new policy states explicitly that the project no longer pays monetary rewards for vulnerabilities and will not assist researchers in obtaining compensation from third parties. It also warns that senders of obviously spammy or bad‑faith reports may be blocked and could be publicly called out.
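For readers unfamiliar with the format, security.txt is defined in RFC 9116 and is served as a plain‑text file, typically at /.well-known/security.txt. The snippet below is a hypothetical illustration of how a "no monetary rewards" policy can be expressed in that format; it is not a copy of Curl's actual file, and the URLs are placeholders.

```text
# Hypothetical security.txt expressing a no-bounty policy (RFC 9116 fields;
# not the actual contents of Curl's file)
Contact: https://example.org/security-reporting
Policy: https://example.org/security-policy
Preferred-Languages: en
Expires: 2026-12-31T23:59:59Z
# The linked policy page would state that no monetary rewards are paid and
# that spammy or bad-faith reports may lead to blocking.
```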
From February 2026 onward, researchers are encouraged to use direct responsible disclosure via GitHub, contacting the maintainers without an intermediary bug bounty platform. For serious security researchers, this lowers administrative friction but removes the structured reward mechanism that many relied on. Since 2019, Curl and libcurl vulnerabilities reported through HackerOne and the Internet Bug Bounty have led to payouts of over USD 90,000 for 81 confirmed vulnerabilities. Ending the program marks a shift from monetized hunting toward collaboration focused primarily on ecosystem resilience.
Key lessons for bug bounty programs in the AI era
The Curl case underscores a critical risk for modern bug bounty programs: mass AI‑generated reports can break the economic model of vulnerability disclosure. Generating pseudo‑technical findings is now almost free, while validating each claim still requires scarce expert time. Without guardrails, the signal‑to‑noise ratio collapses and the program’s value diminishes, particularly for small or volunteer‑driven projects.
To remain effective, organizations running bug bounty programs — whether commercial vendors or open source foundations — should implement strong pre‑triage mechanisms. These include strict report templates, mandatory reproducible proof‑of‑concepts, clearly defined minimum impact thresholds, and occasionally non‑obvious, domain‑specific questions that generic AI tools struggle to answer reliably. Many mature programs are also moving toward private or tiered bug bounty models, granting access primarily to vetted researchers with a track record of high‑quality findings.
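As one concrete illustration of pre‑triage, the sketch below shows a minimal intake check that rejects submissions missing the elements mentioned above (a reproducible proof‑of‑concept, an affected version, a claimed impact) before any human reviewer is involved. It is a hypothetical example with made‑up field names and thresholds, not part of Curl's or any platform's real tooling.

```python
from dataclasses import dataclass, field

# Hypothetical pre-triage check for incoming vulnerability reports.
# Field names and thresholds are illustrative assumptions, not a real API.

REQUIRED_FIELDS = ("affected_version", "impact", "proof_of_concept")
MIN_POC_LENGTH = 80  # arbitrary floor to weed out empty or one-line "PoCs"

@dataclass
class Report:
    title: str
    affected_version: str = ""
    impact: str = ""
    proof_of_concept: str = ""
    rejection_reasons: list[str] = field(default_factory=list)

def pre_triage(report: Report) -> bool:
    """Return True only if the report is complete enough for human review."""
    for name in REQUIRED_FIELDS:
        if not getattr(report, name).strip():
            report.rejection_reasons.append(f"missing required field: {name}")
    if len(report.proof_of_concept.strip()) < MIN_POC_LENGTH:
        report.rejection_reasons.append("proof of concept too short to reproduce")
    return not report.rejection_reasons

# Example: an incomplete submission never reaches the reviewers.
submission = Report(title="Possible overflow in URL parser", impact="RCE (claimed)")
if not pre_triage(submission):
    print("Rejected before human triage:", submission.rejection_reasons)
```

Checks like this do not catch sophisticated AI‑generated noise on their own, but they shift the cost of producing a reviewable report back onto the submitter, which is precisely the economic balance the Curl case shows being lost.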
For the broader cybersecurity community, Curl’s experience is a reminder that AI is a dual‑use tool. It can assist skilled researchers in code review, fuzzing, and pattern analysis, but in the absence of expertise and accountability it becomes a force multiplier for low‑value spam. Effective defense depends less on the raw number of incoming reports and more on the quality of analysis and the robustness of workflows between researchers and maintainers.
The Curl decision should prompt organizations to reassess their own bug bounty strategies, update responsible disclosure policies, and train teams to both leverage and critically evaluate AI‑assisted submissions. Supporting maintainers, setting clear expectations, and rewarding depth over volume will be essential to keeping vulnerability disclosure workable — and to ensuring that AI strengthens, rather than undermines, software security.