Cybersecurity analysts have identified a large-scale advertising fraud operation that combines SEO poisoning, AI-generated content and aggressive misuse of browser push notifications. The campaign, dubbed Pushpaganda, plants fake news stories in Google Discover feeds, then coerces users into enabling intrusive notifications that funnel them to scareware pages and financial scams.
AI-based ad fraud targets Android and Chrome users worldwide
According to the Satori Threat Intelligence and Research team at HUMAN, Pushpaganda is designed to exploit the personalized content feeds of Android users and the Google Chrome browser. At peak activity, the infrastructure generated roughly 240 million ad bid requests in just seven days, distributed across 113 attacker-controlled domains.
The operation initially concentrated on users in India before expanding to the United States, Australia, Canada, South Africa and the United Kingdom. HUMAN notes that the threat actors are abusing user trust in Google Discover, effectively turning a legitimate content recommendation channel into a delivery mechanism for scareware, deepfakes and fraudulent financial schemes.
Google has deployed additional protections to reduce the visibility of such spam and malicious content in its services. However, the tactics used in Pushpaganda underscore how vulnerable modern adtech and discovery ecosystems remain to blended attacks that combine AI, SEO manipulation and social engineering.
How the Pushpaganda SEO poisoning and malvertising chain works
The core of Pushpaganda is search engine poisoning and aggressive optimization of fake news sites for Google Discover. Attackers publish articles filled with AI-generated text, styled as timely news, market analysis or breaking stories. Through keyword stuffing, link schemes and clickbait headlines, these pages gain visibility and are surfaced in users’ Discover feeds as seemingly legitimate content.
Once a user taps such a story, they land on a site that immediately prompts them to allow browser push notifications. Technically this is a standard browser feature, but here it is weaponized as a persistent, high-trust communication channel from the attacker to the victim’s device.
After the user grants permission, they begin to receive a continuous stream of notifications with urgent, threatening or alarming messages — for example, fake legal complaints, fabricated security alerts or claims about critical system problems. Each notification contains a link that redirects the user to additional attacker-controlled domains.
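The permission-then-redirect chain described above relies on nothing more exotic than the standard Web Notifications API available to any website. The sketch below illustrates that ordinary flow; it is not the attackers' actual code, and the alert text and redirect parameter are invented for illustration:

```javascript
// Sketch of the standard Web Notifications flow that such sites abuse.
// Any site can call this API; the abuse lies in the social engineering
// around the prompt and the links the notifications carry.

// Pure helper: a prompt is only shown while permission is still undecided.
// "default" = not yet answered; "granted"/"denied" are persistent answers.
function shouldPrompt(permissionState) {
  return permissionState === "default";
}

// Browser-only: request permission and, if granted, fire a notification
// whose click handler redirects the user — the pattern described above.
// targetUrl is a hypothetical attacker-controlled landing page.
async function enableNotifications(targetUrl) {
  if (typeof Notification === "undefined") return "unsupported"; // non-browser context
  if (!shouldPrompt(Notification.permission)) return Notification.permission;

  const result = await Notification.requestPermission();
  if (result === "granted") {
    const n = new Notification("Security alert", {
      body: "Your device may be at risk. Tap for details.",
    });
    // Each notification click funnels the user to another controlled domain.
    n.onclick = () => window.open(targetUrl);
  }
  return result;
}
```

Because the permission answer persists across sessions, a single careless tap on “Allow” gives the site a durable channel that survives the user closing the tab, which is exactly what makes the vector attractive here.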
These landing pages host scareware, investment scams, fake technical support pages and other high-pressure frauds. At the same time, they are densely packed with advertising. Clicks and impressions generated from real users on real devices translate into illicit advertising revenue. Because the traffic looks “organic” and user-driven, it is harder for traditional anti-fraud systems to distinguish it from legitimate ad engagement.
Malicious push notifications as an evolving social-engineering vector
The abuse of web push notifications is not new, but Pushpaganda demonstrates how this vector is evolving. In 2025, researchers at Infoblox documented an actor known as Vane Viper that systematically exploited browser notifications for aggressive advertising and social-engineering attacks similar to so-called ClickFix schemes.
HUMAN experts emphasize that notifications inherently convey urgency. As Lindsey Kay, Vice President of Threat Intelligence at HUMAN, has noted, many users “click simply to make the notification disappear or to find out more,” which makes this channel particularly attractive for both malware distributors and ad fraud operators. Once permission is granted, attackers gain a durable foothold that can bypass many browser-based safeguards.
Low5 and BADBOX 2.0: shared infrastructure for laundering ad traffic
The disclosure of Pushpaganda follows an earlier HUMAN report on another large-scale ad fraud ecosystem known as Low5. That operation involved more than 3,000 domains and at least 63 Android applications, and is considered one of the largest identified markets for ad traffic “laundering”.
At its peak, Low5 generated up to 2 billion bid requests per day, impacting an estimated 40 million devices worldwide. Compromised mobile apps contained code that silently forced user devices to visit specific domains and click on ads without any visible user interaction.
These domains acted as cashout or “ghost” sites: pages with no genuine audience, to which artificially created traffic was routed solely to monetize fraudulent ad impressions and clicks. Portions of this domain infrastructure were also leveraged in more complex schemes, including BADBOX 2.0. Although the malicious Android applications were eventually removed from Google Play, many of the underlying domains remain valuable to other threat actors.
Why ad fraud infrastructure persists after campaign takedowns
HUMAN’s analysis indicates that Low5 relied on a single, shared monetization layer interconnecting more than 3,000 domains. Such an architecture allows multiple criminal groups to plug into the same backend infrastructure, creating a distributed, resilient ecosystem for laundering ad traffic.
This model significantly increases threat durability, complicates attribution and enables the rapid launch of new campaigns following the shutdown of older ones. Even when a specific fraud operation or app cluster is disrupted, cashout domains can be quickly repurposed by other actors. Effective defense therefore requires continuous, proactive monitoring and pre-bid filtering of suspicious domains by ad exchanges, blocking them before they participate in auctions.
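The pre-bid filtering described above amounts to rejecting bid requests for flagged domains before they enter an auction. A minimal sketch, assuming a curated blocklist of known cashout domains (the domain names below are invented for illustration):

```javascript
// Minimal sketch of pre-bid domain filtering, assuming an ad exchange
// maintains a curated blocklist of known cashout domains.
// All domain names here are placeholders, not real flagged sites.

const blocklist = new Set([
  "ghost-cashout.example",
  "fake-news-hub.example",
]);

// Allow a bid only if neither the request's domain nor any parent domain
// is blocklisted, so subdomains of a flagged site are rejected too.
// (Stops one label short of the bare TLD to avoid matching e.g. "example".)
function allowBid(domain, list = blocklist) {
  const labels = domain.toLowerCase().split(".");
  for (let i = 0; i < labels.length - 1; i++) {
    if (list.has(labels.slice(i).join("."))) return false;
  }
  return true;
}
```

In practice an exchange would combine such a static list with signals like domain registration age and sudden traffic spikes, but even this simple suffix check illustrates why shared blocklists degrade the resale value of cashout infrastructure.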
Taken together, Pushpaganda and Low5 illustrate how AI-generated content, SEO manipulation and adtech abuse are converging into sophisticated, scalable fraud ecosystems. Users should critically evaluate any request to enable browser notifications, regularly review and prune allowed sites, keep Android and Chrome updated, and consider security tools that block malvertising. Advertisers and platforms, in turn, need to strengthen traffic-quality controls, integrate high-quality threat intelligence and closely scrutinize sudden spikes in “organic” activity. Only a coordinated, multi-layered approach can reduce the profitability of such ad fraud schemes and better protect both end users and advertising budgets.