The online advertising ecosystem is entering a new phase of escalation. As threat actors increasingly rely on generative AI to mass‑produce deceptive and malicious ads, Google is responding with its own large AI models. According to the company’s latest Ads Safety report for 2025, the deployment of Gemini has led to record‑level disruption of scam campaigns and malicious advertising across Google’s platforms.
Google Blocks 8.3 Billion Ads and Freezes 24.9 Million Advertiser Accounts
In 2025, Google reports that it blocked or removed 8.3 billion ads and froze 24.9 million advertiser accounts for policy violations. Of these ads, 602 million were directly tied to scam activity—schemes designed to steal money, harvest sensitive data, or distribute malware.
The United States market features prominently in the report. In the US alone, Google states it took down 1.7 billion ads and blocked over 3.3 million advertiser accounts during the year. The most common violations involved abuse of the ad network and campaigns intentionally designed to mislead users, highlighting that fraudulent advertising is not a marginal issue but a systemic risk for the entire digital ad ecosystem.
Malvertising: How Malicious Ads Turn Google’s Reach into an Attack Vector
Google acknowledges that malvertising—the use of online ads to deliver malware or conduct fraud—remains a long‑standing and persistent threat. In a typical scenario, a criminal purchases ads that visually and textually mimic a legitimate brand or service. Victims who click are then redirected to:
- Phishing pages that steal account credentials and personal data;
- Malware installers disguised as useful software or security tools;
- Fraudulent crypto or finance sites targeting digital assets and wallets.
To bypass automated review, attackers rely heavily on cloaking—serving benign content to moderators while delivering malicious pages to real users—and on complex URL redirect chains that hide the final destination behind trusted‑looking domains. Google notes that some recent campaigns have impersonated Google‑owned domains as well as popular software download portals.
Recent examples include fake Google Ads login pages designed to hijack advertiser accounts and malware distributed under the guise of Google Authenticator and Homebrew. These campaigns are particularly effective because they exploit users’ high level of trust in familiar brands and tools.
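The redirect-chain tactic described above can be illustrated with a short sketch that inspects the ordered list of URLs a click passed through and flags chains whose final landing domain differs from the domain shown in the ad, a common malvertising tell. All URLs and domain names here are invented for illustration; real detection pipelines weigh many more signals.

```python
from urllib.parse import urlparse

def analyze_redirect_chain(hops, advertised_domain):
    """Given the ordered URLs a click bounced through, flag chains
    whose final landing domain does not match the domain displayed
    in the ad. A toy heuristic, not a production detector."""
    domains = [urlparse(u).netloc.lower() for u in hops]
    final = domains[-1]
    return {
        "hops": len(hops),
        "final_domain": final,
        "suspicious": final != advertised_domain.lower(),
    }

# Hypothetical chain: the ad displays "example-soft.com", but the
# click bounces through a tracker to an unrelated download host.
chain = [
    "https://example-soft.com/promo",
    "https://trk.ads-bounce.net/r?id=42",
    "https://dl-fast-mirror.xyz/setup.exe",
]
print(analyze_redirect_chain(chain, "example-soft.com"))
```

Because cloaking serves moderators a benign page, this kind of check is only meaningful when run from the vantage point of a real user, which is exactly why attackers invest in distinguishing the two.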
Generative AI: A Double‑Edged Sword in Advertising Security
Google’s report emphasizes that threat actors are actively integrating generative AI into their operations. With modern AI tools, attackers can quickly generate thousands of ad creatives, landing pages, and localized messages, each slightly different. This scale and variability make it harder for traditional, rules‑based moderation systems to detect common fraud patterns.
To counter this, Google has begun using its Gemini models directly in the ad review pipeline. As Kirat Sharma, Vice President and General Manager for Ads Privacy and Safety, notes, “Attackers use generative AI to mass‑produce misleading ads, and Gemini helps us detect and block them in real time.”
By the end of 2025, Google reports that most Responsive Search Ads were already subject to instant, real‑time screening, with malicious or policy‑violating content blocked at submission time. In 2026, the company plans to extend this approach to additional formats, including display and video advertising, which are frequently abused in large‑scale malvertising campaigns.
From Keyword Filters to Behavioral and Intent‑Based Detection
Historically, ad moderation relied heavily on keyword analysis and static signatures. While effective against simple spam and known malware, this model struggles against dynamic, AI‑generated campaigns that constantly mutate text, imagery, and URLs.
The integration of Gemini represents a shift toward behavioral and intent‑based detection. Instead of focusing only on ad content, Google’s systems now analyze billions of behavioral signals, such as:
- Advertiser account history, including prior violations or sudden changes in activity;
- Unusual patterns in campaign launch, pause, and budget behavior;
- Geographic and targeting anomalies inconsistent with typical advertiser profiles;
- Combined analysis of landing‑page content, visual assets, and technical indicators.
In practice, this allows the system to infer the likely intent of the advertiser rather than relying solely on static rules. According to Google, this approach has already reduced false positives—erroneous advertiser blocks—by around 80%, which is critical for legitimate businesses that depend on continuous ad traffic.
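The idea of combining behavioral signals into an intent estimate can be sketched as a weighted risk score. The signal names, weights, and threshold below are invented for illustration and do not reflect Google's actual models, which operate on far richer data.

```python
def risk_score(account):
    """Toy weighted score over behavioral signals of the kind
    described above. Weights are illustrative only."""
    score = 0.4 * account.get("prior_violations", 0)
    if account.get("sudden_activity_change"):
        score += 0.3   # e.g. dormant account suddenly launching campaigns
    if account.get("geo_targeting_anomaly"):
        score += 0.2   # targeting inconsistent with the advertiser profile
    if account.get("landing_page_mismatch"):
        score += 0.5   # landing page content diverges from the ad creative
    return score

def verdict(account, threshold=0.6):
    """Route high-risk accounts to review instead of hard-blocking,
    which is one way to keep false positives low."""
    return "review" if risk_score(account) >= threshold else "allow"

# Usage: a repeat offender with a mismatched landing page scores high.
print(verdict({"prior_violations": 2, "landing_page_mismatch": True}))
```

A soft "review" verdict rather than an outright block is one plausible design choice behind the false-positive reduction the report describes.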
How Businesses and Users Can Reduce Malvertising Risk
Protecting Google Ads and Business Accounts
Even as AI‑driven moderation raises the baseline level of security, organizations should not treat platform controls as their only defense. Recommended measures include:
- Securing Google Ads accounts with multi‑factor authentication and, where possible, hardware security keys to mitigate account takeover.
- Regularly auditing user access, roles, and API integrations in advertising accounts to minimize abuse if credentials are compromised.
- Monitoring for anomalies in campaigns, such as unexplained traffic spikes, unusual spend patterns, or suspicious geographies.
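The spend-monitoring recommendation above can be approximated with a simple statistical check: flag any day whose spend deviates sharply from a rolling baseline. This is a minimal z-score sketch with invented parameters; real monitoring would combine spend with traffic, geography, and creative changes.

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, window=7, z=3.0):
    """Return indices of days whose spend deviates more than `z`
    standard deviations from the preceding `window`-day baseline."""
    flags = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        # Skip perfectly flat baselines (sd == 0) in this toy version.
        if sd and abs(daily_spend[i] - mu) > z * sd:
            flags.append(i)
    return flags

# A week of normal spend followed by a 5x spike on day index 7.
print(spend_anomalies([100, 102, 98, 101, 99, 100, 103, 500]))
```

A sudden spend spike like this is a classic indicator of a hijacked advertiser account being used to push a malicious campaign before the legitimate owner notices.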
Reducing User Exposure to Malicious Advertising
End users and employees also play a critical role in limiting the impact of malvertising. Security teams should:
- Train staff to recognize phishing pages and to verify URLs, even when arriving via seemingly “trusted” ads.
- Enforce policies that allow software and crypto tools to be downloaded only from official vendor sites or reputable app stores, never directly from ad links whose destination domain has not been verified.
- Combine browser‑level protections, endpoint security, and DNS filtering to block known malicious domains used in ad campaigns.
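The DNS-filtering layer mentioned above typically works by matching hostnames against a blocklist of malicious zones, where blocking a domain also blocks all of its subdomains. A minimal sketch of that matching logic (the domain names are made up):

```python
def is_blocked(hostname, blocklist):
    """Match a hostname against a domain blocklist the way DNS
    filters usually do: an entry blocks the domain itself and
    every subdomain beneath it."""
    host = hostname.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in blocklist)

# Hypothetical blocklist entry covering an ad-abuse domain.
blocklist = {"bad-ads.example"}
print(is_blocked("cdn.bad-ads.example", blocklist))
```

Matching on zone boundaries (rather than substrings) matters: a naive substring check would wrongly block "notbad-ads.example.org" or miss attacker-controlled subdomains.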
The arms race between attackers weaponizing generative AI and platforms deploying defensive AI models such as Gemini will continue to accelerate. Malvertising and fraudulent ads are likely to remain a primary vector for compromising both individuals and organizations. Businesses that harden their advertising accounts, integrate ad‑related telemetry into their security monitoring, and continuously educate employees on safe interaction with online ads will be far better positioned to avoid incidents—even when a malicious campaign briefly slips through automated moderation.