Google Play Privacy Overhaul and Gemini AI Crackdown on Fraudulent Ads

CyberSecureFox

Google is rolling out a significant update to the Google Play privacy policy, reshaping how Android apps access contacts and location data while hardening defenses against account abuse and advertising fraud. Alongside these changes, the company reports that in 2025 it blocked or removed more than 8.3 billion ads worldwide and suspended 24.9 million accounts that violated platform rules, underscoring the scale of ongoing enforcement.

Android 17 Contact Privacy: Contact Picker Replaces Broad READ_CONTACTS Access

The most impactful change for everyday users concerns how apps interact with a user’s contacts. Beginning with Android 17, Google is standardizing the use of a built-in Contact Picker—a secure interface that lets users share only specific contacts and specific fields with an app, rather than granting blanket access to the entire address book.

Historically, many apps relied on the READ_CONTACTS permission, which exposed the full contact database, including names, phone numbers, emails, and often additional metadata. From a cybersecurity perspective, full contact dumps are extremely valuable to attackers: they enable social engineering, targeted phishing campaigns, and account-takeover attempts that abuse trusted relationships.

Under the updated Google Play privacy policy, apps targeting Android 17 are expected to use the Contact Picker or Android Sharesheet as their primary method for accessing contacts. This enables data minimization: a messaging app, for example, can request just a phone number, while a referral feature might need only an email address, not the full contact card.
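To illustrate the picker-based pattern, here is a minimal Kotlin sketch using the stable AndroidX Activity Result API (`ActivityResultContracts.PickContact`). The activity name, the "invite" flow, and the display-name query are hypothetical; the point is that the app receives a URI scoped to one user-chosen contact and never holds the `READ_CONTACTS` permission:

```kotlin
import android.net.Uri
import android.provider.ContactsContract
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity: lets the user share exactly one contact
// through the system Contact Picker — no READ_CONTACTS permission.
class InviteActivity : AppCompatActivity() {

    // PickContact opens the system picker; the app receives a content Uri
    // covering only the single contact the user chose (Android grants a
    // temporary read permission for that Uri).
    private val pickContact =
        registerForActivityResult(ActivityResultContracts.PickContact()) { uri: Uri? ->
            uri?.let(::readDisplayName)
        }

    // Called from the app's "invite a friend" button (hypothetical flow).
    fun onInviteClicked() = pickContact.launch(null)

    private fun readDisplayName(contactUri: Uri) {
        // Request only the field this feature needs (data minimization).
        val projection = arrayOf(ContactsContract.Contacts.DISPLAY_NAME)
        contentResolver.query(contactUri, projection, null, null, null)?.use { cursor ->
            if (cursor.moveToFirst()) {
                val name = cursor.getString(0)
                // ... use the single shared field, nothing more ...
            }
        }
    }
}
```

Because the picker runs in a system process, the app's attack surface and its privacy declaration both shrink: there is no standing permission to audit or revoke.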

Use of READ_CONTACTS is no longer acceptable as a default choice. It will be permitted only for apps whose core functionality objectively requires persistent, full-address-book access—such as advanced contact management tools. Developers in this category must submit a Play Developer Declaration through Play Console, justifying why granular, user-mediated access via the Contact Picker is insufficient.

Stronger Android 17 Location Privacy and One-Time Precise Access

Location data is another high-value target for both advertisers and attackers. In Android 17, Google introduces an upgraded location access button that allows apps to request one-time access to precise location for a single operation. This gives users finer control: they see when and why location is requested and can limit the duration and precision of access.
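Part of this pattern already exists in the platform: since Android 11, the runtime permission dialog offers an "Only this time" option that expires when the app leaves the foreground. A minimal Kotlin sketch of a short-lived precise-location request (the activity and `fetchNearbyPickupPoints` are hypothetical):

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity that needs precise location once, for one task.
class PickupActivity : AppCompatActivity() {

    // Standard runtime-permission request; on Android 11+ the system dialog
    // includes "Only this time", granting one-time access that is revoked
    // automatically when the app moves to the background.
    private val requestPreciseLocation =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) fetchNearbyPickupPoints() // hypothetical one-shot task
        }

    fun onFindPickupPointClicked() =
        requestPreciseLocation.launch(Manifest.permission.ACCESS_FINE_LOCATION)

    private fun fetchNearbyPickupPoints() {
        // ... single location read, e.g. via FusedLocationProviderClient ...
    }
}
```

The Android 17 changes described above build on this by surfacing a dedicated location button and a persistent access indicator, rather than relying solely on the permission dialog.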

In addition, Android will display a persistent indicator whenever a third-party (non-system) app accesses the user’s location. Continuous visual feedback makes “silent” tracking harder and nudges both users and developers toward more privacy-conscious behavior.

For apps targeting Android 17 or higher that need precise location only for short-lived tasks—such as retrieving a nearby delivery point—Google recommends using a new manifest flag, onlyForLocationButton. Apps that truly require ongoing precise tracking, for example real-time logistics or fleet management solutions, must again file a Play Developer Declaration explaining why approximate or one-time access does not meet their core functional requirements.

Compliance Deadlines and Automated Checks in Google Play Console

The declaration process for apps requesting extended access to contacts or location is slated to be available by October 2026. Starting 27 October 2026, Google Play Console will run automatic pre-submission checks to detect potential violations of the new contact and location access rules before an app or update reaches review.

For developers, this makes an early permissions audit essential. Removing unnecessary READ_CONTACTS requests, tightening location precision and duration, and documenting legitimate uses in advance will reduce the risk of review delays, feature restrictions, or outright removal from Google Play. From a governance standpoint, structured declarations also give Google clearer signals to prioritize high-risk apps for deeper inspection.

Secure Transfer of Google Play Developer Accounts to Stop Fraud

To combat the thriving underground market for developer accounts, Google is introducing a built-in ownership transfer workflow directly in Play Console. From 27 May 2026, the company recommends that all ownership changes—whether due to business sales, internal reorganizations, or asset transfers—be executed exclusively through this native mechanism.

Unofficial methods such as sharing logins, exchanging passwords, or buying and selling accounts on third-party marketplaces are explicitly prohibited. Compromised or traded accounts are a common entry point for threat actors to publish malicious apps, inject malware into updates of previously trusted apps, or launch fraudulent ad campaigns. A centralized, verified transfer process reduces the attack surface and provides clearer audit trails for incident response and legal investigations.

Gemini AI and the Fight Against Fraudulent and Malicious Ads

Scale of Google Ads Policy Enforcement with Gemini

On the advertising side, Google reports that in 2025 more than 99% of policy-violating ads were blocked before they could be shown to users, largely driven by enhancements in its Gemini-based detection systems. Overall, the company removed or blocked 602 million ads and 4 million advertiser accounts associated with fraud or abusive behavior.

In addition, 4.8 billion ads were restricted in their reach, and over 480 million web pages faced enforcement actions for promoting prohibited content such as sexual material, weapons, online gambling, alcohol, tobacco, or malware. By comparison, in 2024 Google suspended more than 39.2 million advertisers, blocked 5.1 billion bad ads, restricted another 9.1 billion, and took action on 1.3 billion pages. The shift in absolute numbers reflects both evolving attacker tactics and the maturation of real-time, AI-driven moderation.

Generative AI as a Tool for Attackers and Defenders

Attackers increasingly rely on generative AI to mass-produce convincing scams that mirror legitimate brands, payment pages, and customer support flows. Instead of crude, easily flagged copy, modern fraud campaigns use polished language and highly tailored creatives, making manual detection extremely difficult.

Gemini addresses this by moving beyond simple keyword filters to semantic intent analysis: the models assess what an ad is trying to achieve, not just which words it contains. This allows Google to identify harmful or deceptive content even when it is deliberately obfuscated to evade traditional rule-based systems. According to the company, by the end of 2025 the vast majority of Responsive Search Ads in Google Ads were being reviewed almost instantly, with malicious content often blocked at the moment of submission. Google plans to extend comparable near-instant moderation to additional ad formats throughout 2026.

Taken together, the stricter Google Play privacy policy for Android 17 and the expanded use of Gemini AI in Google Ads create a more hostile environment for cybercriminals. Developers should proactively refactor apps to request only the minimum necessary access to contacts and location, prepare accurate Play Developer Declarations, and rely solely on official tools for managing developer accounts. Businesses running ads must expect deeper semantic scrutiny of campaigns and align creative, landing pages, and tracking with Google’s safety policies. Users, in turn, can strengthen their own security posture by reviewing permissions, granting one-time access where possible, and keeping their apps updated to benefit from these new layers of protection.
