Researchers at SquareX have detailed a new attack surface for agentic AI browsers—AI Sidebar Spoofing—that lets malicious browser extensions overlay a fake assistant sidebar on top of the genuine interface. The spoofed panel captures user input and steers decisions invisibly, putting products such as OpenAI’s ChatGPT Atlas and Perplexity’s Comet at risk.
How UI spoofing works against agentic AI browsers
SquareX reports that a malicious extension can inject JavaScript into visited pages and draw a near pixel-perfect clone of the AI assistant sidebar above the real one. This overlay intercepts clicks, keystrokes, and contextual data. Notably, the technique relies only on common extension permissions—host and storage—widely used by legitimate tools like password managers and writing aids.
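To illustrate how little privilege the overlay primitive needs, here is a minimal, deliberately inert sketch of a content script that pins a panel over the page. Every name is illustrative; a real spoofing extension would additionally clone the assistant's markup and styling and wire up input handlers, which is omitted here.

```typescript
// content-script.ts -- illustrative only; shows the overlay primitive,
// not a working spoof. Runs with ordinary host permissions.
function drawOverlaySidebar(): void {
  const panel = document.createElement("div");
  panel.id = "fake-assistant-sidebar"; // hypothetical id
  Object.assign(panel.style, {
    position: "fixed",          // pinned to the viewport, above page content
    top: "0",
    right: "0",
    width: "380px",
    height: "100vh",
    zIndex: "2147483647",       // maximum z-index: sits on top of everything
    background: "#ffffff",
    boxShadow: "-2px 0 8px rgba(0,0,0,0.2)",
  });
  // A real attack would reproduce the assistant's UI pixel-for-pixel
  // and attach click/keystroke listeners; deliberately left out.
  panel.textContent = "Assistant";
  document.documentElement.appendChild(panel);
}

drawOverlaySidebar();
```

The point is that nothing here touches privileged APIs: any extension with host access to the page can render a panel like this, which is why the technique blends in among legitimate tools.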
By impersonating the assistant’s UI, the attack subverts user trust in the browser’s interface layer and undermines “ask about this page” workflows. Users believe they are interacting with the built-in assistant, while their prompts and actions are actually mediated by a malicious façade.
Why agentic capabilities raise the stakes
Agentic modes in Atlas and Comet allow LLMs to take actions—booking services, making purchases, filling forms, and executing multi-step tasks. When the UI is spoofed, any automated workflow can be redirected to fraud: harvesting payment data, initiating unauthorized account changes, or manipulating high-impact user decisions.
Demonstration and impacted products
SquareX demonstrated practical exploitation against Perplexity's Comet, using Google Gemini to power the spoofed sidebar so that it issued harmful guidance when specific trigger prompts were entered. After OpenAI released ChatGPT Atlas for macOS, the researchers confirmed the same sidebar-spoofing behavior there. According to SquareX, outreach to Perplexity and OpenAI received no response.
Abuse scenarios aligned with current threat trends
SquareX highlights three plausible misuse cases: (1) crypto-themed prompts that redirect to polished phishing sites; (2) OAuth consent phishing via a fake file-sharing “app” that coaxes users into granting access to Gmail and Google Drive; (3) “install this tool” guidance that actually delivers a backdoor, enabling persistent remote access. These patterns mirror well-documented techniques in the ecosystem.
Industry data underscores the risk. The Verizon DBIR consistently identifies phishing and social engineering among top breach contributors, while OAuth abuse remains a durable way to bypass passwords without directly cracking them. Extension-based threats—permission abuse and UI spoofing—fit known browser threat models and have long motivated calls for stricter permission minimization and store moderation.
Risk mitigation for users, enterprises, and vendors
Practical steps for users and organizations
Limit agentic features to low-risk tasks. Avoid delegating email, finance, or sensitive data operations to the assistant. Treat assistant-suggested commands as untrusted until verified, especially those involving software installation or terminal execution; enforce allowlists and block unknown binaries/scripts.
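As a concrete illustration of the allowlist idea, the sketch below gates assistant-suggested shell commands against an approved list before anything runs. The list contents and helper names are assumptions, not part of any shipping product.

```typescript
// Hypothetical gate for assistant-suggested commands.
const ALLOWED_BINARIES = new Set(["git", "npm", "node"]); // assumed policy

function isCommandAllowed(command: string): boolean {
  // Only the leading binary is checked in this sketch; a real policy
  // would also inspect arguments, paths, and shell metacharacters.
  const binary = command.trim().split(/\s+/)[0];
  return ALLOWED_BINARIES.has(binary);
}

function runIfAllowed(command: string): void {
  if (!isCommandAllowed(command)) {
    console.warn(`Blocked non-allowlisted command: ${command}`);
    return;
  }
  // Hand off to the execution layer only after the check passes.
  console.log(`Would execute: ${command}`);
}

runIfAllowed("npm install");           // allowed
runIfAllowed("curl https://evil.sh");  // blocked
```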
Maintain extension hygiene: routinely audit installed extensions, remove unused items, verify requested permissions, favor vetted sources, and separate work and personal profiles. Review OAuth grants periodically, revoke unused tokens, and enable multi-factor authentication and security alerts for cloud and email providers.
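For organizations that want to automate part of that audit, Chrome's management API can enumerate installed extensions and their permissions. The sketch below, which assumes an internal admin extension holding the "management" permission, flags the combination this attack relies on: broad host access plus storage.

```typescript
// Audit helper for a hypothetical internal admin extension
// (requires the "management" permission in its manifest).
async function auditExtensions(): Promise<void> {
  const extensions = await chrome.management.getAll();
  for (const ext of extensions) {
    const broadHosts = (ext.hostPermissions ?? []).some(
      (h) => h === "<all_urls>" || h.startsWith("*://*/")
    );
    const hasStorage = (ext.permissions ?? []).includes("storage");
    if (ext.enabled && broadHosts && hasStorage) {
      console.warn(
        `Review: ${ext.name} (${ext.id}) has broad host access plus storage`
      );
    }
  }
}

auditExtensions();
```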
Engineering controls for AI browser vendors
Implement a trusted UI zone anchored in the browser chrome that is inaccessible to page-level DOM overlays. Add anti-spoofing signals—secure authenticity indicators, watermarking, periodic integrity checks of the sidebar, and overlay detection events—to alert users and block manipulation.
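One possible shape for the periodic integrity check, sketched here with assumed thresholds and names: scan the page for fixed-position elements docked at an extreme z-index over the screen region a sidebar would occupy, which is exactly the visual footprint a spoofed panel needs, and raise an event when one appears.

```typescript
// Hypothetical overlay detector a vendor might run in page context.
// SIDEBAR_WIDTH and the z-index threshold are assumptions.
const SIDEBAR_WIDTH = 380;

function detectSuspiciousOverlay(): Element | null {
  // Sample a point well inside the right-edge region a sidebar occupies.
  const x = window.innerWidth - SIDEBAR_WIDTH / 2;
  const y = window.innerHeight / 2;
  for (const el of document.elementsFromPoint(x, y)) {
    const style = getComputedStyle(el);
    const z = Number(style.zIndex);
    if (style.position === "fixed" && Number.isFinite(z) && z > 1_000_000) {
      return el; // page-level element squatting on the sidebar region
    }
  }
  return null;
}

setInterval(() => {
  const overlay = detectSuspiciousOverlay();
  if (overlay) {
    // Surface to the trusted UI layer; the event name is illustrative.
    window.dispatchEvent(
      new CustomEvent("sidebar-overlay-detected", { detail: overlay.tagName })
    );
  }
}, 2_000);
```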
Harden extension policies: isolate content contexts, minimize and scope host permissions, make permission prompts more transparent to users, and provide granular logs of assistant interactions for auditing and anomaly detection.
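Granular interaction logs could be as simple as one structured record per assistant action. The schema below is one assumed shape, not an existing format; note the integrity-check result carried alongside each action.

```typescript
// Assumed shape for an assistant-interaction audit record;
// field names are illustrative, not a standard.
interface AssistantAuditRecord {
  timestamp: string;            // ISO 8601
  tabUrl: string;               // page the assistant was invoked on
  surface: "sidebar" | "omnibox" | "context-menu";
  promptHash: string;           // hash, not raw text, to limit data exposure
  action: "answer" | "navigate" | "form-fill" | "purchase";
  uiIntegrityVerified: boolean; // result of the sidebar integrity check
}

function logInteraction(record: AssistantAuditRecord): void {
  // In practice this would stream to an audit pipeline; printed here.
  console.log(JSON.stringify(record));
}

logInteraction({
  timestamp: new Date().toISOString(),
  tabUrl: "https://example.com",
  surface: "sidebar",
  promptHash: "sha256:…",
  action: "answer",
  uiIntegrityVerified: true,
});
```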
What this means for LLM-integrated browsing
As browsing blends conversation with action, the assistant interface becomes a security boundary, not just a UX element. AI Sidebar Spoofing shows that compromising the UI can silently redirect high-impact workflows. Secure-by-design approaches—least privilege, strong UI provenance, and robust extension governance—are essential to maintain trust in agentic AI.
Organizations and users should reinforce basic hygiene now: audit extensions, control OAuth access, minimize privileges, and monitor vendor advisories. For vendors, investing in verifiable, tamper-resistant assistant UI and stricter extension models will meaningfully reduce the risk of sidebar spoofing and follow-on compromise.