Analyst firm Gartner has released a report titled “Cybersecurity Should Block AI Browsers for Now”, advising organizations to temporarily restrict or fully block the use of AI-powered browsers in corporate environments. According to the report, the current generation of such tools introduces disproportionate cybersecurity and privacy risks, ranging from silent data leakage to unauthorized transactions carried out on behalf of employees.
What Are AI Browsers and Why They Threaten Enterprise Security
Gartner uses the term AI browsers for products such as Perplexity Comet and ChatGPT Atlas, which combine a traditional web browser with an integrated sidebar based on large language models (LLMs). These assistants can analyze the content of active web pages, see browsing history, and perform actions online as semi-autonomous agents — from filling out forms to navigating and interacting with websites.
The report’s authors — Dennis Xu (VP Analyst), Evgeny Mirolyubov (Sr Director Analyst) and John Watts (Analyst) — stress that default settings in most AI browsers prioritize user convenience over security and compliance. That typically means broad data collection, extensive powers for AI agents, weak alignment with corporate security policies and little transparency for security teams.
Data Leakage via AI Sidebars and Cloud Backends
The core technical concern highlighted by Gartner is how AI browsers handle data. To deliver context-aware assistance, sidebars often upload to the provider’s cloud backend the active page content, browser history, and information about open tabs. In an enterprise context, this may include trade secrets, customer personal data, internal documents, source code and confidential communications.
Gartner recommends that organizations assess the security of each AI browser’s backend: where data is stored, how it is encrypted, whether it is used to train models, how long it is retained, and which third parties can access it. If these controls are unclear or insufficient, Gartner advises blocking installation and use of AI browsers across the workforce.
The risks are not hypothetical. Past incidents with general-purpose generative AI tools — for example, engineers at large enterprises inadvertently pasting proprietary code and incident reports into public chatbots — have already led to internal bans and regulatory scrutiny. Similar behavior inside AI browsers could quietly exfiltrate sensitive data at scale, complicating compliance with regulations such as GDPR, HIPAA or sector-specific data protection laws.
Even when a service passes an initial assessment, Gartner advises warning employees that any content rendered in a browser tab with an active AI sidebar may be transmitted to the cloud. As a practical safeguard, organizations should avoid viewing or processing sensitive data in tabs where the AI assistant is enabled, or disable the AI functionality entirely on segmented, high-security networks.
Agentic AI in Browsers: A New Attack Surface
A separate class of risks arises from the agent capabilities of AI browsers. When granted permission, these agents can click links, navigate sites, submit forms, initiate purchases and perform other operations without the user confirming every individual step.
Prompt Injection and Unintended Actions
Gartner highlights the danger of indirect prompt-injection attacks. An attacker can embed hidden instructions in a web page (for example, in invisible text or metadata). When the AI agent “reads” the page, it interprets those instructions as system-level commands, potentially causing it to exfiltrate confidential data to a hostile domain or execute other malicious actions. This threat aligns with the growing focus on prompt injection and data exfiltration in initiatives such as the OWASP Top 10 for LLM applications.
Another concern is incorrect actions caused by flawed model reasoning. LLMs are prone to “hallucinations” and logical errors, particularly in complex business workflows. Within an AI browser, this can escalate from wrong answers into wrong actions: misfilled legal or tax forms, erroneous changes in back-office systems or unintended financial transactions.
Credential Theft and Phishing Automation
If an AI browser is allowed to access corporate password managers or session cookies, it may automatically enter credentials on a phishing site that mimics a legitimate service. In such scenarios, an organization can lose control of high-value accounts without any explicit user interaction, making detection and forensics significantly harder.
Employee Misuse and Procurement Errors
Gartner also points to the human factor. Employees may leverage AI browsers to automate mandatory but routine tasks they would prefer not to perform personally. A typical example is delegating required security-awareness training to the AI agent. While the learning management system records 100% completion, the real cybersecurity awareness of staff remains low, weakening an important defensive layer.
In procurement and travel booking systems, LLM-driven form filling can cause subtle errors — ordering the wrong items, booking tickets for incorrect dates or approving non-compliant suppliers. In large organizations, such mistakes aggregate into significant financial losses and operational disruption.
Gartner’s Recommended Security Controls for AI Browsers
Gartner concludes that AI browsers are currently too risky to deploy broadly without rigorous security due diligence. The firm recommends that organizations:
- Conduct a formal risk assessment for each specific AI browser and its cloud backend, involving security, legal and privacy teams.
- Define prohibited use cases, such as interaction with critical production systems, payments, procurement, HR records and privileged admin accounts.
- Restrict AI agent permissions by default, blocking access to email, financial platforms, internal admin consoles and password managers unless a strong business case exists.
- Enforce data minimization policies: disable model training on corporate data where possible, limit what content can be sent to the cloud, and apply network controls such as TLS inspection and egress filtering.
- Train employees on AI-specific risks, clarifying which categories of data must never be exposed to AI sidebars and how to recognize unsafe automation scenarios.
Against the backdrop of Gartner’s position, organizations should revisit their AI usage policies, explicitly address AI browsers in security standards, and integrate these tools into broader AI governance and risk-management frameworks. Enterprises that act now — by inventorying AI tools, setting clear guardrails and implementing technical controls — will be better positioned to exploit the productivity benefits of artificial intelligence without accepting unacceptable cybersecurity, privacy and compliance risks.