The rapid integration of AI assistants into web browsers — from Copilot in Microsoft Edge and Gemini in Google Chrome to Comet by Perplexity — has made it easy to summarize, analyze, and interact with web content. Research by Cato Networks now shows that these capabilities also introduce a new attack class, dubbed HashJack, which abuses the browser address bar to silently inject prompts into AI models.
What is the HashJack attack in AI browsers?
The HashJack technique exploits how browsers process URLs. Everything after the # symbol (the URL fragment or hash) is handled only on the client side and is never sent to the web server. Traditionally, this mechanism is used for in‑page navigation, anchors, or managing state in single-page applications (SPAs).
Cato Networks researchers demonstrated that an attacker can take a fully legitimate URL, append a “#”, and then add hidden instructions for the AI assistant — including commands, data-exfiltration requests, or social-engineering content. To the server, the request appears benign because the fragment is never transmitted. However, an embedded AI assistant that analyzes the page and its context can see and process this fragment as part of the user’s task.
URL fragments as a covert channel for prompt injection
In practice, HashJack turns the URL fragment into a covert control channel for the AI model. When a user asks the assistant to “summarize this page” or “answer questions about this content,” the language model receives not only the visible page text, but also the malicious prompt hidden after the “#”. This creates an indirect prompt injection: the AI follows the attacker’s secret instructions instead of the user’s apparent intent.
Why HashJack is a new class of indirect prompt injection
Prompt injection has been recognized for several years as a primary risk for generative AI. The OWASP Top 10 for LLM Applications, for example, lists prompt injection as the top category of concern. Traditionally, attackers must control the content that the model reads — such as a compromised website, a document, or a dataset.
The distinctive aspect of HashJack is that it is the first documented technique that can turn any trusted site into a prompt injection vector without compromising the site itself. The page remains legitimate and unchanged. The only thing the attacker needs to control is the link the user clicks, for instance in email campaigns, chat messages, shared documents, or online advertisements, where the URL is silently modified to include a “#” and a malicious fragment.
Realistic exploitation scenarios for AI browser assistants
In their tests, Cato Networks showed that in AI-enabled browsers with agent-like capabilities—such as Perplexity’s Comet—HashJack can be used to trigger data exfiltration toward attacker-controlled servers. Guided by the hidden instructions in the URL, the AI agent may collect browsing history snippets, page content, or even sensitive data typed by the user, and then send it out as part of an apparently legitimate action.
Other scenarios involve forcing AI assistants to generate phishing links, provide misleading recommendations, or distort the interpretation of page content. The risks are especially acute in high-stakes domains such as healthcare, finance, and legal services. If an AI assistant, influenced by a hidden prompt, suggests an incorrect medication dosage or a risky financial move, the impact can go far beyond digital losses.
Vendor responses: Google, Microsoft, and Perplexity
According to Cato Networks, vendors were notified in advance: Perplexity in July, and Google and Microsoft in August. The responses varied. Google reportedly classified the behavior as “expected”, assigned it low priority, and declined to modify Chrome or Gemini to mitigate HashJack-specific risks.
Microsoft and Perplexity, by contrast, released updates to their AI-enhanced browsers aimed at reducing the likelihood of indirect prompt injection via URL fragments. Microsoft indicated that protection against such attacks is treated as a “continuous process,” with each new prompt injection technique evaluated and addressed separately.
Why traditional security controls miss HashJack attacks
Conventional web security tools — including WAFs, server-side filters, and HTTP traffic monitoring systems — do not see anything after the “#” in a URL, since this fragment is never transmitted over the network. As a result, server-side rules, IDS/IPS signatures, and central proxies cannot inspect or block malicious prompts embedded in the hash.
Additionally, many security architectures focus on scanning website content and downloadable files, but do not yet model the combined behavior of the “browser + AI assistant”. The AI logic that interprets hidden context in URL fragments effectively sits outside the visibility of classic URL reputation systems and standard anti-phishing solutions.
Defensive strategies against HashJack and AI prompt attacks
Mitigating HashJack requires a multi-layered defense strategy. At the organizational level, it is advisable to introduce governance around AI tool usage: restrict the set of allowed AI browser assistants, centrally manage their security policies, and disable high-risk features (such as autonomous actions or unrestricted external calls) where there is no clear business need.
On the technical side, organizations should implement client-side URL filtering and normalization, whether in secure browser extensions, endpoint agents, or secure web gateways. Suspicious or unusually long fragments after “#” — especially those that resemble natural language instructions — should be blocked, sanitized, or at least flagged as unsafe for AI processing.
Equally important is monitoring AI assistant activity within browsers. Logging AI queries and responses, scanning for atypical behavior (such as unexpected external URLs or attempts to transmit internal data), and integrating Data Loss Prevention (DLP) controls can all help limit what information AI tools are allowed to send to external services.
User awareness remains a critical control. Employees should understand that a familiar domain name and a trusted browser interface do not guarantee that the AI-generated answer is safe or unbiased. Any AI advice affecting financial decisions, health, or sensitive operations should be cross-checked against independent, authoritative sources before action is taken.
The emergence of HashJack illustrates how AI-powered browsers expand the attack surface from traditional web content to the context consumed by language models. Protecting only websites and network traffic is no longer sufficient; security teams must consider the full chain of “user — browser — AI assistant — external service.” Organizations that proactively adapt their threat models, monitoring, and controls to include AI behavior will be better positioned to ensure that a simple click on a URL with a “#” does not become the entry point for a serious cyberattack.