Security researchers at LayerX have documented a technique they call CometJacking, where attackers embed malicious instructions in URL parameters to steer Perplexity’s agentic AI browser, Comet, into accessing its memory and connected services. In a proof-of-concept, the team demonstrated access to Gmail and Google Calendar artifacts and bypassed platform safeguards by encoding exfiltrated content in base64.
What is Perplexity Comet and why attackers target AI agents
Perplexity Comet is an autonomous AI browsing agent designed to navigate web pages, fill forms, search, and—when permissions are granted—interact with user-linked services such as email, calendars, and booking platforms. This broad “agency” increases utility for users but also expands the attack surface, making Comet a candidate for context hijacking and privilege abuse when prompts or URLs can influence its actions.
How the CometJacking prompt injection works via the collection parameter
According to LayerX, CometJacking is a classic prompt-injection variant that leverages the collection parameter in queries to Comet. Hidden instructions in this parameter direct the agent away from public web sources and toward its internal memory or connected integrations. When those integrations include sensitive services, the agent may retrieve and handle private data as part of “fulfilling” the malicious prompt.
Proof-of-concept: Gmail and Calendar data exfiltration using base64
In testing, the researchers report they were able to obtain Google Calendar invites and Gmail message fields. The malicious prompt instructed Comet to serialize the retrieved content using base64 and transmit it to an external endpoint. Because the data was encoded, the system’s guardrails reportedly failed to recognize the exfiltration, illustrating how simple obfuscation can defeat superficial filtering.
Delivery vectors: phishing links and compromised sites
The practical setup is straightforward: a target receives a crafted link via email, chat, or a website. If Comet processes that URL with elevated permissions, it can execute the embedded instructions without explicit user confirmation. This aligns with common social-engineering patterns, where links serve as the initial access vector for prompt-driven abuses of AI tools.
Vendor response and the debate over agent responsibility
LayerX states it disclosed the issue to Perplexity engineers on August 27–28, 2025. The vendor allegedly categorized the behavior as a “non-actionable prompt injection,” declining the reports. The disagreement underscores a broader industry challenge: drawing a line between “expected agent behavior” and misuse of delegated permissions when instructions are supplied through user-controlled inputs like URLs.
Threat landscape: OWASP Top 10 for LLM and MITRE ATLAS context
The case maps to known risks. The OWASP Top 10 for LLM highlights prompt injection and “excessive agency” as core issues for language-model-driven systems. MITRE ATLAS documents tactics where encoding, obfuscation, and redirection are used to evade rudimentary safeguards and facilitate data loss. The CometJacking scenario illustrates how policy gaps and weak output inspection can combine to enable exfiltration.
Risk mitigation for organizations using AI agents
Minimize privileges and control integrations
Apply least privilege. Restrict OAuth scopes for Gmail and Calendar, disable agent memory and connectors unless necessary, and use separate accounts or environments. Prefer short-lived tokens and explicit consent policies for sensitive actions.
Egress controls and DLP tuned for obfuscation
Implement egress allowlists, block arbitrary external endpoints, and monitor anomalies. Extend DLP rules to detect base64, high-entropy blobs, and DNS/HTTP tunneling patterns, correlating detections with agent activity logs for rapid triage.
Prompt-injection defenses and human oversight
Sanitize and isolate contexts by separating external instructions from system prompts, and enforce “safe templates” and deny policies for sensitive operations. Introduce UI confirmations and “human-in-the-loop” steps for reading emails, sending messages, or exporting data.
Training, testing, and monitoring
Conduct regular red-team exercises targeting prompt-injection and URL-parameter attacks. Instrument detailed agent action logging and alert on exfil attempts. Train staff to recognize risky links and content designed to manipulate AI tools.
As AI agents gain capabilities, risk shifts from the model itself to the permissions and integrations surrounding it. CometJacking demonstrates that even basic encoding can defeat naive guardrails. Organizations should revisit AI-agent threat models, tighten integration and egress controls, and demand secure-by-default designs from vendors—explicit exfiltration blocks, decoding-aware detection for encoded outputs, and step-up confirmations for sensitive actions. These measures reduce attack surface and help sustain trust in agentic AI.