Reprompt Vulnerability in Microsoft Copilot: How One Click Could Expose Your Data

CyberSecureFox 🦊

Security researchers at Varonis have disclosed a previously unknown attack vector against Microsoft Copilot, dubbed Reprompt. The weakness allowed an attacker to hijack an active Copilot session and silently extract a user’s confidential data with a single click on a specially crafted link—no malware installation, browser extensions, or advanced user interaction required.

Microsoft Copilot as an Attack Surface: Why Reprompt Matters

Microsoft Copilot is deeply embedded across the Microsoft ecosystem, including Windows, the Edge browser, and multiple Microsoft 365 applications. Depending on configuration, Copilot can access chat history, email content, documents, cloud files, and other personal or corporate data. This makes it a high‑value target for two rapidly growing classes of attacks: prompt injection and session hijacking.

Varonis discovered that the web version of Copilot accepts instructions via the q URL parameter. By embedding a carefully designed prompt into this parameter, an attacker could cause Copilot to automatically execute instructions as soon as the page loads—effectively acting under the victim’s authenticated identity.

How the Reprompt Microsoft Copilot Attack Works

Phishing with Trusted‑Looking Copilot Links

The Reprompt attack chain starts with a phishing email containing a legitimate‑looking link to Copilot or another Microsoft service. The domain and path appear trustworthy, often using official Microsoft URLs, while the malicious payload is hidden inside the q query parameter as a complex prompt aimed at steering Copilot’s behavior.

When the victim clicks the link, three key steps follow:

1. Copilot automatically processes the attacker‑supplied prompt from the q parameter with no additional user action. To the user, the page appears as a normal Copilot session.

2. The injected prompt instructs Copilot to initiate further interactions with a server controlled by the attacker. This communication becomes a persistent data exfiltration channel, driven entirely by the AI assistant.

3. Reprompt abuses the victim’s already authenticated Microsoft session. According to Varonis, that session may remain valid even after the Copilot tab is closed, giving the attacker a time window to continue harvesting data available to Copilot in the user’s context.

A crucial detail is that subsequent instructions no longer come from the initial URL, but from the attacker’s backend. Client‑side security tools and web filters see only the initial prompt and cannot easily detect what data and commands flow through the follow‑up AI‑driven requests.

Prompt Injection Explained

This behavior exemplifies prompt injection—a technique where an attacker crafts prompts that override or subvert an AI system’s intended instructions. Similar to classic injection flaws (still ranked among the OWASP Top 10 for web applications), prompt injection exploits how large language models (LLMs) interpret and prioritize natural language instructions, often bypassing high‑level safety rules when they conflict with user‑supplied prompts.

Bypassing Copilot Protections with “Double Function Calls”

Microsoft implemented safeguards to prevent Copilot from returning sensitive data on the first web request. Varonis, however, showed that Reprompt can circumvent these defenses using a technique they describe as “double function invocation”.

The researchers prepared a URL accessible to Copilot that contained a hidden secret string, “HELLOWORLD1234”. They then placed a prompt into the q parameter instructing Copilot to call every function twice, compare both outputs, and display only the “better” result.

The outcome was revealing:

On the initial call, Copilot’s protection mechanisms correctly suppressed the secret and refused to display the sensitive value.

On the second call—triggered purely by the prompt logic—Copilot returned the secret string, exposing that the built‑in filter could be bypassed through multi‑step, prompt‑driven interaction.

Varonis published a video demonstration showing how such a URL can extract data from Copilot’s “memory” and access information visible within the victim’s active session context, underscoring that AI‑driven workflows can be abused even when first‑layer checks appear effective.

Which Versions of Microsoft Copilot Were Vulnerable?

According to Varonis, Reprompt affected only Copilot Personal, the consumer‑oriented product for home and individual use. Microsoft 365 Copilot, designed for enterprises, was not susceptible due to additional layers of defense such as Microsoft Purview auditing, tenant‑wide DLP (Data Loss Prevention) policies, and stricter administrative access controls.

Varonis privately reported the vulnerability to Microsoft on 31 August of the previous year. In January 2026, Microsoft released a security update that mitigates Reprompt. At the time of the public disclosure, no evidence of Reprompt being exploited in the wild had been identified, but users are strongly advised to apply the latest Windows and Copilot updates without delay.

Key Security Lessons for AI Assistants and LLM Integrations

The Reprompt case illustrates that AI assistants tightly integrated with operating systems and cloud platforms are becoming a full‑fledged attack surface. They combine broad data access with complex, often opaque behavior, which makes traditional security controls harder to apply.

For individual users, risk reduction should start with timely patching and basic cyber hygiene: treating links with caution even when they appear “official”, limiting application permissions, and regularly reviewing privacy settings in the Microsoft ecosystem.

For organizations, AI security must be addressed systematically. Recommended measures include enforcing centralized DLP policies, activating Microsoft Purview audit for Copilot usage, restricting which data sources AI assistants may access, and regularly testing applications for prompt injection resilience. Security teams should treat LLM‑powered features as they do APIs and web applications: subject to threat modeling, penetration testing, and continuous monitoring.

AI‑enabled productivity tools will continue to expand their reach across both consumer and enterprise environments. Understanding vulnerabilities like Reprompt and proactively hardening AI integrations against prompt injection, session hijacking, and data exfiltration is essential to ensuring that the benefits of Copilot and similar assistants do not come at the expense of security and privacy.

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.