Microsoft 365 Copilot Bug Bypasses DLP and Sensitivity Labels for Confidential Emails

CyberSecureFox 🦊

A recently identified bug in the corporate version of Microsoft 365 Copilot allowed the AI assistant to read and summarize confidential emails, even when Data Loss Prevention (DLP) policies and sensitivity labels should have blocked such access. The incident highlights a growing class of risks where AI services embedded in office platforms interact incorrectly with enterprise security controls.

What Went Wrong in Microsoft 365 Copilot Chat

According to information disclosed by Microsoft, the issue affected the Copilot Chat component—an AI chatbot integrated into Word, Excel, PowerPoint, Outlook and OneNote, with access to a user’s work context. Copilot Chat is designed to answer questions, generate summaries of emails and documents, and automate routine tasks based on enterprise content.

The malfunction, recorded on 21 January and tracked under the identifier CW1226324, appeared in the Work tab of Copilot Chat. In this context, the assistant improperly processed messages stored in the Sent Items and Drafts folders, including emails marked as confidential through Microsoft 365’s built-in information protection features.

The core problem was that Copilot effectively ignored DLP rules and sensitivity label restrictions. Under normal conditions, these controls are intended to prevent automated systems—including AI assistants—from analyzing particularly sensitive content or exposing it in generated responses.

How Copilot Bypassed DLP Policies and Sensitivity Labels

The Role of DLP Policies and Sensitivity Labels in Microsoft 365 Security

In Microsoft 365, sensitivity labels and Data Loss Prevention (DLP) policies are key mechanisms for protecting critical information such as trade secrets, personal data and financial records. Sensitivity labels classify content (for example, “Confidential – Finance”) and can enforce encryption, access limitations and usage restrictions. DLP policies then monitor where such data is stored or shared and block risky operations, such as copying, forwarding or exporting.

When configured correctly, these controls are also used to limit how automated or external services process protected content, including AI tools like Microsoft 365 Copilot. In this incident, however, emails carrying sensitivity labels were still available for reading and summarization inside Copilot Chat, pointing to a flaw in Copilot's access-checking logic. Microsoft has acknowledged a "bug in the code" but has not publicly shared technical implementation details.
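Conceptually, the check that failed can be pictured as a gate applied before any content enters the assistant's context. The sketch below is purely illustrative — the label names, ranking and function names are hypothetical and do not reflect Microsoft's actual implementation — but it shows the invariant that broke: labeled content above a permitted threshold must never reach the AI, regardless of which folder holds it.

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels, ordered least to most restrictive.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

@dataclass
class Message:
    subject: str
    folder: str   # e.g. "Inbox", "Drafts", "Sent Items"
    label: str    # sensitivity label applied to the message

def allowed_in_ai_context(msg: Message, max_label: str = "General") -> bool:
    """Return True only if the message's label is at or below the
    tenant's permitted threshold for AI processing."""
    return LABEL_RANK[msg.label] <= LABEL_RANK[max_label]

def build_ai_context(messages: list[Message]) -> list[Message]:
    """Filter the mailbox before handing anything to the assistant.
    The folder a message sits in is irrelevant: only the label matters."""
    return [m for m in messages if allowed_in_ai_context(m)]

mailbox = [
    Message("Quarterly numbers", "Sent Items", "Confidential"),
    Message("Team lunch", "Inbox", "General"),
]
context = build_ai_context(mailbox)
# Only the "General" message survives the filter.
```

In the CW1226324 incident, the equivalent of this filter was effectively skipped for items in Drafts and Sent Items, so labeled messages flowed into Copilot's working context.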

Why Drafts and Sent Items Represent a High-Impact Risk

The Drafts and Sent Items folders typically contain some of the most sensitive and candid information in corporate email systems. Drafts often include unfiltered ideas, preliminary negotiations and raw data. Sent messages usually contain finalized wording, exact figures, internal links, personal data and legal or financial details.

When DLP enforcement fails in these specific locations, the impact can be severe. Leakage from Drafts or Sent Items can expose the full context of negotiations, contract clauses, internal strategies and financial terms. If an AI assistant is able to summarize or re-share this content in response to broad prompts, exposure can extend well beyond the intended audience—even without any external compromise.

Business Risks and Potential Consequences for Microsoft 365 Customers

Microsoft began rolling out a fix in early February, but has not disclosed exact timelines for full remediation across all tenants, nor the total number of affected organizations. The company notes that the impact assessment may evolve as the internal investigation continues.

Even if no data was directly exfiltrated to external parties, the mere fact that a generative AI service could operate outside existing DLP boundaries is a serious concern. Internal information may have been used in Copilot responses, surfaced to colleagues lacking appropriate permissions, or made accessible to a broader group than intended by the organization’s access control model.

Recent industry reports, including the Verizon Data Breach Investigations Report and ENISA threat analyses, consistently emphasize that a growing proportion of incidents stem not from classic external hacks, but from misconfigurations and logic flaws in cloud and SaaS services. As AI becomes more deeply embedded in productivity tools, the integrity of authorization checks and policy enforcement within these AI features becomes as critical as traditional perimeter defense.

Practical Steps Organizations Should Take Now

The CW1226324 case demonstrates that even mature cloud platforms are not immune to AI-related logic errors affecting confidentiality. Organizations extensively using Microsoft 365 Copilot and similar AI tools can reduce exposure by taking several concrete actions.

1. Reassess AI usage policies for high-sensitivity roles. It can be prudent to temporarily limit Copilot Chat usage for teams handling highly sensitive data—such as legal, finance, M&A or R&D—or to separate accounts and environments used for routine tasks from those used for critical data handling.

2. Audit DLP policies and sensitivity labels with AI in mind. Security teams should verify how DLP rules, sensitivity labels and conditional access policies are configured in Microsoft 365, ensuring that AI processing is explicitly restricted for the most sensitive data types wherever possible. These policies should be tested against realistic Copilot usage scenarios rather than only theoretical workflows.

3. Strengthen monitoring, logging and anomaly detection. Organizations should enable and regularly review audit logs relating to content access and AI queries, including Copilot prompts that reference confidential email or document repositories. Suspiciously broad or unusual queries should be investigated and, where feasible, automatically flagged.

4. Integrate AI assistants into a Zero Trust architecture. A Zero Trust model assumes that no user, device or service—internal or external—is inherently trusted. AI assistants such as Copilot must be treated as separate applications with explicitly defined and minimal read and processing permissions. Least-privilege principles should govern which mailboxes, SharePoint sites or Teams channels the assistant can access.

The Microsoft 365 Copilot bug identified as CW1226324 serves as a clear reminder: when deploying AI in corporate environments, it is not enough to focus on productivity gains alone. Organizations need robust layers of control, validation and monitoring around how AI interacts with protected data. Companies that proactively revisit their DLP strategy, educate staff on responsible AI use and align Copilot deployments with Zero Trust principles will be far better positioned to prevent critical data exposure in future AI-related incidents.
