Microsoft has confirmed a bug in Microsoft 365 Copilot, tracked as CW1226324, that allowed the AI assistant to summarize emails marked with sensitivity labels, bypassing existing Data Loss Prevention (DLP) policies. The issue reportedly affected emails in the Sent Items and Drafts folders, enabling Copilot Chat to process confidential content without proper authorization. Microsoft disclosed that the bug was active from January 21, 2026, until it was addressed on February 3, 2026. While the company has resolved the vulnerability, it has not provided detailed information about the scope or impact of the exposure.
The bug raised concerns among enterprises relying on Microsoft 365 for secure communication and data governance. DLP policies are typically used to prevent unauthorized sharing of sensitive information, including financial records, intellectual property, and personal data. By bypassing these controls, the Copilot feature temporarily allowed sensitive content to surface in AI chat summaries, potentially creating compliance and privacy risks for organizations across industries. Analysts have emphasized that the incident highlights the importance of continuous oversight when integrating AI tools into corporate productivity environments.
Copilot, Microsoft’s AI assistant embedded in Microsoft 365, is designed to help users summarize, draft, and analyze emails and documents efficiently. However, the CW1226324 bug demonstrated that even trusted AI tools can unintentionally override critical security policies if vulnerabilities exist. Microsoft’s internal investigation reportedly identified the root cause and deployed a fix within two weeks of discovery, ensuring that sensitivity labels are now respected by Copilot Chat. The company has advised customers to review audit logs and monitor any unusual AI activity that may have occurred during the exposure window.
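For administrators following Microsoft's advice to review audit logs, one approach is to export unified audit log records and filter for Copilot interactions that fall inside the exposure window. The sketch below is a hypothetical illustration, not an official tool: the field names (`RecordType`, `CreationDate`, `UserId`) follow the general shape of Microsoft 365 unified audit log exports, but should be verified against your own tenant's export before use.

```python
from datetime import datetime

# CW1226324 exposure window reported by Microsoft.
EXPOSURE_START = datetime(2026, 1, 21)
EXPOSURE_END = datetime(2026, 2, 3, 23, 59, 59)

def copilot_events_in_window(records):
    """Return exported audit records for Copilot interactions that
    occurred during the exposure window. `records` is a list of dicts
    parsed from an audit log export (field names are assumptions)."""
    hits = []
    for rec in records:
        if rec.get("RecordType") != "CopilotInteraction":
            continue
        created = datetime.fromisoformat(rec["CreationDate"])
        if EXPOSURE_START <= created <= EXPOSURE_END:
            hits.append(rec)
    return hits

if __name__ == "__main__":
    # Illustrative sample records, not real audit data.
    sample = [
        {"RecordType": "CopilotInteraction",
         "CreationDate": "2026-01-25T10:00:00",
         "UserId": "alice@contoso.com"},
        {"RecordType": "ExchangeItem",
         "CreationDate": "2026-01-25T10:00:00",
         "UserId": "bob@contoso.com"},
        {"RecordType": "CopilotInteraction",
         "CreationDate": "2026-03-01T09:00:00",
         "UserId": "alice@contoso.com"},
    ]
    for hit in copilot_events_in_window(sample):
        print(hit["UserId"], hit["CreationDate"])
```

Records flagged this way can then be cross-checked against mailboxes containing labeled content to judge whether any protected email was summarized during the window.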
While the immediate technical resolution was implemented, the lack of a disclosed impact assessment leaves questions for organizations regarding whether sensitive information was summarized or stored by the AI during the vulnerability period. Cybersecurity experts have pointed out that even brief lapses in DLP enforcement can expose corporations to compliance risks under privacy regulations and contractual obligations. This event underscores the ongoing need for enterprises to carefully monitor AI-enabled productivity tools and maintain robust data governance frameworks to protect confidential communications within cloud environments.