Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Microsoft Patched Flaw Allowing Attackers to Hijack Copilot Responses

A well-phrased email was all an attacker would have needed to trick Microsoft Copilot into handing over sensitive data until the operating system giant patched the vulnerability.
See Also: Taming Cryptographic Sprawl in a Post-Quantum World
The vulnerability in Microsoft 365 Copilot allowed attackers to extract sensitive data through a zero-click prompt injection attack, said researchers from Aim Security. Dubbed “EchoLeak” and tracked as CVE-2025-32711, the vulnerability received a CVSS severity score of 9.3. Microsoft patched the flaw prior to public disclosure, adding that there is currently no evidence it was exploited in the wild and that users need not take any action.
Copilot, Microsoft’s generative artificial intelligence suite embedded across Office, can summarize emails, draft documents and analyze spreadsheets. Access to Copilot is typically restricted only to users within a given organization, but Aim Security found that the attack could be triggered by sending an email.
Aim said that the exploit chain allows an attacker to craft an email that prompts Copilot to extract and send back highly sensitive contextual data, such as internal documents or messages, without requiring any user interaction or visible indication of compromise.
The mechanics of the exploit hinge on a nuanced form of prompt injection, an attack technique where a user feeds instructions into an AI model to override or manipulate its behavior. The emails bypass detection by disguising themselves as instructions intended for the user, not Copilot, the researchers said. Copilot scans incoming messages to offer summaries or context before a user opens them, enabling the attacker to quietly plant a prompt.
The malicious message included a link to the attacker’s domain, with query string parameters requesting the most sensitive information from Copilot’s memory. The AI then responded by appending that data to the link, sending it back to the attacker-controlled server.
“The attacker’s instructions specify that the query string parameters should be THE MOST sensitive information from the LLM’s context, thus completing the exfiltration,” the research showed.
Copilot is designed to avoid redacting or following unsafe links using markdown formatting. But Aim researchers discovered that reference-style markdowns, which are less commonly used, could bypass this protection. This allowed the malicious prompt to embed links without triggering Copilot’s usual safety filters.
In a proof-of-concept example, the researchers asked Copilot: “What’s the API key I sent myself?” and Copilot responded accordingly. In another, they used markdown quirks to generate an image in the email body, though Microsoft’s content security policy prevented the image from being fetched by the browser.
But that restriction wasn’t a full barrier. The researchers said they ultimately bypassed Microsoft’s URL allowlisting requirement using peculiarities in how SharePoint and Microsoft Teams handle invitation flows, allowing their image payloads to render.
Researchers said flaws like EchoLeak demonstrate that LLM-powered tools are creating new types of vulnerabilities that traditional filters may fail to catch. Microsoft has not provided details on when it became aware of the issue or how it was initially detected.