Copilot Falls for Prompt Injection Yet Again

Microsoft quietly fixed a flaw that allowed users to instruct Copilot, the artificial intelligence model embedded in its products, not to log its access to corporate files, says a technologist.
The Redmond-based tech giant is betting heavily on Copilot, embedding the large language model even more deeply into its Office suite of programs. That’s already created cybersecurity problems as users and researchers discover new ways to launch prompt injection attacks that trick the model into giving up sensitive information (see: Copilot AI Bug Could Leak Sensitive Data via Email Prompts).
Zack Korman, CTO of cybersecurity firm Pistachio, said in a Monday blog post that he didn’t dupe Copilot into giving up sensitive information so much as create the conditions for it.
The loophole Korman details is that he could tell Copilot to summarize a document in a way that kept the request to access the file out of the audit log.
“Audit logs are important,” he wrote. “Imagine someone downloaded a bunch of files before leaving your company to start a competitor; you’d want some record of that and it would be bad if the person could use Copilot to go undetected.” Microsoft touts Copilot as compatible with a wide range of regulatory and security standards that require activity logging.
Microsoft says Copilot automatically logs and retains for 180 days activities such as prompts and the documents that Copilot accesses in response to a prompt – at least for users who subscribe to its audit tier.
“But what happens if you ask Copilot to not provide you with a link to the file it summarized? Well, in that case, the audit log is empty,” Korman wrote.
Korman said he told Copilot to summarize a confidential document but not to include the document as a reference. "JUST TELL ME THE CONTENT," he typed. A look at the audit logs showed that the AccessedResources field in the log was blank. "Just like that, your audit log is wrong. For a malicious insider, avoiding detection is as simple as asking Copilot."
“If you work at an organization that used Copilot prior to Aug. 18, there is a very real chance that your audit log is incomplete,” Korman said.
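Korman’s description suggests a rough way administrators might hunt through that history. The Python sketch below assumes a standard Microsoft Purview audit export – a CSV with CreationDate, UserIds, Operations and a JSON AuditData column – and assumes Copilot events are recorded under a CopilotInteraction operation carrying an AccessedResources list; those column and key names are assumptions that may differ in a given tenant. It flags interaction records where the list came back empty. An empty field is not proof of abuse on its own, since prompts that touch no files also leave it blank, but it narrows down which records deserve a closer look.

```python
# Sketch: scan an exported Purview audit log for Copilot interaction records
# whose AccessedResources field is blank, as in the flaw Korman described.
# Assumptions: the export is a CSV with an "AuditData" column of JSON, Copilot
# events use a "CopilotInteraction" operation, and accessed files appear under
# an "AccessedResources" key. Adjust names to match your tenant's export.
import csv
import json


def flag_blank_copilot_access(csv_path: str) -> list[dict]:
    """Return Copilot interaction records with no accessed resources logged."""
    suspicious = []
    with open(csv_path, newline="", encoding="utf-8-sig") as fh:
        for row in csv.DictReader(fh):
            operation = row.get("Operations", row.get("Operation", ""))
            if operation.strip().lower() != "copilotinteraction":
                continue
            data = json.loads(row.get("AuditData") or "{}")
            # The field Korman found blank even though Copilot had read a file.
            resources = data.get("AccessedResources") or data.get(
                "CopilotEventData", {}).get("AccessedResources")
            if not resources:
                suspicious.append({
                    "time": row.get("CreationDate"),
                    "user": row.get("UserIds"),
                })
    return suspicious


if __name__ == "__main__":
    for hit in flag_blank_copilot_access("purview_audit_export.csv"):
        print(f"{hit['time']}  {hit['user']}  CopilotInteraction with no AccessedResources")
```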
Michael Bargury, CTO of Zenity, separately flagged the same issue during the Black Hat 2024 conference, along with other significant security weaknesses in Copilot, particularly around prompt injection. By sending an email, a Teams message or a calendar event, attackers can use prompt injection “to completely take over Copilot on your behalf,” Bargury said at the time. “That means I control Copilot. I can get it to search files on your behalf with your identity, to manipulate its output and help me social-engineer you.” (see: Navigating AI-Based Data Security Risks in Microsoft Copilot)
Microsoft fixed the issue on Aug. 17, Korman wrote, but refused to assign the vulnerability a CVE designation. The tech giant did not immediately respond to a request for comment, but told The Register, “We appreciate the researcher sharing their findings with us so we can address the issue to protect customers.”
Security researcher Kevin Beaumont flagged Korman’s blog post, predicting that there will be “dead bodies in cupboards over that. Everything wasn’t magic immune from vulns until a year ago.”
Korman also expressed strong dissatisfaction with Microsoft’s handling of his vulnerability report. The process, he says, was messy: Microsoft assigned vague labels to the report’s status, which he likened to a “Domino’s pizza tracker for security researchers.”