Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Researcher Found Bug Could Exfiltrate Secrets Via Camo Images

A now-patched flaw in GitHub Copilot Chat could have enabled attackers to steal source code and secrets by embedding hidden prompts that hijacked the artificial intelligence assistant’s responses. The exploit also used the repository platform’s image proxy to leak the stolen data.
See Also: AI Agents Demand Scalable Identity Security Frameworks
The vulnerability, discovered by Legit Security researcher Omer Mayraz, combined a remote prompt injection with an inventive bypass of GitHub’s content security policy. It used Cam, the platform’s image proxying service, to pull private data out of repositories.
GitHub Copilot Chat is an AI assistant built into GitHub that helps developers by answering questions, explaining code and suggesting implementations directly in their workflow.
The flaw combined two issues: hidden pull-request comments and other content that Copilot read were not properly isolated or validated. The other was the way GitHub’s image proxy handled external images could be abused. By preparing signed image links and having Copilot assemble them, the researcher was able to turn a security feature into a channel for stealing data.
The researcher said he reported the issue through HackerOne. GitHub responded by disabling image rendering in Copilot Chat and says the vulnerability was fixed as of Aug. 14.
Copilot Chat’s usefulness stems from its context awareness. It reads repository files, pull requests and other workspace artifacts so it can tailor answers to a developer’s code. The researcher made Copilot “notice” an injected instruction by hiding a prompt inside a pull request description. GitHub supports hidden content through an official feature for comments. When the researcher moved the prompt into a concealed comment, the assistant still picked it up. The injected instruction affected other users who visited the pull request page, demonstrating that hidden prompts could influence Copilot responses for anyone who loaded the same context.
“Our research was limited to pull-request comments, which is an interface any developer on the planet can utilize against open-source projects,” said Liav Caspi, CTO and co-founder of Legit Security. “There might be other places where hidden prompts can be embedded, but the attack chain will be much harder for an anonymous actor.”
Once Copilot read the hidden instruction, the attacker could make it display messages, suggest fake code, or insert links and images in its replies. Since Copilot has the same access as the user running it, the hidden commands could tell it to search private code for sensitive information, turn that data into readable text and include it in its responses.
GitHub usually blocks web pages from loading images from unknown sites and routes any external images through its own proxy, Camo. When a user adds an external image, GitHub converts the link into a camo.githubusercontent.com address with a cryptographic signature, allowing only those signed links to be accessed. That setup stops outsiders from tricking a browser into fetching attacker-controlled content directly from a user’s session.
The researcher found a way around GitHub’s image protections by creating pre-approved image links for letters and symbols. He trained Copilot to use these images like pieces in a puzzle to spell out text. Each image linked to the researcher’s server, so when GitHub’s proxy loaded the images, it sent along the encoded data hidden inside them.
“Technically, yes,” Caspi said when asked if similar image proxies could be abused elsewhere. “The concept that was found is the ability to steal data by making the AI download images and encoding the message in the download requests. It is likely that it is possible to manipulate other AI systems in a similar way – ask the agent to perform a series of seemingly innocent tasks on sensitive input, which can leak that sensitive data to a threat actor.”
To actually steal data, the researcher converted pieces of private code into a simple text format and added them to the pre-approved image links. He also added random numbers to prevent caching. On their side, a basic web server sent back to invisible one by one pixel images, so nothing showed up in the user’s chat. As GitHub’s proxy fetched the images, it carried the encoded data, bypassing usual browser security rules.
The researcher proved the method could find specific secrets, for example, the token AWS_KEY
, and send them out from private repositories. The attack could also manipulate Copilot for other users, showing formatted text, code and links that looked legitimate.
“Had it not been patched by GitHub, it is very likely this manipulation could be carried out by a threat actor without getting caught,” Caspi said. “The only caveat is that it can be used to steal a small piece of data, like a secret token or security issue, but not large-scale code theft.”
Caspi said that Copilot users can take basic hardening steps like reviewing data before sending and putting together configurations like proper ‘ignore’ files that deny access to sensitive files. But he added it is hard to protect against prompt injection and only network monitoring can eventually find out if the AI system is sending sensitive info to a third party.