Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Prompt Injection, HTML Output Rendering Could Be Used for Exploit

Hackers can exploit vulnerabilities in a generative artificial intelligence assistant integrated across GitLab’s DevSecOps platform to manipulate the model’s output, exfiltrate source code and potentially deliver malicious content through the platform’s user interface.
See Also: On Demand | Global Incident Response Report 2025
Researchers at Legit Security said that prompt injection and HTML output rendering could be used to exploit vulnerabilities in GitLab Duo, and hijack generative AI workflows and expose internal code. GitLab has patched the vulnerabilities.
The Duo chatbot is touted to “instantly generate a to-do list” that prevents developers from “wading through weeks of commits.”
Legit Security co-founder Liav Caspi and security researcher Barak Mayraz demonstrated how GitLab Duo could be manipulated using invisible text, obfuscated Unicode characters and misleading HTML tags, subtly embedded in commit messages, issue descriptions, file names and project comments.
Because Duo reads surrounding project context, such as titles, comments and recent code commits, it can be manipulated using seemingly innocuous text artifacts. These prompts were designed to alter Duo’s behavior or force it to output sensitive information. One commit message included a hidden directive instructing Duo to expose the content of a private file when asked a benign question. Because the assistant lacked strong guardrails, it complied.
GitLab Duo has since updated how it handles contextual input, making it less likely to follow such embedded instructions, but the researchers said that the attack illustrates how even routine developer activity can introduce unexpected threats when AI copilots are in the loop.
Another critical issue was how Duo’s rendered output within GitLab’s web interface. Instead of escaping potentially dangerous content, the assistant’s HTML-based responses were displayed directly, without sanitization. This allowed Legit researchers to insert img
and form
tags into Duo’s responses, which GitLab rendered inside the developer’s browser session. While Legit’s proof-of-concept attacks didn’t escalate to full session hijacking, the presence of interactive HTML in AI responses created the potential for credential harvesting, clickjacking or exfiltration via web beacons.
GitLab Duo is designed to be integrated across development workflows, offering AI-powered support for writing code, summarizing issues and reviewing merge requests. The tight integration can be beneficial for developer productivity, but makes the assistant a powerful and potentially vulnerable attack surface. Legit Security advised treating generative AI assistants, especially those embedded across multiple stages of a CI/CD pipeline, as part of an organization’s application security perimeter.
“AI assistants are now part of your application’s attack surface,” the company said, adding that security reviews should extend to LLM prompts, AI-generated responses and the ways these outputs are rendered or acted upon by users and systems.
GitLab said last year that it has updated its rendering mechanism to escape unsafe HTML elements and prevent unintended formatting from being displayed in the UI. It had also implemented several fixes, including input sanitization improvements and rendering changes to better handle AI output. GitLab added that customer data was not exposed during the research and no exploitation attempts were detected in the wild.