Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Zero-Click Vulnerability Let Attackers Weaponize Enterprise AI Assistant

Google patched a vulnerability in Gemini Enterprise that allowed attackers to steal corporate data through a shared document, calendar invitation or email without any user action or security alerts.
See Also: A CISO’s Perspective on Scaling GenAI Securely
Noma Labs discovered the vulnerability, christened GeminiJack, in Google Gemini Enterprise after first spotting it in Google’s Vertex AI Search, a separate enterprise search product. Google collaborated with Noma Labs to validate the findings and deployed updates that changed how Gemini Enterprise and Vertex AI Search interact with their underlying retrieval and indexing systems.
The attack exploited how enterprise artificial intelligence systems interpret information. Attackers embedded hidden instructions inside shared documents. When employees performed standard searches in Gemini Enterprise, the AI automatically retrieved the poisoned document and executed the embedded instructions. Since Gemini Enterprise has access to organizational Gmail, Calendar, Docs and other Workspace data sources, those instructions triggered the AI to search across all of them. The results were sent to attackers using disguised external image requests.
From a security team’s perspective, no malware was executed, no credentials were phished and no data left through approved channels. Data loss prevention tools saw nothing unusual.
A single prompt injection could steal years of email correspondence containing customer data, financial discussions and strategic decisions. Complete calendar histories revealing business relationships, deal timelines and organizational structure were accessible. Entire document repositories including confidential agreements, technical specifications and competitive intelligence could be compromised.
Attackers did not need to know organizational charts, customers or projects. Generic search terms like “confidential,” “API key,” “acquisition,” “salary” or “legal” let the AI do the work.
Google Gemini Enterprise’s search feature implements a retrieval-augmented generation architecture, a system that allows AI to pull and combine information from multiple data sources to answer queries. The architecture enables organizations to query across Gmail, Google Calendar, Google Documents and other Google Workspace components. Organizations must pre-configure which data sources the system can access. Once configured, the system has persistent access to these data sources for all user queries.
The attack involved four steps. First, the attackers created normal-looking Google Docs, Google Calendar events or Gmail messages and shared them with someone in the target organization. In the content were instructions designed to tell the AI to search for sensitive terms and load the results into an external image URL controlled by the attacker.
After an employee activated Gemini Enterprise through a search, the system used its retrieval system to gather relevant content – pulling the attacker’s document into its context, interpreting the instructions as legitimate queries and executing them across all Workspace data sources it had permission to access.
Finally, Google Gemini included the attacker’s external image tag in its output. When the browser attempted to load that image, it sent the collected sensitive information directly to the attacker’s server through a single HTTP request.
Google did not filter the HTML output, which allowed the embedded image tag to trigger a remote call to the attacker’s server when loading the image, said Sasi Levi, security research lead at Noma Security. The URL contained the exfiltrated internal data discovered during searches. Levi said the researchers were able to successfully exfiltrate lengthy emails, though researchers didn’t verify the maximum payload size.
The GeminiJack vulnerability represents a classic example of an indirect prompt injection attack, Levi told Information Security Media Group. Detection requires comprehensive inspection of all data sources feeding the agent’s context including tool outputs, retrieval-augmented generation data and other external inputs, he added.
“This type of attack will not be the last one of its kind. It reflects a growing class of AI-native vulnerabilities,” Noma researchers wrote.
The vulnerability exploited the trust boundary between user-controlled content in data sources and the AI model’s instruction processing. Levi said retrieval-augmented generation-based enterprise AI systems are vulnerable because they blend trusted instructions with untrusted retrieved content and let the model act with broad privileges. That’s also the root cause behind ForcedLeak, a flaw Noma discovered in Salesforce Agentforce. “The fix is architectural: systems must enforce strict boundaries between instructions and evidence, attach provenance and trust levels to every retrieved item, and prevent untrusted content from rewriting goals or triggering high-impact actions,” he said.
