AI Dependency Attack Reportedly Exposes Data and Source Code

Artificial intelligence recruiting firm Mercor said it was compromised by the LiteLLM supply chain attack, making it the first confirmed downstream victim.
The Mercor breach stemmed from malicious versions of LiteLLM, a widely used LLM gateway that routes requests between applications and more than 100 model providers. Credential-stealing malware was injected into the project's distribution, and because LiteLLM sits at a central integration point in AI systems, the compromise created a high-leverage attack vector affecting a large number of organizations simultaneously.
“We recently identified that we were one of thousands of companies impacted by a supply chain attack involving LiteLLM,” Mercor posted to X late Tuesday.
Mercor said its security team acted quickly to contain and remediate the incident and launched a forensic investigation with third-party experts.
“We will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible,” Mercor posted on X.
The malicious LiteLLM packages were designed to steal credentials such as API keys, cloud secrets and tokens, which attackers then used to access internal systems. This approach reflects a broader trend in supply chain attacks in which the goal is not immediate disruption but quiet access that can be reused across systems (see: LiteLLM Hit in Cascading Supply-Chain Attack).
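The class of data this malware targeted, credentials held in process environment variables, is also what defenders audit for after such an incident. The following is a minimal audit sketch under assumed naming conventions, not the malware's code or Mercor's tooling; real secret scanners use far richer rule sets than this one regex.

```python
import os
import re

# Hypothetical patterns for credential-like variable names; production
# scanners also match value formats (e.g. "sk-..." or "AKIA..." prefixes).
SENSITIVE_NAME = re.compile(
    r"(API[_-]?KEY|SECRET|TOKEN|PASSWORD|CREDENTIAL)", re.IGNORECASE
)


def find_exposed_credentials(env: dict) -> list:
    """Return names of environment variables that look like credentials."""
    return sorted(name for name in env if SENSITIVE_NAME.search(name))


if __name__ == "__main__":
    # Audit the current process environment.
    for name in find_exposed_credentials(dict(os.environ)):
        print(name)
```

Anything surfaced by a scan like this is exactly what a compromised dependency can read the moment it is imported, which is why rotating every such credential is standard remediation after a package compromise.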
What Data Was Exposed Inside Mercor’s Environment?
In Mercor’s case, stolen credentials were reportedly used to access internal infrastructure and move laterally across internal systems, giving attackers deeper access into repositories and storage environments and enabling the exfiltration of some 4 terabytes of data, including source code and sensitive datasets. The scale suggests attackers had sustained access and were able to systematically extract high-value assets.
“Wow. Incredible amount of SOTA training data now just available to China thanks to @mercor_ai leak,” Y Combinator president and CEO Garry Tan wrote on X. “Every major lab. Billions and billions of value and a major national security issue.”
Data potentially exposed in the Mercor breach reportedly includes source code repositories, internal databases and cloud storage buckets containing operational data such as videos and verification workflows. LiteLLM is downloaded millions of times daily, and the compromised versions were pulled tens of thousands of times, making the blast radius across the AI ecosystem extremely large.
“LAPSUS$ Group is allegedly selling a massive dataset of http://Mercor.com, an AI recruiting platform with $500M+ revenue, is being auctioned on a popular cybercrime forum, TG, and their website,” @DarkWebInformer wrote on X.
The LiteLLM compromise is part of a broader campaign tied to earlier attacks on tools such as Trivy and KICS, indicating a coordinated effort to poison widely trusted developer tools. Attackers reused stolen credentials across platforms, showing how one compromised tool can create a domino effect across the software supply chain.
Because LiteLLM sits in the execution path between applications and model providers, it acts as a gateway to APIs, tools and data flows. When compromised, it becomes a centralized point through which attackers can access a wide range of interconnected systems.
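The gateway pattern described above can be reduced to a few lines: every request, and every provider credential, passes through one dispatch function. This is an illustrative sketch of the pattern, not LiteLLM's actual code; the provider names and backend functions are hypothetical stand-ins.

```python
from typing import Callable, Dict


# Hypothetical backends; a real gateway wraps the HTTP clients and API
# keys for 100+ providers behind one interface.
def call_openai(prompt: str) -> str:
    return "openai:" + prompt


def call_anthropic(prompt: str) -> str:
    return "anthropic:" + prompt


PROVIDERS: Dict[str, Callable[[str], str]] = {
    "openai": call_openai,
    "anthropic": call_anthropic,
}


def route(model: str, prompt: str) -> str:
    """Dispatch a 'provider/model' request to the matching backend.

    Because every call funnels through this single choke point, code
    injected here can observe or exfiltrate all traffic and credentials,
    which is what makes a compromised gateway so valuable to attackers."""
    provider, _, _model_name = model.partition("/")
    return PROVIDERS[provider](prompt)
```

The convenience that makes such gateways popular, one call surface for every provider, is the same property that concentrates risk when the gateway itself is malicious.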
Researchers estimate that more than 1,000 SaaS environments and potentially hundreds of thousands of machines have been affected by related supply chain attacks. Experts warn that the number of affected organizations could grow significantly as investigations continue and additional victims come forward.
