Hackers Could Exploit Bug on Replicate to Steal Data, Manipulate AI Models
Attackers could have exploited a now-mitigated critical vulnerability in the Replicate artificial intelligence platform to access private AI models and sensitive data, including proprietary knowledge and personally identifiable information.
Replicate enables companies to share and interact with AI models. They can browse existing models on the hub or upload their own. It also helps customers host private models by providing an inference infrastructure.
Exploiting the vulnerability would have allowed unauthorized access to Replicate customers’ AI prompts and results, and potentially allowed an attacker to alter those responses, Wiz researchers said in a blog post. Tampering with responses manipulates the model’s behavior and compromises the decision-making processes built on it, which undermines the accuracy and reliability of AI-driven outputs. Automated decisions can no longer be trusted, with “far-reaching consequences” for users that depend on the compromised models.
AI models are essentially code, and running untrusted code in shared environments can affect the customer data stored or accessible through all the involved systems.
In Replicate’s case, an attacker could gain remote code execution by creating a malicious container in Cog, the container format Replicate uses to package models. Wiz researchers did exactly that: they built a malicious Cog container and uploaded it to the platform. Because the container ran with root privileges, they used it to execute code on Replicate’s infrastructure, move laterally in the environment and eventually carry out a cross-tenant attack.
The flaw also highlights the difficulty of tenant separation in AI-as-a-service solutions, particularly in environments that run AI models from untrusted sources, the researchers said. The potential impact of a cross-tenant attack on AI systems is “devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers.”
The researchers discovered the vulnerability as part of ongoing research in which Wiz partners with AI-as-a-service providers to test their platforms and discloses the bugs once they are patched. Last month, Wiz researchers found a vulnerability in the Hugging Face AI platform with an impact similar to the one discovered on Replicate.
The Wiz team said the code execution technique reflects a broader pattern: organizations run AI models from untrusted sources even though the models are essentially code and could be malicious. “We used the same technique in our earlier AI security research with Hugging Face and found it was possible to upload a malicious AI model to their managed AI inference service and then facilitate lateral movement within their internal infrastructure,” the researchers said.
There is currently no sure-fire way to validate the authenticity of an AI model or scan it for security threats beyond regular code testing.
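As an added example, not code from the article, Replicate or Wiz: the point that “a model is code” is easiest to see with a pickle-serialized model, the format involved in the Hugging Face research mentioned above. The block below builds two constructed model files (a benign dict of weights and one whose `__reduce__` asks pickle to call a function at load time), then applies the two checks a practitioner can actually run: a static scan of the pickle opcode stream for import/call opcodes, and a restricted loader that refuses any import not on an allowlist. The `_Payload` class, the sample files, the `scan_pickle`/`safe_load` helpers and the empty allowlist are all assumptions made for the demo; the payload calls `os.getenv` rather than `os.system` so the demo stays harmless.

```python
import io
import os
import pickle
import pickletools

# Pickle opcodes that import a name or call a callable while the file is being loaded.
# These are what turns "a model file" into "code that runs at load time".
CALL_OPCODES = {"REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX", "BUILD"}
STRING_OPCODES = {"STRING", "BINSTRING", "SHORT_BINSTRING",
                  "UNICODE", "BINUNICODE", "SHORT_BINUNICODE", "BINUNICODE8"}


def scan_pickle(data: bytes) -> list[str]:
    """Statically walk the opcode stream and report anything that imports or calls.

    An empty list means no code-execution opcodes were seen. This is a heuristic:
    it flags every GLOBAL/REDUCE, including legitimate ones a real framework emits.
    """
    findings = []
    recent_strings: list[str] = []  # STACK_GLOBAL takes module and name from the stack
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in STRING_OPCODES:
            recent_strings = (recent_strings + [str(arg)])[-2:]
        elif opcode.name == "GLOBAL":            # protocol <= 3: "module name" carried inline
            findings.append(f"offset {pos}: GLOBAL {arg.replace(' ', '.')}")
        elif opcode.name == "STACK_GLOBAL":      # protocol >= 4: module and name pushed just before
            findings.append(f"offset {pos}: STACK_GLOBAL {'.'.join(recent_strings)}")
        elif opcode.name in CALL_OPCODES:
            findings.append(f"offset {pos}: {opcode.name}")
    return findings


class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that refuses every import except an explicit allowlist."""

    # Empty for this pure-dict model (assumption); a real loader lists its framework's classes.
    ALLOWED: set[tuple[str, str]] = set()

    def find_class(self, module: str, name: str):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"refused to import {module}.{name} from model file")


def safe_load(data: bytes):
    """Load a pickle while blocking imports, so a __reduce__ payload cannot run."""
    return RestrictedUnpickler(io.BytesIO(data)).load()


def load_is_refused(data: bytes) -> bool:
    """True if the restricted loader rejects the file instead of executing it."""
    try:
        safe_load(data)
    except pickle.UnpicklingError:
        return True
    return False


def scan_verdict(data: bytes) -> str:
    return "suspicious" if scan_pickle(data) else "clean"


# --- constructed sample artifacts (not real Replicate or Hugging Face files) ---

# A benign "model": plain weights, no classes, no callables.
BENIGN_MODEL = pickle.dumps({"layer1.weight": [0.1, -0.2, 0.3], "layer1.bias": [0.0]})


class _Payload:
    """Stands in for a trojaned model object. __reduce__ tells pickle which function
    to call at load time; an attacker would point it at os.system or similar.
    os.getenv is used here (assumption for the demo) so nothing harmful runs."""

    def __reduce__(self):
        return (os.getenv, ("HOME",))


MALICIOUS_MODEL = pickle.dumps(_Payload())


if __name__ == "__main__":
    for label, blob in (("benign", BENIGN_MODEL), ("malicious", MALICIOUS_MODEL)):
        print(label, scan_verdict(blob), scan_pickle(blob))
        print(label, "refused by restricted loader:", load_is_refused(blob))
```

Two caveats follow directly from the article’s point. First, the opcode scan is a heuristic: real framework checkpoints legitimately emit GLOBAL and REDUCE for their own classes, so the allowlist must be curated per framework rather than left empty as in this demo. Second, this only helps for serialized-object formats. For a container format such as Cog, the model’s own code is the artifact, and no static check of the file substitutes for running it as an unprivileged, isolated workload per tenant.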