API Security
,
Artificial Intelligence & Machine Learning
,
Cybercrime
High-Severity Flaw in LangChain’s AI Tooling Hub Now Patched

A high-severity vulnerability in an open-source framework that helps developers build artificial intelligence-powered applications could enable hackers to siphon sensitive data, cybersecurity researchers said.
See Also: On Demand | Global Incident Response Report 2025
A flaw in the LangSmith platform allowed attackers to embed malicious proxy configurations into public AI agents listed on its Prompt Hub, which is a shared library designed for developers to reuse prompts, agents and models. Noma Security researchers said that when users adopt these proxies, hackers can intercept victims’ sensitive data, including OpenAI API keys, prompts, uploaded documents, images and voice inputs.
“This newly identified vulnerability exploited unsuspecting users who adopt an agent containing a pre-configured malicious proxy server uploaded to Prompt Hub,” Noma Security researchers Sasi Levi and Gal Moyal said. The bug, which researchers christened AgentSmith, has a CVSS score of 8.8. LangSmith published a patch.
LangSmith is part of the broader LangChain ecosystem and gives developers visibility and control when creating and testing large language model-based applications. Platforms like LangSmith are regularly used for rapid prototyping and testing.
“Software repositories, such as Prompt Hub, will continue to be a target for backdoored or malicious software,” said Thomas Richards, infrastructure security practice director at Black Duck. “Until these stores can implement an approval and vetting process, there will continue to be the potential that software uploaded is malicious.” Richards told Information Security Media Group that users potentially affected by the attack should rotate their API keys and scrutinize logs for any suspicious behavior.
The attack vector exploited in this case – using a benign-looking AI agent as a delivery mechanism for a malicious proxy – potentially has implications beyond a single platform.
Eric Schwake, director of cybersecurity strategy at Salt Security, said the incident was a supply chain vulnerability embedded within the AI development lifecycle. “Malicious AI agents equipped with pre-configured proxies can secretly intercept user communications,” he said. This poses potentially serious risks to organizations, including unauthorized API access, model theft, leakage of system prompts and billing overruns, especially if the agent is reused in an enterprise.