3 Major Tech Firms Shipped Vulnerable Open-Source Tools to Hugging Face

Vulnerabilities in three artificial intelligence libraries could allow attackers to execute malicious code by loading a compromised model file. The flaws affect open-source tools created by Apple, Salesforce and Nvidia that power models collectively downloaded tens of millions of times on Hugging Face.
Palo Alto Networks identified security issues in NeMo, Uni2TS and FlexTok, three Python libraries designed for AI research. The vulnerabilities let attackers embed code in a model's metadata that executes automatically when the model loads.
The three libraries all use Hydra, a configuration tool maintained by Meta that’s popular in machine learning projects. Each library calls Hydra’s instantiate function to load settings from model metadata without properly checking the input first.
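As a rough illustration of the pattern, the resolution step can be mimicked with a simplified stand-in for Hydra's instantiate (this is not Hydra's actual code): the dotted path in a config's _target_ key is resolved to a callable, which is then invoked with the remaining keys.

```python
import importlib

def instantiate_like(cfg: dict):
    """Simplified stand-in for hydra.utils.instantiate: resolve the
    dotted path in _target_ to a callable, then call it with the
    remaining keys as keyword arguments."""
    cfg = dict(cfg)  # don't mutate the caller's config
    path = cfg.pop("_target_")
    module_name, _, attr = path.rpartition(".")
    target = getattr(importlib.import_module(module_name), attr)
    return target(**cfg)

# Benign use: metadata names a class the library expects.
frac = instantiate_like({"_target_": "fractions.Fraction",
                         "numerator": 1, "denominator": 3})
print(frac)  # 1/3
```

The convenience is also the hazard: nothing in the dotted-path lookup distinguishes a model class from any other importable callable.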
Palo Alto Networks notified the affected vendors in April 2025, giving them time to fix the issues before going public, and found no evidence of attackers exploiting the vulnerabilities as of last month. The flawed pattern appears to have existed since at least 2020.
Nvidia tracks the flaw as CVE-2025-23304 for its NeMo library and rated it high severity. The company released a fix in NeMo version 2.3.2. Salesforce tracks the flaw as CVE-2026-22584 for its Uni2TS library, also rated high severity, and deployed a patch on July 31, 2025. The researchers behind FlexTok updated the code last June.
The vulnerabilities lie in an oversight in how the libraries use Hydra’s instantiate function. Maintainers intended to use the function only for creating instances of their own classes, but they overlooked that instantiate accepts any callable function, not just class names.
An attacker can exploit this by substituting a dangerous callable, such as the built-in eval or the standard library's os.system, for the expected class name. In proof-of-concept tests, researchers used builtins.exec to achieve remote code execution.
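A hypothetical payload of the kind described would need nothing more than a reshaped config. Hydra's instantiate also accepts an _args_ key for positional arguments, so swapping the class name for builtins.exec turns config loading into code execution; the fragment below is shown inert, with an illustrative payload string.

```python
# Hypothetical malicious metadata: _target_ points at a built-in
# instead of a model class, and _args_ carries the attacker's code.
# (Never passed to instantiate here; shown for illustration only.)
payload = {
    "_target_": "builtins.exec",
    "_args_": ["import os; os.system('id')"],  # attacker-chosen code
}
```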
Meta has since updated Hydra's documentation to warn that remote code execution is possible when using instantiate, and added a block-list mechanism that checks target values against known-dangerous functions before execution. The protection can be bypassed through indirect imports, however, and as of this month it has not shipped in a Hydra release.
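Why a string-matching block-list falls short can be sketched in a few lines. The list contents below are illustrative, not Hydra's actual mechanism: because shutil imports os internally, the same dangerous function is reachable under a dotted path the list never mentions.

```python
import os
import shutil

# Illustrative block-list of dangerous dotted paths (not Hydra's real list).
BLOCKED = {"os.system", "builtins.eval", "builtins.exec"}

def is_blocked(target: str) -> bool:
    return target in BLOCKED

# shutil imports os at module level, so the identical function is
# reachable via a different dotted path that the list never names.
assert shutil.os.system is os.system
assert not is_blocked("shutil.os.system")  # indirect import slips through
```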
Nvidia began developing NeMo in 2019 as a generative AI framework. The library uses a custom file format: a TAR archive containing a metadata file alongside the model weights. When NeMo loads these files, it passes the metadata directly to Hydra's instantiate function without checking it first.
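The archive layout can be mimicked with the standard library alone; the file names below are illustrative rather than NeMo's exact manifest, but they show why the metadata file is the interesting part for an attacker.

```python
import io
import tarfile

# Build an in-memory archive with the described layout: a metadata file
# alongside model weights (file names illustrative).
config_yaml = b"_target_: some.model.Class\n"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in [("model_config.yaml", config_yaml),
                       ("model_weights.ckpt", b"\x00" * 16)]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# A loader that trusts the archive hands this attacker-controlled text
# straight to the configuration machinery.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    metadata = tar.extractfile("model_config.yaml").read()
```

Anyone who can author the archive controls the metadata, which is exactly the text a vulnerable loader forwards to instantiate.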
More than 700 models on Hugging Face use the NeMo format. Many rank among the platform’s most popular offerings, including Nvidia’s parakeet model. Nvidia fixed the issue by adding a function that validates configurations before execution. The function checks values against an allow list of approved packages from NeMo, PyTorch and related libraries. It also verifies that imports match expected classes and modules.
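A minimal sketch in the spirit of that fix follows. It is simplified and the prefixes are assumptions for illustration; per the description above, the actual NeMo check also verifies that imports match expected classes and modules.

```python
# Illustrative allow-list of approved package prefixes (assumed names,
# not Nvidia's exact list).
ALLOWED_PREFIXES = ("nemo.", "torch.", "pytorch_lightning.")

def validate_target(target: str) -> None:
    """Reject any _target_ outside the approved packages before
    anything is imported or called."""
    if not target.startswith(ALLOWED_PREFIXES):
        raise ValueError(f"refusing to instantiate {target!r}")

validate_target("nemo.collections.asr.models.SomeModel")  # accepted
try:
    validate_target("builtins.exec")
except ValueError as err:
    print(err)  # rejected before anything executes
```

Checking the string before resolution means the dangerous callable is never imported, unlike a block-list applied after the fact.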
Before Palo Alto's findings, nothing indicated these libraries were insecure. The research team identified more than 100 different Python libraries used by models on Hugging Face in October 2025, with almost 50 using Hydra. Each format may be secure on its own, but the code that consumes them presents a large attack surface.
Developers commonly publish variations of popular models with different configurations, and these often come from researchers unaffiliated with well-known institutions. An attacker need only clone an existing popular model, advertise some claimed improvement and add malicious metadata.
