Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
Thousands of MCP Servers Leave AI Apps Open to Attack

Thousands of Model Context Protocol servers are misconfigured and publicly accessible, creating entry points for attackers to compromise artificial intelligence applications, researchers have found.
More than 15,000 MCP servers are already deployed globally, despite the protocol only emerging in November, found researchers at Backslash Security. MCP servers connect AI models to data beyond their original training sets, often sensitive information stored on organization servers. As AI use has surged, so has MCP adoption.
Of the roughly 15,000 MCP servers identified, researchers found around 7,000 exposed to the public internet. Some companies intentionally make servers publicly accessible to share non-sensitive information, but most MCP deployments are expected to stay behind authentication controls. In many cases, these safeguards were missing.
A subset of these exposed servers presented severe risk: hundreds accepted unauthenticated connections from any device on the same local network, a scenario researchers call "neighborjacking." By itself, such access is not necessarily catastrophic, but in combination with deeper flaws, it can lead to major compromises.
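The report does not include code, but the misconfiguration behind neighborjacking can be sketched as an MCP-style HTTP endpoint that performs no client checks, combined with a bind address that exposes it to the whole local network. The handler and function names below are illustrative, not taken from the Backslash research.

```python
# Illustrative sketch of the "neighborjacking" misconfiguration: an
# MCP-style HTTP endpoint that implicitly trusts every client, where the
# only thing limiting exposure is the address it binds to.
from http.server import BaseHTTPRequestHandler, HTTPServer


class McpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # No token, origin, or identity check: any peer that can reach
        # this port is trusted.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)  # echo the request back, for demonstration

    def log_message(self, *args):
        pass  # keep the demo quiet


def make_server(host: str, port: int = 8080) -> HTTPServer:
    return HTTPServer((host, port), McpHandler)


# VULNERABLE: "" binds every interface, so any device on the same local
# network can connect -- the neighborjacking scenario.
#   make_server("")
# Safer default: loopback only, reachable just from the local machine.
#   make_server("127.0.0.1")
```

With no authentication in the handler, the bind address is the entire security boundary, which is why a server accidentally bound to all interfaces becomes reachable by every LAN neighbor.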
Among the 7,000 exposed servers, about 70 contained critical vulnerabilities. These included path traversal flaws and failures to sanitize user input. One example showed an MCP server accepting any incoming input and executing it as a shell command, granting attackers the ability to run arbitrary code on the host system.
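The shell-execution flaw described above can be illustrated with a pair of hypothetical tool handlers: one that hands caller input straight to a shell, and one that validates the input and passes it as data. The function names and the `ping` command are assumptions for illustration, not details from the report.

```python
# Sketch of the input-handling flaw: caller-supplied text executed via a
# shell versus treated as a validated argument. Names are illustrative.
import subprocess


def run_tool_unsafe(user_input: str) -> str:
    # VULNERABLE: shell=True executes whatever the caller sends, so an
    # input like "example.com; rm -rf /" runs both commands.
    return subprocess.run(
        user_input, shell=True, capture_output=True, text=True
    ).stdout


def run_tool_safe(hostname: str) -> str:
    # Safer: validate against a character allowlist, fix the command, and
    # pass arguments as a list so no shell ever interprets the input.
    if not hostname or not all(c.isalnum() or c in ".-" for c in hostname):
        raise ValueError("invalid hostname")
    return subprocess.run(
        ["ping", "-c", "1", hostname], capture_output=True, text=True
    ).stdout
```

The safe variant rejects shell metacharacters outright and never invokes a shell, which closes the arbitrary-code-execution path the researchers observed.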
"Our analysis did not yield obviously malicious MCPs, but we did find a startling number of dangerously misconfigured or carelessly built servers," researchers said. When neighborjacking coincides with insecure input handling, attackers can escalate to full system takeover. Intruders could delete data, run their own code, or take control of a system.
In addition to direct compromises, MCP servers can be used for context poisoning, manipulating the data supplied to large language models to skew their outputs. Backslash researchers encountered one company using an MCP that served tens of thousands of users but had not implemented defenses against such tampering.
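One possible defense against context poisoning is to admit documents into a model's context only from allowlisted origins, and only after an integrity check. The scheme below, with its origin set and HMAC tag, is an assumption sketched for illustration; it is not a mechanism described by Backslash.

```python
# Hypothetical context-admission gate: a document reaches the model only
# if it comes from a trusted origin and carries a valid integrity tag.
import hashlib
import hmac

TRUSTED_ORIGINS = {"docs.internal.example"}  # hypothetical allowlist
SHARED_KEY = b"rotate-me"                    # placeholder secret


def admit_context(origin: str, payload: bytes, tag: str) -> bool:
    # Reject unknown origins outright.
    if origin not in TRUSTED_ORIGINS:
        return False
    # Verify the payload was not altered in transit (constant-time compare).
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Gating context this way means an attacker who can reach the server still cannot slip tampered documents into the data stream feeding the model.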
The technology behind MCP is new enough that security practices are underdeveloped, researchers said, attributing many of the misconfigurations to teams moving quickly without fully understanding the security implications.
For organizations already relying on MCP servers, Backslash recommends steps to reduce risk, including scanning for insecure IDE plugins and misconfigured AI rules, and ensuring that only approved language models are connected. It also recommends implementing strict API access controls and verifying the origin of any data supplied to models to help prevent data poisoning.
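The "strict API access controls" recommendation can be sketched as a request gate that requires a valid bearer token before any tool call is dispatched. The token store and header handling below are illustrative assumptions, not a specific mechanism from the report.

```python
# Hypothetical access-control gate for an MCP endpoint: reject any
# request that lacks a known bearer token.
import secrets

VALID_TOKENS = {"example-token-please-rotate"}  # hypothetical token store


def authorize(headers: dict) -> bool:
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison against each known token to avoid timing
    # side channels.
    return any(secrets.compare_digest(token, t) for t in VALID_TOKENS)
```

Placing a check like this in front of every tool invocation is what separates an intentionally public server from the unauthenticated deployments the researchers found.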