Agentic AI
,
Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development
Backdoored NPM Module Sent Sensitive Mail Copies to Threat Actor

A very patient hacker hooked victims by building a reliable tool integrated into hundreds of developer workflows that connects artificial intelligence agents with an email platform. The unidentified software engineer published 15 “flawless” versions until he slipped in code copying users’ emails to his personal server, say researchers from Koi.
See Also: AI Agents Demand Scalable Identity Security Frameworks
Version 16 of the package introduced a hidden BCC instruction at line 231 of the code, sending copies of all emails to phan@giftshop.club
.
The incident is an extreme version of a rash of supply chain hacks occurring on the npm JavaScript runtime environment code repository in which hackers upload malicious but legitimate-seeming packages in the hopes that victims fold them into their own applications (see: Shai Hulud Burrows Into NPM Repository).
“Maybe the developer hit financial troubles. Maybe someone slid into his DMs with an offer he couldn’t refuse. Hell, maybe he just woke up one day and thought ‘I wonder if I could get away with this,'” wrote Koi’s Idan Dardikman.
The package, postmark-mcp
, is likely the first publicly documented instance of a poisoned model context protocol server. MCP servers connect AI applications such as ChatGPT or Claude with external services and databases. In this case, postmark-mcp
enabled AI assistants to send transactional emails through Postmark, including password resets, account confirmations and billing notices. These servers often run with broad permissions, so a single modification can create systemic risk.
Koi estimated that around 1,500 organizations downloaded the package. Even if only a fifth of them deployed it, about 300 organizations may still be routing sensitive emails to the attacker, and depending on usage, the number of compromised emails could range into the thousands daily. “I’m talking password resets, invoices, internal memos, confidential documents – everything,” Dardikman wrote.
“We literally handed him the keys, said, ‘Here, run this code with full permissions,’ and let our AI assistants use it hundreds of times a day. We did this to ourselves,” Dardikman said of the developer, who he only identified as hailing from Paris and having a history of using his real name on a GitHub profile filled with legitimate projects.
The hacker took legitimate code from his GitHub repository, added the BCC line and published the package to the npm repository using the same name as the clean version. The package has been removed from npm, but systems that installed it remain at risk.
Thousands of organizations use MCP servers to streamline AI integrations, often without the same vetting applied to other enterprise software. Researchers at Tenable and JFrog also recently reported critical flaws in MCP components that could allow remote code execution on developer machines.
“Stay paranoid. With MCPs, paranoia is just good sense,” Dardikman said.