Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
MCP Server Paused for Days After Bug Risked Data Leakage Between Users

Asana patched a vulnerability in an artificial intelligence integration feature that could have allowed users to view data from other organizations. It paused its Asana Model Context Protocol server for nearly two weeks to apply the fix.
The work management company discovered the flaw in its implementation of MCP, an open-source framework that enables AI systems to interact with external data sources such as messaging platforms and enterprise applications. It disabled its MCP server between June 5 and June 17.
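For readers unfamiliar with the protocol, the sketch below shows roughly what a minimal MCP tool server looks like. It assumes the official MCP Python SDK's FastMCP helper; the server name, tool, and project-lookup logic are illustrative placeholders and not Asana's actual integration.

```python
# Minimal, illustrative MCP server sketch (not Asana's implementation).
# Assumes the official MCP Python SDK is installed: pip install mcp
from mcp.server.fastmcp import FastMCP

# Name the server; AI clients discover its tools over the MCP protocol.
mcp = FastMCP("example-project-data")

@mcp.tool()
def get_project_summary(project_id: str) -> str:
    """Return a short summary for the given project ID (placeholder data)."""
    # In a real integration this would call an enterprise API on the
    # authenticated user's behalf; here it just returns canned text.
    return f"Project {project_id}: 3 open tasks, due Friday."

if __name__ == "__main__":
    # Serve over stdio so a local AI client can connect.
    mcp.run()
```

A connected AI assistant can then call get_project_summary in response to a natural-language request, which is the kind of cross-platform querying Asana's integration was built to support.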
AI giant Anthropic introduced MCP last November to support use cases where language models and AI agents interface with structured enterprise information (see: AI Giants Adopt Anthropic's Standard to Connect Apps, Agents).
Asana launched its integration on May 1 in a bid to allow customers to query project data across platforms using natural language and third-party AI apps.
The MCP server flaw could also have "potentially exposed certain information from your Asana domain to other Asana MCP users," according to a message from Asana posted on social media network X. The company did not specify how many customers were affected or whether any data was actually viewed by unauthorized parties.
Asana took the MCP server offline the day after the flaw's June 4 discovery. Asana said Tuesday the feature was restored, though users who had previously enabled the integration would need to reconnect.
“If your organization was using the MCP server and was impacted by this issue, we have already reached out to you directly with important details and next steps,” the company said. “As part of our remediation efforts, we reset all connections to the MCP server.”
Cybersecurity firm UpGuard said enforcing "strict tenant isolation and least-privilege access" to limit how much data an AI system can see or interact with is key to preventing such issues. It also advised organizations to log all LLM-generated queries to support future audits and investigations.
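To make that recommendation concrete, here is a hedged sketch of how a server-side handler might enforce tenant isolation, least-privilege scopes, and query logging. The class and function names are hypothetical and are not drawn from Asana's or UpGuard's code.

```python
import logging
from dataclasses import dataclass

# Audit log for every LLM-generated query, as UpGuard recommends.
audit_log = logging.getLogger("llm_query_audit")
logging.basicConfig(level=logging.INFO)

@dataclass(frozen=True)
class RequestContext:
    tenant_id: str        # organization the caller belongs to
    user_id: str
    scopes: frozenset     # least-privilege permissions granted to the AI client

class TenantIsolationError(Exception):
    pass

def run_llm_query(ctx: RequestContext, resource_tenant_id: str,
                  required_scope: str, query: str) -> str:
    """Execute an LLM-generated query only within the caller's own tenant."""
    # 1. Strict tenant isolation: never serve data from another organization.
    if resource_tenant_id != ctx.tenant_id:
        raise TenantIsolationError("cross-tenant access denied")

    # 2. Least privilege: the AI client must hold the specific scope it needs.
    if required_scope not in ctx.scopes:
        raise PermissionError(f"missing scope: {required_scope}")

    # 3. Log the query so later audits and investigations can reconstruct
    #    exactly what the AI system asked for and on whose behalf.
    audit_log.info("tenant=%s user=%s scope=%s query=%r",
                   ctx.tenant_id, ctx.user_id, required_scope, query)

    # Placeholder for the actual data-layer call, which would itself be
    # filtered by tenant_id at the query level (defense in depth).
    return f"results for tenant {ctx.tenant_id}"
```

Keeping the tenant check in the request path and repeating the tenant filter in the data layer means a single missed check, like the one behind Asana's bug, is less likely to expose one organization's data to another.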