ChatGPT Maker Probes Third-Party Data Breach; OpenAI API Users’ Information Exposed

Artificial intelligence research and development giant OpenAI has paused its use of analytics provider Mixpanel after the vendor reported a data breach that exposed customer profile information. The third-party breach affects developers and organizations that use OpenAI’s API services.
“The incident occurred within Mixpanel’s systems and involved limited analytics data related to some users of the API. Users of ChatGPT and other products were not impacted,” said OpenAI in a Wednesday breach notification.
ChatGPT maker OpenAI used Mixpanel to gather analytics to help it understand how customers interacted with its API tools.
“This was not a breach of OpenAI’s systems. No chat, API requests, API usage data, passwords, credentials, API keys, payment details, or government IDs were compromised or exposed,” OpenAI said.
Mixpanel detected the breach on Nov. 9 and informed OpenAI that it was investigating an attack in which a threat actor “gained unauthorized access to part of their systems and exported a dataset containing limited customer identifiable information and analytics information,” OpenAI said.
The company removed Mixpanel from its production systems during its investigation, reviewed the dataset to determine the scope of exposure, and began directly notifying all affected organizations, administrators and users, it said.
The company said it has found “no evidence of any effect on systems or data outside Mixpanel’s environment” and that it is continuing to watch for signs of any wider breach.
The company didn’t immediately respond to a request for comment about how many users and organizations it’s directly notifying about the breach.
Compromised data includes profile details associated with OpenAI platform accounts, such as names, email addresses, approximate locations, operating systems, browser information and referring websites, as well as organization or user IDs associated with the accounts.
OpenAI said the primary risk to users is social engineering and phishing attacks. “Since names, email addresses, and OpenAI API metadata – e.g., user IDs – were included, we encourage you to remain vigilant for credible-looking phishing attempts or spam,” the company warned customers.
OpenAI said customers don’t need to reset passwords but should treat emails containing suspicious links, attachments or requests for authentication information with extreme caution.
The disclosure follows broader industry scrutiny of third-party vendor security as AI providers scale their infrastructure. AI development pipelines often rely on external analytics, cloud APIs and open-source model components, creating new dependency points that attackers can exploit. A 2025 BitSight report warned that AI services increasingly push sensitive telemetry and model-related data into vendor ecosystems, raising the impact of breaches involving monitoring or analytics partners.
Gartner’s 2025 Hype Cycle for Supply Chain Strategy likewise said that as organizations embed AI deeper into operations, the security of supporting vendors becomes critical to the resilience of the AI stack itself.
The Mixpanel incident highlights how even trusted analytics tools can inadvertently leak sensitive data and thus must be continually monitored, said Mayur Upadhyaya, CEO at APIContext. “In a machine-first world, you can’t fix what you can’t see. Observability must extend across every API, webhook and third-party integration,” he told Information Security Media Group.
