Growing Third-Party Breach Trend Is Spreading to AI Suppliers

When reports of Korean Air losing sensitive data on tens of thousands of employees surfaced, the incident was initially seen as a routine data breach.
But reports soon indicated the exposure stemmed from a supply chain attack on a catering vendor responsible for in-flight meals and duty-free retail operations. The vendor was running Oracle E-Business Suite, which contained a critical-severity vulnerability tracked as CVE-2025-61882. The flaw was discovered in early October 2025, after several enterprises reportedly received emails from attackers claiming to have already exploited it to gain access and steal data.
It wasn’t a failure of Korean Air’s core IT environment – the breach came from a trusted upstream system. This distinction matters because it mirrors how supply chain risks from third-party software – and now artificial intelligence platforms – are manifesting in large enterprises, including critical infrastructure sectors such as airlines.
From Software Dependencies to Intelligence Dependencies
Technology supply chains have never been foolproof, but they were at least manageable. Typically, IT organizations list vendors, map dependencies, negotiate contractual controls and apply third-party risk management frameworks. When breaches occurred, the blast radius was usually bounded by relatively static software relationships.
Fast forward to the AI era, and that operating model is being dismantled. Modern AI environments are built on dynamic external foundational models, countless APIs, open-source components, continuous data pipelines across internal and external systems, and default AI capabilities. These dependencies are not merely technical. They shape how decisions are made, automated and scaled across the organization.
The Korean Air breach is a perfect illustration of what happens when operational reliance outpaces visibility. The catering vendor was deeply embedded in airline operations. When Oracle E-Business Suite failed upstream, the risk flowed downstream. AI can introduce the same dependency structure, but with far higher velocity. Enterprises are no longer consuming only software from suppliers. They are also importing intelligence, decision logic and reasoning into their core workflows. It’s no exaggeration to call it an intelligence supply chain, one that behaves very differently under stress than the software supply chain.
Why AI Supply Chain Risk Is Harder to Contain
AI supply chains differ from traditional software supply chains because they are dynamic. Data flows continuously rather than episodically. APIs abstract complexity while obscuring provenance. More often than not, AI features are introduced implicitly within SaaS platforms without much architectural review.
Enterprise visibility remains minimal. A majority of organizations lack reliable methods to determine where large language models are running within their environments, which models are invoked by which applications, and what data those models consume. Left unaddressed, this becomes a governance nightmare.
Security experts describe the current state of AI model lineage and dependency tracking as the “Wild West.” Traditional constructs, such as the software bill of materials, weren’t designed for continuously evolving models and probabilistic systems. Without thorough knowledge of the models, APIs and datasets in play, risk assessment is guesswork.
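To make the gap concrete, here is a minimal sketch of what an AI-flavored bill of materials might record per application. The record fields, component names and the inventory itself are illustrative assumptions, not an existing standard:

```python
from dataclasses import dataclass, field

# Illustrative "AI bill of materials" record. The fields and names here are
# assumptions for the sketch, not an established schema like SPDX or CycloneDX.
@dataclass
class AIBomEntry:
    component: str      # model, dataset or API the application depends on
    kind: str           # "model" | "dataset" | "api"
    supplier: str       # who provides it: vendor, open-source project, internal team
    version: str        # pinned version or checkpoint identifier, if one exists
    data_touched: list[str] = field(default_factory=list)  # data categories consumed

# A hypothetical inventory for a single application.
inventory = [
    AIBomEntry("gpt-style-llm", "model", "ExternalVendorA",
               "2025-06-checkpoint", ["customer_support_tickets"]),
    AIBomEntry("embedding-service", "api", "ExternalVendorB",
               "v2", ["employee_records"]),
]

def components_touching(data_category: str) -> list[str]:
    """Answer the basic visibility question: which dependencies see this data?"""
    return [e.component for e in inventory if data_category in e.data_touched]

print(components_touching("employee_records"))  # -> ['embedding-service']
```

Even a flat inventory like this turns “everything is a guess” into a queryable record, which is the precondition for any lineage tracking.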
Gartner: AI Strategy Lags Behind Adoption
Against this backdrop, Gartner found that only 23% of organizations have a formal AI strategy. The absence of a well-defined strategy can signal organizational caution or early-stage maturity, but either way, the Gartner data indicates AI adoption is outpacing AI governance.

Most enterprises, if not all, are adopting AI incrementally and often invisibly. Their approach is to initiate pilots for specific use cases, embed AI features in SaaS platforms and enable developer-driven integrations. A few may also experiment rapidly with external APIs. What’s missing is an architectural and governance framework that treats AI as a supply chain rather than just a tool. The result is data pipeline sprawl without clear ownership. Responsibility for AI risk is diffuse, spread across IT, security, data teams and even business stakeholders. The Korean Air breach shows how these dynamics play out in conventional software ecosystems; AI supply chain failures are likely to have far greater systemic impact.
Are Data Pipelines the Least Governed Asset?
The data pipeline is the single most significant AI supply chain risk. AI systems derive their value from the data they ingest for training, inference and continuous learning. Yet many organizations still treat data as a static asset.
These data pipelines span internal operational systems, partner systems, third-party APIs, and even external AI services, and they may pose security, privacy and trust risks. A compromised data pipeline doesn’t merely cause a breach. It also alters decision outcomes, biases automated actions and undermines trust in downstream systems.
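A first governance step is simply asking, for every pipeline stage, who owns it and what data it moves. A minimal sketch, using a hypothetical pipeline manifest whose stage names and fields are all illustrative:

```python
# Hypothetical pipeline manifest spanning internal, partner and external AI systems.
pipeline = [
    {"stage": "ingest_partner_feed", "owner": "data-eng",
     "source": "partner_api", "pii": True},
    {"stage": "enrich_with_llm", "owner": None,          # no named owner
     "source": "external_model_api", "pii": True},
    {"stage": "publish_dashboard", "owner": "analytics",
     "source": "internal_warehouse", "pii": False},
]

def ungoverned_stages(stages):
    """Flag stages that move sensitive data but have no named owner --
    the 'least governed asset' problem in miniature."""
    return [s["stage"] for s in stages if s["pii"] and not s["owner"]]

print(ungoverned_stages(pipeline))  # -> ['enrich_with_llm']
```

The check is trivial by design: most organizations cannot run even this query today, because no such manifest exists.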
Real-world AI-related security incidents have already occurred in environments that lack even basic AI-specific access controls. Supply chain failures are no longer theoretical. They are operational failures that often go undetected.
Why CIO Accountability Is Inevitably Expanding
Signals such as supply chain breaches, limited AI visibility and the absence of a formal strategy are compelling enough to redefine the CIO’s role. Mere assurances about innovation velocity or isolated pilot successes no longer convince boards. They are asking more fundamental questions about exposure, dependencies and resilience.
Board members are now asking risk-related questions such as:
- Which external AI models and services are embedded in core operations?
- What kind of data is consumed, and where does it flow?
- How is AI usage governed across applications, APIs and development teams?
- What would the organization’s exposure be if an AI supplier were compromised, constrained or regulated?
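The last of those questions can be answered only if supplier dependencies are mapped per workflow. A minimal sketch of such a lookup, with a wholly hypothetical dependency map:

```python
# Hypothetical map of business workflows to the AI suppliers they rely on.
dependencies = {
    "inflight-catering-orders": ["ErpVendor"],
    "customer-chat": ["LlmVendorA", "EmbeddingVendorB"],
    "fraud-scoring": ["LlmVendorA"],
}

def exposure_if_compromised(supplier: str) -> list[str]:
    """Which workflows are exposed if this supplier is compromised,
    constrained or regulated?"""
    return sorted(w for w, deps in dependencies.items() if supplier in deps)

print(exposure_if_compromised("LlmVendorA"))  # -> ['customer-chat', 'fraud-scoring']
```

A shared supplier appearing in multiple workflows is exactly the concentration risk the board question is probing for.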
Since AI spans infrastructure, data and applications, CIOs are uniquely positioned to understand and answer these questions. And CIOs must recognize that AI supply chain resilience cannot be crowdsourced.
Design for Resilience
The Korean Air incident is a cautionary tale for CIOs, even though it didn’t involve an AI supply chain breach. The breach didn’t originate where risk models expected it to. It emerged upstream, from a trusted system, and spread downstream with little resistance.
AI introduces more upstream dependencies than any previous enterprise technology, and most organizations are unprepared to manage them. The question for CIOs, therefore, is not whether AI supply chains will exist, but whether they will be properly designed, governed and monitored.
