Playbook Aims to Help Healthcare, Public Sector Manage AI Vendor Security Gaps

A deluge of artificial intelligence embedded into software and devices means a new horizon of cyber risks for the healthcare sector. The Health Sector Coordinating Council on Wednesday released guidance to help the sector better manage AI third-party risk concerns.
From AI-enabled remote patient monitoring to electronic health record systems containing natural language processing engines, the healthcare sector now finds itself relying on critical functions powered by AI tools. But third-party security, data governance practices and model integrity are difficult to verify, the organization said.
Risk is compounded by complex, layered healthcare supply chains that include subcontractors, offshore developers and open-source technology.
The 109-page guide provides advice on managing AI supply chain-related risks, said Samantha Jacques, vice president of clinical engineering at McLaren Health and vice chair of the HSCC cybersecurity working group that produced the document.
“This guide is useful for all size organizations and they can use the pieces and parts that work for their organizational processes, or adopt the entire process as a whole,” said Jacques, a co-lead in the development of the guidance. “Each organization is at a different place for AI adoption and this guide is meant to enhance them, wherever they are in their journey.”
The vast majority of healthcare organizations have partnered with a third party to design and implement AI solutions, said Rob Suarez, vice president and CISO at insurer CareFirst BlueCross BlueShield. "While we move at light speed, these are in fact still early days of AI," said Suarez, a contributor to the new guide.
"Healthcare organizations require clear answers to what third-party suppliers of AI are and are not doing, and how we can collectively protect patients, their health and financial wellbeing," he said. "We can't protect what we don't know." The healthcare industry must understand and clarify with AI vendors whether patient health information is used and how the risks are managed, he said.
The guide draws from established frameworks, including the National Institute of Standards and Technology’s AI Risk Management Framework and the voluntary Health Industry Cybersecurity Practices established by HSCC and the U.S. Department of Health and Human Services, Jacques said.
The playbook offers tactical guidance on governance, risk and compliance practices that CISOs and security teams can implement, addressing related issues involving AI technology, such as patching and legacy product concerns, she said.
Compliance teams can use the guidance for recommended AI tips for business associate agreements; legal and supply chain teams can tap the guide's AI contract terms; and AI governance leadership teams within organizations can use the training curriculum to help train users, she said. The guide is intended to be useful across the entire AI supply chain process, from procurement through implementation and de-installation at end of life, she said.
In addition to the AI supply chain guidance, HSCC also published a companion glossary of AI cyber terminology and definitions for healthcare clinical, operational, compliance, and technical stakeholders.
"Healthcare organizations benefit from clarity on the terminology we use in the world of AI and what that means in the context of these other healthcare-oriented AI resources," Suarez said.
