New Non-Binding Recommendations Target Medical Device Makers, Software Developers
Manufacturers are eager to incorporate artificial intelligence and machine learning technologies into a wide range of medical devices, from cardiac monitors that can spot developing heart problems to medical imaging systems that can find malignancies a radiologist might miss.
The Food and Drug Administration (FDA) has approved more than 1,000 devices that incorporate AI and ML – most within the past five years – and in new draft guidance released Tuesday, the agency emphasized the need to address cybersecurity issues both in pre-market submissions and in the lifecycle management of AI-enabled medical products.
The non-binding FDA draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,” is open for public comment until April 7.
The document covers a wide range of pre-market and lifecycle management considerations that apply to developers and manufacturers of medical devices with one or more AI-enabled device software functions that incorporate machine learning, deep learning and neural networks, as well as other types of AI.
Cybersecurity is an important theme of the 67-page document, which devotes a chapter to cyber issues.
“As with any digital or software component integrated into a medical device, AI can present cybersecurity risks,” the FDA writes.
The FDA’s general recommendations to all medical device makers for designing and maintaining cybersecurity, as well as for providing relevant cyber details to the agency in premarket submissions, were previously laid out in the agency’s 2023 guidance document, “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions” (see: FDA Finalizes Guidance Just as New Device Cyber Regs Kick In).
While the FDA said that broader, earlier guidance is also relevant to AI-enabled devices, the agency’s latest draft document drills down on cyber-related issues and how various cyberthreats can specifically affect AI-enabled products.
Those threats include:
- Data poisoning, in which bad actors deliberately inject inauthentic or maliciously modified data, potentially skewing outcomes in areas such as medical diagnosis (a minimal code sketch follows this list).
- Model inversion and theft, in which threat actors intentionally use forged or altered data to infer details from or replicate models, posing risks to continued model performance as well as to intellectual property and privacy.
- Model evasion, in which hackers intentionally craft or modify input samples to deceive models into incorrect classifications. These attacks threaten the reliability and integrity of model predictions, potentially undermining trust in AI-enabled devices and exposing them to malicious exploitation.
- Data leakage, in which attackers exploit vulnerabilities to access sensitive training or inference data in models.
- Overfitting, in which threat actors deliberately “overfit” a model, exposing its AI components to adversarial attacks as those components struggle to adapt effectively to modified patient data.
- Model bias, in which threat actors manipulate training data to introduce or accentuate biases. Hackers could exploit known biases using adversarial examples, embed backdoors during training to later trigger biased behaviors, or leverage pre-trained models with inherent biases and amplify them with skewed fine-tuning data.
- Model performance drift, in which manipulation changes the underlying data distribution, degrading the model’s performance over time.
Model performance drift and other cyberthreats, the FDA writes, “could slightly shift the input data over time or exploit vulnerabilities in dynamic environments, causing the model to make inaccurate predictions or become more susceptible to adversarial attacks.”
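To make the first of those threats concrete, here is a minimal sketch of data poisoning via label flipping, in which an attacker corrupts a slice of the training labels. The synthetic dataset, the scikit-learn logistic regression model and the poisoning rates are illustrative assumptions, not details drawn from the FDA guidance.

```python
# Illustrative sketch only: data poisoning via label flipping.
# The dataset, model and poisoning rates are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for diagnostic training data (e.g., sensor features).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poisoned_accuracy(flip_fraction: float) -> float:
    """Train on data where an attacker flipped a fraction of the labels."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker's label flips
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} of labels poisoned -> test accuracy {poisoned_accuracy(frac):.3f}")
```

In this toy setup, test accuracy on clean data typically declines as the poisoned fraction grows – the kind of degradation the FDA warns could skew outcomes in areas such as medical diagnosis.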
Critical Considerations
The FDA’s draft guidance raises important issues for AI-enabled device makers and their users, some experts said.
“AI poses a unique set of risks,” said Itamar Golan, co-founder and CEO of Prompt Security and a core member of the Open Web Application Security Project’s Top 10 for large language model-based apps.
As described in the FDA guidance, these risks can range from data poisoning and prompt injection to overfitting, model biases and more, he said. “While these risks apply to almost any industry and organization using AI, in a high-stakes environment like healthcare, the consequences can be devastating,” he said.
“Imagine a medical device using an LLM trained on a specific setup that could trigger it to produce manipulated outputs based on certain inputs,” he said. For example, consider a pacemaker that relies on an LLM, receiving data from both the body and the cloud, he said. “If this LLM were poisoned during training, it could behave maliciously – such as reacting badly to a cloud-delivered string with terms like ‘male,’ ‘Jewish,’ or ‘American.’ This is not theoretical but a real attack scenario.”
The FDA draft guidance also advises makers of AI-driven devices to provide the agency with premarket submission details and develop mitigation and management plans to address those and other cyber-related risks and threats.
It’s critically important for makers of AI-enabled devices to address these and other cyber issues throughout the development and product lifecycle, said Dave Perry, manager of digital workspace operations and digital solutions at St. Joseph Healthcare in Ontario.
“Patient safety and data protection should be the priority of all parties involved in the patient journey, and the addition of AI at this early stage only increases this importance and risk,” he said.
“AI security is still in early stages, and shrinking budgets make this an even more daunting ask for IT departments. Vendors of AI medical devices need to ensure security is a priority from design to market to maintain brand integrity and survival in a market that is quickly heating up,” he said.
Furthermore, “medical devices rarely receive the updates their onboard software needs,” he said. “Even if available, communication of these important updates often goes unnoticed or, due to staffing, is given lower priority. Outdated AI models that are susceptible to prompt injection, for example, and are never updated present as great a risk to the data they hold as a traditional open port or simple password.”
Looking Ahead
While the FDA’s draft guidance is not binding on device makers, the document explains the agency’s current thinking on submissions for AI-enabled medical devices, said regulatory attorney Betsy Hodge of the law firm Akerman.
“It’s always helpful to understand how regulators approach an issue,” she said. “The draft guidance also reminds device makers – and device deployers – to adopt a comprehensive approach to the product lifecycle to best ensure the safety and effectiveness of the device.”
For example, AI-enabled devices may have different sensitivities to input data during the development phase compared to input data used when the device is deployed in the real world, she said.
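One way deployers can watch for that gap is to compare the distribution of live inputs against the data used in development. The sketch below is a minimal illustration, assuming a hypothetical heart-rate feature and an arbitrary alert threshold, and it uses a standard two-sample Kolmogorov-Smirnov test from SciPy; it is not a method prescribed in the draft guidance.

```python
# Illustrative sketch only: monitoring deployed inputs for distribution shift.
# The heart-rate feature, sample sizes and alert threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Stand-ins: feature values seen during development vs. after deployment.
dev_heart_rate = rng.normal(loc=72, scale=8, size=5_000)
live_heart_rate = rng.normal(loc=78, scale=11, size=500)  # shifted in the field

# Two-sample Kolmogorov-Smirnov test: do both samples share one distribution?
stat, p_value = ks_2samp(dev_heart_rate, live_heart_rate)
if p_value < 0.01:  # assumed alert threshold, not an FDA requirement
    print(f"Possible input drift: KS statistic={stat:.3f}, p={p_value:.2e}")
else:
    print("No significant input drift detected")
```

If such a check flags a shift, a deployer can then investigate whether the cause is benign population change or the kind of deliberate, gradual input manipulation the guidance describes.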
“It is imperative for AI-enabled medical device makers to consider cybersecurity issues in the lifecycle of their products because cybersecurity threats can compromise the safety and/or effectiveness of a device, potentially resulting in harm to patients.”
Regulatory attorney Linda Malek of law firm Crowell & Moring LLP said that while not binding, the FDA’s draft guidance is “a helpful roadmap” to assist medical device makers in protecting their devices from cyberthreats that could affect the safety and utility of their products generally.
“It makes good business sense for companies to consider and implement many of these recommendations from that perspective,” she said.
Still, because of the upcoming change in U.S. presidential administrations, “there will likely be delays in and/or significant modifications to this draft guidance, so it is difficult at this time to anticipate what final guidance might look like,” Malek said. Nonetheless, the draft guidance “is an important step forward in the protection of AI-enabled medical devices,” she said.