Artificial Intelligence & Machine Learning, Healthcare, HIPAA/HITECH
HHS OCR Letter Also Reminds Entities That AI Tool Use Must Comply with HIPAA
Federal regulators are reminding healthcare providers, insurers and other regulated firms of their duty to ensure that AI and other emerging technologies for clinical decision making and patient support are not used in a discriminatory manner – and comply with HIPAA.
U.S. Department of Health and Human Services’ Office for Civil Rights Director Melanie Fontes Rainer’s letter on Friday “encourages” all regulated entities to review their use of AI and similar emerging tech tools for clinical decision support to ensure compliance with Affordable Care Act regulations issued last year.
Specifically, Section 1557 of the ACA prohibits discrimination on the basis of race, color, national origin, age, sex and disability in health programs or activities that receive federal financial assistance from HHS, as well as in health programs or activities established under the act, such as state-based and federally facilitated health insurance exchanges.
Last May, HHS issued a final rule implementing Section 1557, including a general prohibition of discrimination codified at § 92.210.
While that rule took effect July 5, 2024, the regulation’s “affirmative requirements” will take effect on May 1. Those requirements call for regulated organizations to make reasonable efforts to identify and mitigate risks of discrimination, HHS OCR said in the letter.
“OCR encourages all entities to review their use of such tools to ensure compliance with Section 1557 and to put into place measures to prevent discrimination that will help ensure all patients benefit from technological innovations in clinical decision-making,” Fontes Rainer wrote.
The healthcare sector can take certain steps to address potential discriminatory issues involving their AI tool use, some regulatory experts suggest.
“Those using AI for clinical activities should consider examining the data used to train the AI algorithms to determine whether diverse populations are represented. Regulated entities can also perform audits and testing to determine the presence of any bias,” said regulatory attorney Jordan Cohen of the law firm Akerman.
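One way such an audit can begin, in practice, is with a simple output-disparity check. The sketch below is purely illustrative and uses entirely synthetic data and hypothetical group labels; it compares a model's positive-recommendation rates across patient groups and computes a disparate impact ratio, for which the "four-fifths" rule is a common, if rough, heuristic threshold:

```python
# Illustrative bias audit: compare a model's positive-prediction rates
# across demographic groups. All data here is synthetic; the group
# labels and the 0.8 threshold are illustrative assumptions.

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are a common red flag (the 'four-fifths' rule)."""
    return min(rates.values()) / max(rates.values())

# Synthetic example: model recommendations for two patient groups
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
groups = ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']

rates = selection_rates(preds, groups)   # {'A': 0.6, 'B': 0.4}
ratio = disparate_impact_ratio(rates)    # ~0.67, below the 0.8 heuristic
print(rates, ratio)
```

A check like this only surfaces outcome disparities; it cannot by itself establish whether a model is discriminatory, and real audits would also examine training data representation and clinical context, as Cohen notes.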
But the use of third-party AI tools can make that more complex, he said. “At the present time, many covered entities rely on third-party models that are already at least partially trained. This can make it difficult to examine training data and the algorithms underlying the AI functionality,” he said. “As has been previously reported, AI models such as large language models involve extremely complex statistical models that can make looking ‘inside’ quite difficult, even for the model’s creators.”
In addition to reminding regulated entities about their responsibilities related to non-discriminatory use of AI clinical support tools, Fontes Rainer also stressed that healthcare providers, health plans and healthcare clearinghouses must ensure their compliance with HIPAA rules with respect to their use of AI and other tools.
HHS OCR’s letter on Friday coincided with the release of a broader HHS document spelling out the department’s strategic plan for AI (see: Biden Administration Releases AI Strategic Plan).
The overarching objective of HHS’ 200-page AI strategic plan is “to catalyze a coordinated public-private approach to improving the quality, safety, efficiency, accessibility, equitability and outcomes of health and human services through the innovative, safe and responsible use of AI,” that document said.
HHS’ AI strategic plan also includes a chapter outlining cybersecurity issues involving AI, including how the introduction of AI can widen the threat landscape in healthcare.
But with the Trump administration coming into office on Jan. 20, it’s uncertain how HHS’ new leadership will support the strategic plan or enforce other regulations instituted by the Biden administration, especially those issued in recent weeks and months.
Nonetheless, Cohen said it is important for healthcare organizations to at least consider and plan for compliance with non-discrimination requirements for AI.
“While not all providers currently leverage AI in a manner that implicates non-discrimination rules, it is likely that most, if not all, providers will eventually leverage such tools,” he said. “Compliance is important to reduce legal risk and also to ensure that AI tools are providing accurate and non-biased clinical care in the best interests of their patients.”