Companies Commit to Risk Management, Making Care More Affordable
More than two dozen healthcare organizations on Thursday signed a White House pledge committing them to responsible deployment of artificial intelligence in a bid to improve health outcomes for Americans while protecting their security and shielding patients against bias.
The voluntary commitment from 28 healthcare providers is a “critical step” in the Biden administration’s effort to harness the technology’s promise of advancing healthcare while guarding against its potential perils, the White House said.
“The administration is pulling every lever it has to advance responsible AI in health-related fields,” a White House official said.
The healthcare pledge builds on earlier commitments from leading AI and technology companies to develop models responsibly, invest in AI model cybersecurity, red-team against misuse and national security concerns, and accept vulnerability reports from third parties.
The Department of Health and Human Services is already in the process of developing frameworks, policies and potential regulatory actions to responsibly deploy AI as mandated by Biden’s October executive order on AI (see: Biden’s Executive Order on AI: What’s in It for Healthcare?).
Without proper testing, risk mitigations and human oversight, AI-enabled tools used for clinical decisions “can make errors that are costly at best – and dangerous at worst,” the White House said.
Market research predicts the healthcare generative AI market will be worth more than $30 billion annually in the next decade.
Widespread adoption of AI in healthcare settings may still face an uphill battle due to worries about computer-generated mistakes: 6 in 10 Americans say they would feel uncomfortable if their own healthcare provider relied on AI to diagnose diseases or recommend treatments.
As part of Thursday’s voluntary pledge, healthcare companies said they will inform users when they receive content that is largely generated by AI and has not been reviewed or edited by humans. When using applications powered by foundation models, they said, they will comply with a risk management framework to help monitor and address the apps’ potential harms.
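That disclosure requirement implies tracking provenance alongside model output. Below is a minimal Python sketch of one way an application might carry those flags through to the user; the `GeneratedContent` class, its field names and the notice wording are illustrative assumptions, not anything specified in the pledge.

```python
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    """Model output plus the provenance flags the disclosure pledge implies."""
    text: str
    ai_generated: bool = True     # was this text produced by a model?
    human_reviewed: bool = False  # has a clinician reviewed or edited it?

def render_with_disclosure(content: GeneratedContent) -> str:
    """Prepend a user-facing notice when output is AI-generated and unreviewed."""
    if content.ai_generated and not content.human_reviewed:
        notice = "[This content was generated by AI and has not been reviewed by a clinician.]\n"
        return notice + content.text
    return content.text

draft = GeneratedContent(text="Your lab results suggest scheduling a follow-up visit.")
print(render_with_disclosure(draft))
```

In a real system, the same flags could also feed the pledged risk management framework, which would log unreviewed AI output for monitoring rather than merely labeling it.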
The companies – which include pharmacy and wellness chain CVS Health, insurer Premera Blue Cross and hospital chain Allina Health – said they will also use AI to improve health outcomes and make access to care more affordable and equitable as well as reduce clinician burnout.
The list of pledgees doesn’t include Google, which only yesterday touted a set of healthcare-specific AI models meant to summarize doctor-patient conversations or clinical documents and automate claims processing. Google’s efforts have already attracted official attention: Sen. Mark Warner in August called on Google to increase transparency, protect patient privacy and ensure ethical guardrails, following reports of inaccuracies.
“I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” the Virginia Democrat wrote.
AI is only as good as its training data: If a model is not trained on data that accurately represents the population it is meant to treat, its diagnoses can discriminate by gender and race. A 2019 study found that an algorithm deployed in hospitals to identify high-risk patients required Black patients to be far sicker than their white counterparts before flagging them as high-risk. The study authors attributed the bias to historical data showing that the healthcare industry spends less money on Black patients, leading the algorithm to falsely conclude that Black patients are healthier.
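The mechanism behind that finding is simple enough to demonstrate. The following Python sketch uses entirely invented numbers and group labels; it shows how training a risk flag on spending, a proxy for health need, penalizes a group that historically received less care even when both groups are equally sick.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical cohort: two groups drawn from the *same* illness-severity
# distribution (0-10 scale). Group names and all figures are invented.
severity = rng.uniform(0, 10, size=n)
group = rng.choice(["A", "B"], size=n)

# Spending tracks severity, but group B historically receives less care,
# so its spending is systematically lower at the same severity.
cost_per_unit = np.where(group == "A", 1000.0, 700.0)
spending = severity * cost_per_unit + rng.normal(0, 500, size=n)

# A proxy-label "algorithm": flag the top 25% of spenders as high-risk.
threshold = np.quantile(spending, 0.75)
flagged = spending >= threshold

# Equal sickness in, unequal flags out: group B is flagged less often,
# and the group B patients who are flagged are sicker on average.
for g in ("A", "B"):
    in_group = group == g
    rate = flagged[in_group].mean()
    mean_sev = severity[flagged & in_group].mean()
    print(f"group {g}: flagged {rate:.0%}, mean severity when flagged {mean_sev:.2f}")
```

Running the sketch flags group A at roughly three times the rate of group B, and the flagged group B patients are noticeably sicker on average, mirroring the pattern the 2019 study reported.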
A recent survey by the American Hospital Association shows that healthcare executives believe it is more likely than not that by 2028 a federal regulatory body will determine that AI for assisted diagnosis and personalized care is safe for use in medical settings.
Clinical decision tools are among AI’s most promising applications in healthcare, the association said, along with diagnostic image scanning, a use case that has already been in practice for years.
Consulting firm McKinsey wrote in July that generative AI, combined with automation and analytics, could eliminate between $200 billion and $360 billion of spending in healthcare.