Workarounds for Advancing AI in Administrative and Clinical Uses
AI holds tremendous promise for both the administrative and clinical sides of healthcare, but obstacles remain. One of the major hurdles involves patient privacy and the sharing of the vast amounts of data needed to effectively tune AI models.
Even if patient data is de-identified, healthcare organizations are often still reluctant to share it for machine learning and AI efforts, said Peter McGarvey, director of the Innovation Center for Biomedical Informatics at Georgetown University, who moderated a panel discussion at an AI forum hosted by the institution and the World Bank on Tuesday.
But the healthcare industry can find workarounds for some of those privacy barriers, some panelists said.
One way of addressing data-sharing concerns is through “federated learning,” said Marius Linguraru, professor and endowed chair of research and innovation at Children’s National Hospital in Washington, D.C.
When AI models are trained, efforts often involve collecting data from several different medical institutions to amass a range of data from diverse populations and communities that each facility serves, he said.
“But sharing data to train the model can have a lot of hurdles,” Linguraru said. “So what federated learning does is instead of getting data from all these hospitals together into a bucket and training and getting the best model, it sends the model to travel to different hospitals,” he said.
“The model learns at each location and extracts some parameters that we think preserve privacy, increase cooperation, reduce bureaucracy and reduce costs,” Linguraru said. “We’re getting better and better. The traveling model goes around and learns from every hospital – and then can retrain and work at the hospital,” he said.
In the federated model, algorithms are trained collaboratively without hospitals exchanging patient data, he said. “So there is a path forward.”
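To make the mechanics concrete, the following is a minimal sketch of federated averaging (FedAvg), one common federated learning algorithm, trained on synthetic data. The three “hospitals,” the logistic-regression model and the hyperparameters are illustrative assumptions, not details from the panel; the point is that only model parameters, never patient records, leave each site.

```python
# Minimal FedAvg sketch on synthetic data. The hospitals, model and
# hyperparameters are hypothetical -- purely illustrative of the idea that
# weights travel between sites while patient data stays put.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a logistic-regression model on one hospital's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # mean log-loss gradient
        w -= lr * grad
    return w

# Three hospitals, each holding its own private dataset (synthetic here).
hospitals = [
    (rng.normal(size=(200, 10)), rng.integers(0, 2, size=200))
    for _ in range(3)
]

global_w = np.zeros(10)
for _ in range(20):  # federated rounds
    # The "traveling" model visits each site; only updated weights return.
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    # The coordinator averages parameters, weighted by local dataset size.
    sizes = np.array([len(y) for _, y in hospitals], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated model weights:", np.round(global_w, 3))
```

In practice, the aggregation step is usually paired with additional protections such as secure aggregation or differential privacy, since raw model updates can still leak information about the underlying training data.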
Nikhil Sahni, a partner in McKinsey and Co.’s healthcare practice, said sometimes medical institutions seeking to implement AI have exaggerated views about the amount and range of data needed to address a specific use case.
Many healthcare entities “are using data as the excuse of why we can’t deploy AI,” he said. “But let’s invert the question into ‘What is the AI-enabled use case you’re about to do?’”
If the use case is aimed at “fixing the claims management system, there’s a lot of data you don’t need,” he said.
“Too often, we’re trying to build the perfect dataset to go after everything versus ‘What is the actual use case right in front of us?’” he said. Entities should ask themselves, for instance, “What are the 10 fields I need to do that?”
“In many use cases, we actually have the data in a data lake from many institutions that we’d already go after,” he said.
In those use cases, the data might tend to be more administrative, “but there’s a lot of value on the administrative side when it represents 20% of spending in U.S. healthcare,” he said.
Administrative use cases also often raise fewer sensitive patient data privacy issues. “We tend to say we don’t have the data or the data is not clean. But when you actually step back and say, ‘What am I trying to fix? What are the fields I need?’” a lot of what’s necessary can be addressed much more rapidly, he said.
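As a hypothetical illustration of Sahni’s “What are the 10 fields I need?” framing, a claims-management use case might be scoped to a small administrative schema like the sketch below. The field names are invented for this example and are not drawn from the panel or any real claims standard.

```python
# A hypothetical ten-field schema for a narrow claims-management use case.
# The names are invented for illustration; the point is that clinical notes,
# imaging and most sensitive patient data never enter this pipeline at all.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimRecord:
    claim_id: str
    payer_id: str
    provider_id: str
    procedure_code: str              # e.g., the billing code for the service
    billed_amount: float
    allowed_amount: float
    denial_code: Optional[str]       # populated only for denied claims
    submission_date: str             # ISO 8601 date, e.g. "2024-03-01"
    adjudication_date: Optional[str]
    status: str                      # "paid", "denied" or "pending"

def needs_follow_up(claim: ClaimRecord) -> bool:
    """Flag denied or still-unadjudicated claims for rework."""
    return claim.status == "denied" or claim.adjudication_date is None
```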
On the clinical side, AI and machine learning are already being used in some very specific ways, such as helping to predict and identify maternal health issues, including preeclampsia in pregnant women, said Nawar Shara, director of the health research institute at MedStar, a large healthcare provider in the Washington, D.C., area.
“There is a lot of low-hanging fruit” that medical institutions can tap now by applying AI to make improvements on both the clinical and administrative sides, Shara said.