OpenAI: Tool Will ‘Securely’ Connect With Medical Records, But How Will That Work?

OpenAI claims that each week, more than 230 million people worldwide ask its large language model ChatGPT health and wellness-related questions, sometimes uploading their own medical information. Now the company says it is rolling out a new iteration of ChatGPT dedicated to health that will also “securely” connect to users’ medical records and wellness apps to better personalize responses.
OpenAI said ChatGPT Health will operate as a separate space with “enhanced privacy to protect sensitive data.”
“Conversations in Health are not used to train our foundation models,” the company said. “If you start a health-related conversation in ChatGPT, we’ll suggest moving into Health for these additional protections.”
Currently, conversations and files across ChatGPT are encrypted by default at rest and in transit as part of its “core” security architecture, OpenAI said. But, “due to the sensitive nature of health data, Health builds on this foundation with additional, layered protections – including purpose-built encryption and isolation – to keep health conversations protected and compartmentalized.”
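OpenAI has not published technical details of that layered design. One common way to build this kind of compartmentalization is envelope encryption, in which each record is encrypted under its own data key and the data keys are in turn wrapped by a master key dedicated to the compartment. The Python sketch below, using the cryptography library, is purely illustrative of that general pattern, not a description of OpenAI’s architecture; all names and functions are hypothetical.

```python
# Illustrative envelope-encryption sketch of "purpose-built encryption and
# isolation" -- a generic pattern, NOT OpenAI's actual design.
from cryptography.fernet import Fernet

# Hypothetical master key dedicated to the health compartment; in practice
# this would live in a hardware-backed key management service.
health_master_key = Fernet(Fernet.generate_key())

def encrypt_health_message(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt one conversation message under its own unique data key."""
    data_key = Fernet.generate_key()                   # per-message key
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = health_master_key.encrypt(data_key)  # envelope step
    return wrapped_key, ciphertext

def decrypt_health_message(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Unwrap the data key with the compartment's master key, then decrypt."""
    data_key = health_master_key.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, ct = encrypt_health_message(b"resting heart rate: 62 bpm")
assert decrypt_health_message(wrapped, ct) == b"resting heart rate: 62 bpm"
```

Because every record in the sketch depends on a compartment-specific master key, that key can be managed, rotated or revoked independently of the keys protecting ordinary ChatGPT conversations – one plausible reading of “protected and compartmentalized.”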
Users can also bolster their access controls by choosing to enable multifactor authentication, the company said.
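OpenAI has not said which second factors Health will support. For context, a typical authenticator-app factor uses time-based one-time passwords, as in this generic sketch built on the pyotp library; none of it reflects OpenAI’s implementation.

```python
# Generic TOTP (authenticator-app) check -- illustrative only, not tied to
# any OpenAI implementation detail.
import pyotp

secret = pyotp.random_base32()  # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

submitted_code = totp.now()     # simulate the 6-digit code the app displays
print("MFA passed:", totp.verify(submitted_code))  # True within the time window
```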
OpenAI has opened a waitlist for ChatGPT Health, but so far the company has not announced a general rollout date.
OpenAI did not immediately respond to questions about how the company plans to secure data involving third-party medical records and wellness apps that OpenAI said users will be able to link with ChatGPT.
Some privacy and security experts are wondering similar things. Skip Sorrels, field CISO and chief technology officer at security firm Claroty, said third-party interactions involving ChatGPT Health – as well as similar types of artificial intelligence products – present an assortment of risks.
“Features like custom actions or web searching may result in personal information being sent to the third-party applications. Once shared, the data is governed by the third party’s own privacy policies rather than OpenAI’s,” said Sorrels, former director of cybersecurity at Ascension Health.
Patients should be deliberate about what they connect and share, use strong account protections and understand the permissions they grant to ChatGPT Health, warned Eran Barak, CEO of data loss prevention firm Mind.
Healthcare providers should establish clear policies, limit unnecessary data exposure and ensure they have visibility into how AI tools like these are being used “so innovation doesn’t outpace security,” he said.
“Ultimately, it’s the patient’s choice to share their information, but ChatGPT Health is responsible for protecting PHI by design, ensuring sensitive health data is minimized, isolated and never used beyond its intended purpose, especially with third-party vendors,” Barak said.
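OpenAI has likewise not described how record linking will work. In the U.S., consumer-facing health-record connections commonly use the SMART on FHIR standard, in which the user consents to narrowly scoped, read-only OAuth tokens – the kind of explicit permission grant Sorrels and Barak describe. The Python sketch below shows what such a scoped request could look like; the endpoint, token and identifiers are hypothetical.

```python
# Hypothetical SMART on FHIR read: a scoped, read-only pull of vital signs.
# Illustrates the permission model experts describe, not OpenAI's integration.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical EHR endpoint
ACCESS_TOKEN = "user-granted-oauth-token"   # issued after explicit user consent

# A consent scope such as "patient/Observation.read" confines the app to
# reading one resource type for one patient -- nothing more.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "example-patient-id", "category": "vital-signs"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for entry in resp.json().get("entry", []):
    print(entry["resource"].get("code", {}).get("text"))
```

Under that model, what a connected app can see is fixed at consent time; requests beyond the granted scopes are rejected by the record holder’s server.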
Attorney Andrew Crawford, senior policy counsel at the Center for Democracy & Technology, said he has many questions about how OpenAI will collect, use and store people’s health data.
“For example, while OpenAI says that it won’t use information shared with ChatGPT Health in other chats, it has also indicated it is exploring advertising as a business model,” he said. “I want more clarity and details assuring users that their health data, or insights learned from it, will not be used to profile them for advertisers.”
Another concern is potential government access to health data, Crawford said. “If law enforcement requests OpenAI turn over a user’s reproductive health data for an investigation, will OpenAI comply? Will they notify the user that their data has been turned over to law enforcement?” he said.
Dr. ChatGPT?
Long before ChatGPT came along, “people were entering sensitive healthcare information in Google searches and then getting retargeted ads based on their search history,” said Van Steele, who leads the cybersecurity consulting service practice of consulting firm LBMC.
“From that lens, this shift is arguably more secure than those use cases, so this is a good thing overall,” he said. “Where my perspective sharpens is around the broader AI ecosystem. I tend to fall back to the old reliable principle: Nothing is free.”
Sorrels said he is also concerned about the quality of the information tools like ChatGPT Health generate, especially on medical questions. “When an AI device moves toward specifying a disease or medical diagnosis – that should be left to the healthcare practitioners who have trained extensively to perform that role.”
Hallucinations are a universal concern across all generative AI interactions, not something unique to health-focused use cases, Barak said.
“That said, with the right guardrails and content controls, tools like this could expand access to generally accurate, personalized health information and may be safer than patients relying on unvetted internet searches to answer specific healthcare questions.”
OpenAI’s announcement about ChatGPT Health came on the same day the U.S. Food and Drug Administration issued guidance indicating that the agency plans to take a hands-off regulatory approach to certain AI-enabled devices and clinical decision support software (see: FDA Takes Hands-Off Approach to AI Devices, Software).
“The FDA is attempting to draw a clean line between systems that simply collect and output data and those that refer to a specific disease or medical condition,” Crawford said. “I’d argue that line is anything but clear, especially when it comes to large language models and health.”
“ChatGPT Health seems to be designed to work specifically in the gray space between simple data collection and software that is intended to provide decision support for the diagnosis, treatment, prevention, cure or mitigation of diseases or other conditions,” he said.
People can use ChatGPT Health to combine and analyze health information from many different sources and yield actionable health insights, he said, which carries both potential benefits and risks.
“While ChatGPT says ‘Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment,’ I can’t imagine every ChatGPT Health user will talk with their doctor before they act upon some of the model’s suggestions – particularly since the company recently emphasized how many people live in hospital deserts and may lack ready access to care.”
