Artificial Intelligence & Machine Learning, Data Privacy, Data Security
AMA Wants Privacy, Security AI Tool Protections, Especially in Mental Health

Patients throughout the United States are now sharing intimate details of their health and mental health conditions with artificial intelligence chatbots for advice on diagnoses and treatment – even before they see a doctor.
The American Medical Association says using these AI chatbots carries risks – including data privacy and security breaches – and the group is urging Congress to take action to protect patients from potential harm.
Among other requests, the largest U.S. professional association for physicians and medical students wants Congress to require “meaningful limits” on the collection and retention of sensitive information by AI developers; safeguards against unauthorized access and sharing of that information; clear user consent for using the data; and transparent disclosures to users that they’re interacting with AI technology, and not human beings.
The AMA in several letters sent Wednesday to Congressional leaders acknowledged that AI-enabled tools may help expand patient access to healthcare resources – including for mental health – but said the technologies “lack consistent safeguards” against serious risks, including emotional dependency, misinformation, inadequate crisis response, and data privacy and security compromises.
“It is important to recognize the potential value that well-designed, purpose-built AI tools can bring to mental healthcare when deployed responsibly,” the AMA wrote. But, “many individuals interacting with chatbots for mental health support understandably treat these interactions as private, even when the chatbot is not part of a healthcare system,” the AMA said.
“Chatbot conversations can be retained, logged, reviewed or inadvertently revealed, and the sensitivity of what individuals share is often greater than they would disclose in other online settings,” the AMA said.
“This gap between expectations and real-world data privacy is especially concerning for children and teens who may share highly sensitive information without understanding how it could be stored, accessed or disclosed.”
With many chatbots built off complex software and cloud services, “privacy and security can fail even when an AI developer’s own code and policies appear sound. A single weakness in a data center can expose chatbot data and erode confidence,” the AMA said.
Besides privacy and security-related issues, the AMA urged Congress to address a variety of other concerns, including regulatory gaps that potentially endanger patients.
For instance, “clear statutory boundaries should be established that prohibit AI chatbots from engaging in diagnosis or treatment of mental health conditions, such as offering a diagnosis of anxiety or depression or recommending medications,” the AMA said. “Any such action should trigger mandatory review by the Food and Drug Administration as a medical device.”
The AMA’s letters to the co-chairs of the Congressional Artificial Intelligence Caucus, the Congressional Digital Health Caucus and the Senate Artificial Intelligence Caucus also warned that the rapid rise of mental health chatbots – “along with reports of risks such as encouraging self-harm and privacy breaches” – underscores the urgency of clear guardrails to protect patients and public trust.
Dr. John Whyte, CEO of the AMA, told ISMG the organization also has some serious concerns about the use of AI chatbots by patients and consumers for healthcare purposes beyond mental health.
“AI chatbots can be incredibly helpful to patients trying to better understand a diagnosis, a recommended treatment plan or even lab results. They can be very helpful in helping patients prepare for doctor’s appointments and know the right questions to ask,” he said. “However, chatbots cannot exercise actual clinical judgment and should never replace medical decisions made between a patient and their physician.”
Also, many consumer-oriented AI tools have few regulations protecting patient data and are not covered by laws protecting the privacy of personal medical information, such as HIPAA, he said. “As a result, patients should be cautious about sharing personal medical information with AI tools because medical privacy cannot be retrieved once it is lost.”
Not Alone
The AMA isn’t the first medical organization to raise concerns regarding AI chatbots.
Earlier this year, chatbots were identified as the top health technology hazard for 2026 by the patient safety research organization ECRI Institute (see: Chatbots, Outages, Devices Top 2026 Health Tech Hazards).
ECRI researchers said that unlike regulated medical technologies, AI tools – including chatbots broadly accessible on phones and laptops – are not designed or validated for clinical use. Yet many patients are using them to self-diagnose and treat their conditions.
A handful of lawmakers have made moves to address some of the AI-related issues also spotlighted by the AMA. For instance, Sen. Marsha Blackburn, R-Tenn., is sponsoring legislation, the “Trump America AI Act,” that would require AI platforms to implement use restrictions and certain privacy controls for minors.
Some AI chatbot developers are also fine-tuning their general-purpose tools for healthcare-related use. For instance, in January, OpenAI said it’s rolling out a version of ChatGPT dedicated to health that will also “securely” connect users’ medical records and wellness apps to better personalize responses.
OpenAI said ChatGPT Health will operate as a separate space with “enhanced privacy to protect sensitive data” (see: ChatGPT Health: Top Privacy, Security, Governance Concerns).
