Use cases for artificial intelligence in healthcare will continue to explode in 2026 – including for back-office automation, ambient clinical documentation in exam rooms, claims processing and clinical decision support. So will critical privacy, security, legal and other risk considerations, said attorney Wendell Bartnick of law firm Reed Smith.
“I really think it comes down to governance,” Bartnick said in an in-depth interview with Information Security Media Group about the expanding AI opportunities and associated risks in healthcare.
The types and degrees of risk vary across different AI use cases in healthcare, but all entities considering AI deployments need to do their governance homework, he said.
“Until there’s more regulation, the place I would go is to look at the National Institute of Standards and Technology’s AI Risk Management Framework,” he said. “I think that’s a really good foundation for just understanding these risks.”
In the interview (see audio link below photo), Bartnick also discussed:
- Top security, privacy, regulatory and legal risks and pitfalls involving different use case examples for AI in healthcare;
- Concerns involving agentic AI use, mental health chatbots and patient encounter recordings;
- The dangers of patients using AI to self-diagnose and self-treat.
Bartnick, a partner at law firm Reed Smith, draws on his background in computer science in counseling clients across a variety of industries, including healthcare, life sciences and biotech. His advice commonly involves data rights and privacy, cybersecurity, commercialization, technology licensing – including AI and machine learning – governance, partnering strategies and agreements, and other regulatory compliance counseling, as well as investigations related to technology and data issues.
