Preparing a healthcare workforce to responsibly engage with AI tools - without over-relying on automation or undermining human oversight - will require awareness training akin to phishing exercises, said Skip Sorrels, field CTO and CISO at security firm Claroty.
“There is an essence of tabletop exercise. I can envision a simulated phish, but in the form of AI,” Sorrels said.
For instance, “they’re going through the quiz situation, they’re using AI and it in essence trains them that that is not something that should go into AI, this is an open platform, it’s not secure, it’s not secure inside the walls of the university or the hospital,” he said.
That type of scenario-based training will be most meaningful "because AI means a lot of different things to different people, and so I think it has to be situational-based," he said.
Preparing and training the healthcare workforce to use AI responsibly and securely is just one of several important facets of healthcare AI governance, he said.
In this audio interview with Information Security Media Group (see audio link below photo), Sorrels also discussed:
- Other critical facets of governance of healthcare AI;
- How governance of AI in healthcare must keep up with the increasingly sophisticated use of AI in the hands of threat actors;
- Public- and private-sector collaboration in creating a strong framework for healthcare AI;
- Healthcare AI developments to watch in the year ahead.
Sorrels has more than 25 years of experience as a cybersecurity leader building and scaling robust security programs, particularly in healthcare. He began his career as a nurse in Texas before transitioning into technology, contributing to cybersecurity architecture and solutions for the U.S. Department of Defense at Dell. Before joining Claroty, Sorrels was director of cybersecurity at Ascension Healthcare, where he built and led a medical device security program that evolved into broader OT and XIoT cybersecurity initiatives.
