As healthcare organizations embrace and innovate with generative artificial intelligence tools, it’s critical that they also take a holistic approach to privacy and security governance, said Dave Perry, digital workspace operations manager at St. Joseph’s Healthcare in Hamilton, Ontario, who discussed how his organization is tackling the challenges.
“We didn’t want to just throw AI out there for everybody without any rails around it. So right away when GPT was released, the first thing I thought of was governance,” he said.
Perry and his team at St. Joseph’s Healthcare, which includes a 700-bed hospital, working in collaboration with engineering professors and students at the affiliated McMaster University, developed a “living” gen AI protocol to help standardize governance, including privacy and security.
That included using the gen AI platform Prompt Security to provide the governance “granularity that administrators and cybersecurity teams are looking for,” he said.
It also addresses critical questions such as, “Where do people fit in? What is acceptable use? What is transparency?” Perry said. “We now have a founding document that we can create policies around that we can apply to our approach to gen AI.”
In the interview (see audio link above), Perry also discussed:
- Top privacy and security-related governance issues involving gen AI in healthcare;
- Suggestions for other healthcare entities that are grappling with how to securely embrace the use of gen AI at their organizations;
- The most promising potential uses of gen AI in healthcare.
Perry has 26 years of experience in a variety of roles spanning system administration, cybersecurity and IT management. Leading the AI initiative at St. Joseph’s Healthcare, he has focused heavily on AI governance within the enterprise to ensure complete, safe and ethical use of AI in a healthcare environment.