The flood of new artificial intelligence tools, including those meant to help cybersecurity teams, can be overwhelming to healthcare CISOs and their security staff, fueling “AI fatigue” that in itself can create additional cyber risk, warned Drew Hendrickson and Jon Hilton, practice leaders at consulting firm LBMC.
“You have a lot of new vendors that have popped up offering AI services, so it can really overwhelm teams. It can overwhelm leadership,” said Hendrickson, leader of LBMC’s cybersecurity practice.
“It’s really important for those leaders to pick their partner. Make sure they trust that partner, especially if they’ve got a long-term working relationship with them to work through what those blind spots might be specific to security, risk and AI,” he said in an interview with Information Security Media Group.
“Leaders are really being inundated with AI,” said Hilton, leader of LBMC’s AI practice. For healthcare CISOs and others, “there’s expectations either from those that you report to, or those that are reporting to you, or just part of your team, to do something about it, while at the same time there’s all kinds of concerns and risks that are out there,” he said.
Sometimes AI tools that aim to improve cybersecurity can also tax security teams, Hendrickson said. “A lot of new products and existing products have built these AI features into their tool sets, and they tend to produce an overwhelming volume of alerts,” he said.
“If it’s not fine-tuned to the organization, that leads to a lot of false positives,” he said. “And when you’re getting a lot of false positives, the human in the loop that’s monitoring those is going to experience AI fatigue and exhaustion, and that’s going to lead to inefficiencies in identifying what are real vulnerabilities or threats to the organization.”
On top of all that, security leaders and their teams are faced with the potential threats AI can pose to their organizations in the hands of adversaries.
A recent report by security firm LevelBlue found that only 29% of healthcare executives say they are prepared for AI-powered threats, despite 41% believing they will happen.
In this audio interview with Information Security Media Group, Hilton and Hendrickson discussed:
- Risks related to AI fatigue in healthcare;
- Identifying AI fatigue in security leadership and their teams;
- Tips to help avoid and overcome AI fatigue.
Hendrickson is a shareholder and the practice leader of LBMC’s cybersecurity practice, with more than 20 years of experience as a security professional. He has worked with the Association of International Certified Professional Accountants to lead and educate other CPA firms on best practices for SOC reporting and has assisted in the development and delivery of training materials for cybersecurity courses.
Hilton is an AI and data analytics executive who helps companies navigate the evolving landscape of artificial intelligence. As shareholder-in-charge of LBMC’s AI practice, he leads strategic initiatives focused on practical AI use cases and AI agents. Hilton is a graduate of the U.S. Military Academy at West Point and served as an officer in the U.S. Army from 2004 to 2009.