Netskope Says Firms Are Using Blocking Controls, DLP But Still Face Security Gaps
It’s been nearly 18 months since ChatGPT paved the way for rapid generative AI adoption, but enterprises are only just beginning to implement basic cybersecurity strategies, using blocking controls, data loss prevention (DLP) tools and live coaching to mitigate gen AI risks, according to security firm Netskope.
Organizations are implementing security measures to mitigate risks and prevent users from sharing sensitive data such as personally identifiable information, credentials and trade secrets with AI applications, Netskope said after analyzing anonymized AI app usage data.
Security mechanisms are needed, Netskope said, because more than one-third of the sensitive data shared with generative AI applications is regulated, meaning that organizations are legally required to protect it.
“Enterprises must recognize that gen AI outputs can inadvertently expose sensitive information, propagate misinformation or even introduce malicious content. It demands a robust risk management approach to safeguard data, reputation and business continuity,” said Netskope CISO James Robinson.
About 3 in 4 organizations use block/allow policies to limit the use of at least one, and sometimes several, generative AI apps by employees – a 53% increase over last year.
The 13-month study, which concluded June 30, did not specify the number of organizations surveyed, but the company said it included organizations in the financial services, healthcare, manufacturing, telecom and retail industries with more than 1,000 active users.
Half of the organizations surveyed blocked more than two generative AI apps due to security concerns, and some restricted up to 15. The most frequently blocked apps included Beautiful.ai, Writesonic, Craiyon and Tactiq.
Organizations also increased their use of DLP tools to secure AI, a shift Netskope attributes to maturing enterprise security strategies. This year, 42% of organizations used these tools to manage what users can input into generative AI tools, roughly double the share in June 2023.
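As a rough illustration of what such an input-side control does (a generic sketch, not Netskope's implementation; the patterns and function names here are assumptions), a pre-send check can scan a prompt for regulated data before it reaches a generative AI app:

```python
import re

# Illustrative patterns for regulated data; real DLP engines use far richer
# detection (classifiers, exact-match dictionaries, document fingerprinting).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block the request if regulated data is detected, otherwise allow it."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    print(allow_prompt("Summarize this report for me"))           # True
    print(allow_prompt("My SSN is 123-45-6789, draft a letter"))  # False
```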
Live coaching controls that warn users in real time about potentially risky interactions with AI apps are used by 31% of organizations, compared to 20% in June 2023.
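Live coaching differs from hard blocking in that the user is warned and asked to confirm rather than stopped outright. A minimal sketch of that interaction pattern, with an illustrative single check standing in for a real detection engine:

```python
import re

# One illustrative pattern; a real coaching control would share the
# detection engine used by the DLP policy.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def coach_user(prompt: str) -> bool:
    """Warn the user in real time about risky content and let them decide.

    Returns True if the prompt looks clean or the user chooses to proceed.
    """
    if not SSN_PATTERN.search(prompt):
        return True
    print("Warning: this prompt appears to contain regulated personal data. "
          "Company policy discourages sharing it with external AI apps.")
    answer = input("Send anyway? [y/N] ").strip().lower()
    return answer == "y"
```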
More effort has been spent controlling the data sent to generative AI services than managing the risks associated with the responses from these services. While most organizations do have acceptable use policies on how to handle the data generated, few have built-in mechanisms for dealing with factually wrong or biased data, manipulated results, copyright infringement and fabricated responses.
Organizations can mitigate these risks through vendor contracts, indemnity clauses for custom apps and by using only corporate-approved apps with high-quality datasets. Logging and auditing all returned datasets – including timestamps, user prompts and results – also can help.
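A minimal sketch of the kind of logging the report points to, recording each exchange with a gen AI app so it can be audited later (the file path and record schema here are assumptions for illustration; a production deployment would feed a SIEM or similar system):

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "genai_audit.jsonl"  # assumed location for this sketch

def log_exchange(user: str, app: str, prompt: str, response: str) -> None:
    """Append a timestamped record of a prompt/response pair for later auditing."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "app": app,
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: log a single interaction with an approved app.
log_exchange("jdoe", "ChatGPT", "Summarize Q2 sales trends", "Q2 sales rose ...")
```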
The growing focus on AI app security comes alongside enterprises’ rapid adoption of AI tools. Netskope’s survey says that 96% of the people in its customer companies use generative AI apps to assist with coding, writing, creating presentations and generating images and videos, up from 74% in June last year.
On average, organizations use three times as many generative AI apps as they did a year ago, and nearly three times as many users engage with these tools. The top 1% of adopters average 80 generative AI apps in their environments.
ChatGPT remains the most-used app, followed by Grammarly, Microsoft Copilot, Google Gemini and Perplexity AI – although Perplexity AI is also among the most frequently blocked apps.
AI technology has garnered considerable investment in line with its popularity: companies in the space received $28 billion in equity funding from 2020 through March 2024.