Gartner’s Apeksha Kaushik on Why Detection Alone Can’t Stop ID Impersonation
Organizations facing deepfake-driven impersonation attacks must move beyond traditional detection strategies and build stronger identity resilience. Security leaders need layered defenses that combine detection, prevention and broader risk signals to disrupt attackers who exploit biometric and identity verification systems, said Gartner’s Apeksha Kaushik.
“They not only need to have a tool which can detect if the voice is fake or the image is fake, but they also need to have other risk signals along with it – the mindset of actually deterring, deceiving and disrupting the threat actor rather than just detection and response,” said Kaushik, principal analyst at Gartner.
At the same time, generative artificial intelligence has expanded the identity threat landscape. Deepfake technology enables attackers to clone voices, generate synthetic faces and manipulate video streams, allowing fraud attempts across customer onboarding, contact centers and account recovery workflows.
In this video interview with Information Security Media Group, Kaushik also discussed:
- Why organizations must shift from detection-focused defenses to identity resilience strategies;
- How generative AI tools are expanding identity impersonation attacks across voice, video and biometric systems;
- Why organizations need cross-functional “trust operations” to manage deepfake risks.
Kaushik is a principal analyst in the ETT cybersecurity group at Gartner, focused on supporting CXOs and product leaders at technology and service providers globally. She helps clients become security conscious and identify emerging technologies and trends affecting security, along with market dynamics, opportunities and buying preferences.

