Easy-to-Use Deepfake Services for Criminals Rapidly Improving, Researchers Warn

Financial firms’ fraud and risk teams must bolster know-your-customer safeguards in the face of increasingly effective deepfake technology, new research warns.
Rapid improvements in deepfake generation and other tools available to fraudsters are already challenging the integrity of a variety of remote identity checks, including biometric “liveness” tests, and the problem is set to get worse, says a report tracking cybercrime trends and strategies for countering them, published by the World Economic Forum’s Cybercrime Atlas project.
Especially when layered together, tools and techniques for defeating identity checks already pose “financial, operational and systemic risks for any institution that relies on digital trust,” including to comply with know-your-customer requirements, the report says.
Fraudsters’ interest in the latest artificial intelligence-enabled tools and their capabilities remains strong. Following the release of ChatGPT in November 2022, cybersecurity firm Group-IB tracked a surge in discussions across darknet forums and Telegram channels pertaining to AI in 2023, and it hasn’t let up.
For cybercriminals, what’s now available on underground markets includes a variety of ready-to-use synthetic identity kits, with an individual fake identity costing $5 to $15, while deepfake-as-a-service subscriptions, providing ready-to-use images or video, can be had for $10 to $50 per month from such service providers as Darkpaint, Shawtyclub and Rysuca, Group-IB said. Deepfake tools that promise “turnkey face-swapping” retail for anywhere from $1,000 to $10,000 under such names as Haotian AI and ChenxinAI. Voice impersonation tools retail for less, costing anywhere from $1,000 to $3,000, under such names as BoltFox, Gorilla p1 bot, Google Voice P1 and Stunna.
Such services lower the barrier to entry for AI-curious miscreants. Creating a convincing deepfake of a real person is easier than ever, thanks not just to increasingly capable tools but an abundance of digital source material.
“Attackers harvest samples from social media, webinars or even past phone calls,” and “with as little as 10 seconds of audio, fraudsters can now create a convincing clone of a colleague, superior or family member,” Group-IB said.
What’s now commercially available to criminals – in the form of legitimate tools or tools developed for illicit use – isn’t perfect, but it’s improving rapidly, said World Economic Forum report author Natalia Umansky.
“Our analysis of 17 face-swapping tools and camera injection techniques reveals a clear shift: while many tools remain imperfect, some already enable real-time, high-fidelity impersonation capable of defeating digital KYC. Threat actors are increasingly combining AI-generated or stolen identity documents, high-quality face swaps and camera injection to bypass live verification,” she said.
Of the 17 face-swapping tools reviewed, 11 are marketed for creative, entertainment or social media purposes and two for security testing, including use by red teams.
The report warns that face-swapping tools can be used to create entirely fictional personas, or so-called synthetic identities, for opening accounts and committing fraud. Such tools can also be used to impersonate real individuals. In many cases, users can realistically turn their head, smile and blink while digitally wearing the face of another person, whether real or tool-generated. Some tools are expressly sold on cybercrime sites with a promise that they’re able to reliably defeat KYC checks.
The report predicts that in the next 12 to 15 months, criminal access to more advanced face-swapping tools will increase, as will illicit targeting of financial services firms and cryptocurrency platforms, as well as attempts to bypass KYC in sectors that have more recently adopted it.
Many of those newer KYC adopters hail from the gambling and telecommunications sectors, although any industry that relies on digital identity could be targeted (see: Proof of Concept: Bot or Buyer? Identity Crisis in Retail).
The WEF report also reviewed eight camera injection tools, ranging from free software to applications costing anywhere from $10 to $3,000, that are also used to bypass digital identity checks. Three tools worked only with pre-prepared media, while another three supported both pre-prepared media and livestreaming, albeit not always with perfect results.
"Overall, camera injection capabilities were found to be technically diverse but constrained by latency, content format requirements and detectable device artifacts," the report says. Many tools, for example, are visible as running processes in the operating system's task manager, where unexpected drivers and processes can be spotted.
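That process-visibility artifact lends itself to a simple defensive check. The following is a minimal sketch, not drawn from the report: it flags running processes whose names match a watchlist of common virtual-camera software. The watchlist entries are illustrative examples, not an exhaustive or authoritative list.

```python
# Illustrative, non-exhaustive watchlist of virtual-camera software
# process names (a real deployment would maintain and update its own).
VIRTUAL_CAMERA_WATCHLIST = {
    "obs64.exe", "obs-virtualcam.exe", "manycam.exe",
    "splitcam.exe", "xsplit.broadcaster.exe",
}

def flag_suspicious_processes(process_names):
    """Return process names that match the virtual-camera watchlist,
    compared case-insensitively, in sorted order."""
    return sorted(
        name for name in process_names
        if name.lower() in VIRTUAL_CAMERA_WATCHLIST
    )

if __name__ == "__main__":
    # In practice the list would come from an OS query (e.g. iterating
    # processes via a library like psutil); a static sample keeps this
    # sketch self-contained and runnable.
    sample = ["chrome.exe", "ManyCam.exe", "explorer.exe", "SplitCam.exe"]
    print(flag_suspicious_processes(sample))  # ['ManyCam.exe', 'SplitCam.exe']
```

Name-matching alone is easily evaded by renaming a binary, which is why the report also points to driver-level artifacts and SDK integrity checks as complementary signals.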
At least for now, most camera-injection tools “seem limited in their ability to reliably defeat modern KYC systems that use dynamic prompts and software development kit (SDK)-level integrity checks,” the report says.
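The "dynamic prompts" the report credits with blunting camera injection work because a pre-prepared clip cannot anticipate a randomized challenge. A minimal sketch of the idea, with hypothetical prompt wording not taken from any specific KYC vendor:

```python
import secrets

# Hypothetical liveness prompts; real KYC SDKs define their own actions.
PROMPTS = ["turn head left", "turn head right", "smile", "blink twice", "look up"]

def issue_challenge(num_prompts=3):
    """Pick a random, non-repeating sequence of liveness prompts for one
    session, so pre-recorded injected footage cannot match the order."""
    # SystemRandom draws from the OS entropy source, making the
    # sequence unpredictable to an attacker.
    return secrets.SystemRandom().sample(PROMPTS, num_prompts)
```

A session would then verify, frame by frame, that the user performs each prompted action in order and within a time window, which is where the SDK-level integrity checks the report mentions come in.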
Experts expect that will change as criminals develop stealthier virtual camera drivers, alongside software offering improved real-time streaming with lower latency and fewer discernible errors, especially when paired with high-end graphics cards.
While detecting and flagging such fakery for fraud review teams will fall in part to financial firms’ KYC software vendors, the WEF also recommends updating governance processes to account for the fast-evolving technology, regularly testing technical and human-led defenses through red-teaming, and copious public-private intelligence sharing to track the latest innovations.
