In Today’s Reality, Zero Trust Principles Matter and Verification Is an Imperative

Earlier this month, a California judge made history by throwing out an $8.7 million lawsuit after discovering something that had never before appeared in her courtroom: deepfake testimony. The video looked convincing at first glance – a witness speaking directly to the camera, providing crucial evidence.
But in the nick of time, the judge noticed that the lips moved out of sync with the voice and the eyes never blinked. What followed was even more alarming: nine pieces of fabricated evidence, including doctored photos and fake text messages, all created using readily available AI tools. It wasn’t the work of sophisticated cybercriminals. The materials were created by the plaintiff, who was representing himself. He had no technical expertise and used the same consumer-grade AI tools available to anyone.
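Those two tells, out-of-sync lips and eyes that never blink, are measurable. As a rough sketch (the eye-aspect-ratio formula comes from Soukupová and Čech’s 2016 blink-detection paper; the threshold, frame counts and landmark source are assumptions, and real detectors combine many such signals), blink frequency can be estimated from per-frame eye landmarks:

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six eye landmarks, ordered
    corner, upper lid, upper lid, corner, lower lid, lower lid.
    EAR drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = 2.0 * dist(eye[0], eye[3])
    return vertical / horizontal

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count blinks as runs of consecutive frames with EAR below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Humans blink roughly every 2-10 seconds; a long clip in which
# count_blinks() returns zero is a signal worth escalating.
```

The landmark coordinates themselves would come from any face-tracking library; the point is that the anomaly the judge spotted by eye can also be checked by machine.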
The judge issued a warning, saying generative AI should be used only with great caution in court. But what is a new lesson for the legal community is already an everyday reality in business, where the same need sits at the heart of every transaction: trust, verification and authentic communication.
Recent advances in deepfake video and audio creation tools have many fearing an impending explosion of deepfake fraud. Google in May launched Veo 3, a cinema-quality text-to-video generator, and this week OpenAI unveiled invitation-only access to Sora 2, a video generation model described as “more physically accurate, realistic and more controllable than prior systems,” featuring synchronized dialogue and sound effects with a “high degree of realism.”
“Video models are getting very good, very quickly,” OpenAI said in its announcement. “General-purpose world simulators and robotic agents will fundamentally reshape society and accelerate the arc of human progress.”
But AI tools are already notoriously good at fooling victims. For example, Harvard researchers demonstrated that six major AI chatbots could be coaxed into writing convincing phishing emails that fooled senior citizens in controlled tests. In scam centers in Myanmar, criminals rely on AI tools to generate content for romance scams, enabling one fraudster to trick dozens of victims with fake personas, love messages and poems.
Now the same technology that promises to revolutionize businesses has given bad actors sophisticated capabilities to defraud them. For businesses built on verified identities, authenticated documents and trusted communications, it’s nothing less than a crisis.
Wire transfer protocols assume that voice verification works. Hiring decisions are made using video interviews. Customer service relies on security questions and call-back numbers. Every one of these practices is now outdated.
The reality is that we’ve spent decades building verification systems designed to catch human fraudsters making human mistakes. We trained employees to spot spelling errors in phishing emails, to notice awkward phrasing, to be suspicious of unusual requests. Those training programs are now useless. AI-generated fraud bypasses all the red flags employees have been taught to recognize.
While CISOs and fraud practitioners typically rely on technology to solve new problems, detection tools will always lag behind AI capabilities. The principles of zero trust – never trust, always verify – have become even more important to business operations.
This means a typical wire transfer protocol can’t rely on voice confirmation from a number provided by the requester. Banks and other financial services firms need a separate channel for verification. Yes, it will frustrate users, and it might be slower, but we need to do it or be willing to accept fraud losses as a cost of doing business. Even the hiring process can no longer rely solely on visual or audio confirmation.
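As a minimal sketch of what that separate channel can look like in code (the directory, names and stub confirmation function here are illustrative assumptions, not a prescribed design), the key move is that the callback number comes from the bank’s own records, and the number inside the request is deliberately ignored:

```python
from dataclasses import dataclass

@dataclass
class WireRequest:
    account_id: str
    amount: float
    callback_number: str  # supplied by the requester, deliberately never used

# Hypothetical directory of contact details the bank already holds on file.
CONTACTS_ON_FILE = {
    "acct-1001": "+1-555-0100",
}

def verify_out_of_band(request: WireRequest, confirm) -> bool:
    """Approve a wire only after confirmation over a channel the bank controls.

    Everything inside the request, including any phone number or voice
    sample, is attacker-controlled, so the number on file is the only
    one ever dialed.
    """
    number_on_file = CONTACTS_ON_FILE.get(request.account_id)
    if number_on_file is None:
        return False  # no independent channel means no transfer
    # `confirm` stands in for the real out-of-band step: a call placed by
    # the bank to the number on file, or a push to a registered device.
    return confirm(number_on_file, request.amount)

# The requester-supplied number plays no role in the decision.
request = WireRequest("acct-1001", 250_000.00, callback_number="+1-555-9999")
approved = verify_out_of_band(request, confirm=lambda number, amount: False)  # stub channel
```

The design choice is simple: any data that arrives with the request is treated as attacker-controlled, so verification must travel over a path the attacker never touched.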
In other words, we must sacrifice convenience and speed for the integrity of verification. Businesses need to stop celebrating the “verify fast” culture, which is incompatible with today’s fraud environment.
The first step is understanding your weaknesses. Conduct vulnerability audits and map every process that relies on identity and document verification. Implement emergency protocols for high-risk processes: Wire transfers, payment authorizations, credential changes and legal evidence handling should be subject to immediate, out-of-band verification requirements. Employee education must evolve to ensure employees follow verification protocols even when everything looks legitimate.
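To make that mapping concrete, here is a minimal sketch of a fail-closed policy table (the process and check names are illustrative assumptions, not a standard) in which a request passes only when every required out-of-band check has been completed:

```python
# Hypothetical policy table: each high-risk process lists the checks
# that must all succeed before the action is approved.
VERIFICATION_POLICY = {
    "wire_transfer":     {"out_of_band_callback", "dual_approval"},
    "payment_auth":      {"out_of_band_callback"},
    "credential_change": {"registered_device_push", "manager_signoff"},
    "evidence_handling": {"chain_of_custody_check", "manager_signoff"},
}

def is_approved(process: str, completed_checks: set) -> bool:
    """Pass only if every required check was completed.

    Unknown processes fail closed instead of defaulting to trust."""
    required = VERIFICATION_POLICY.get(process)
    if required is None:
        return False  # unmapped process: fail closed
    return required <= completed_checks  # subset test: all required checks done

# A convincing voice or video appears on no required list,
# so it can never satisfy the policy by itself.
print(is_approved("wire_transfer", {"out_of_band_callback"}))                    # False
print(is_approved("wire_transfer", {"out_of_band_callback", "dual_approval"}))  # True
```

Note what is absent from every list: looking legitimate. A flawless video call or a familiar voice satisfies none of the required checks, which is exactly the point.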
The deepfakes used in that California courtroom were clumsy. But soon forgeries will be indistinguishable from the real thing. We need to act now – while we still have time.