Google Is Getting Accolades for Veo 3, But the AI Video Tool Has a Darker Side

“Insane. Big trouble for deepfakes. Disinformation will reach another level.”
This was the reaction of fraud fighters after the launch of Google’s Veo 3 on May 20.
Meanwhile, AI enthusiasts are saying Veo 3 is one of Google’s best products, and it does exactly what it promises. The mind-blowing AI constructs cinematic video clips from text prompts, and the results look and feel entirely real. But – of course there is a ‘but’ with any great breakthrough – there is a darker side. Veo 3 pushes deepfake capabilities into uncharted territory. Its creative power introduces new threats to truth, trust and authenticity.
Here’s an example of what Veo 3 can generate with only moderate prompting: convincing interview clips from a fictitious car show.
Frank McKenna, chief fraud strategist with Point Predictive, posted a video on his blog promoting a fictitious fraud conference called Fraud Fighters Unite. The video shows “experts” talking about the top fraud trends, including check fraud, online fraud and auto loan scams. The conference promises to bring together more than 2,000 fraud fighters under a single roof. The video ends with a special message from a hacker – all thanks to AI.
It would be unfair to place all the blame on Veo 3. Long before Veo 3 became a reality, fraudsters were using deepfake videos to fool victims. For instance, in early 2024, fraudsters tricked a finance employee at British engineering firm Arup into transferring $25 million into their accounts. The scammers used a deepfake video call featuring AI-generated avatars of the company’s CFO and other executives, making the interaction appear real.
Clearly, advancements in this technology pose massive risks to businesses and consumers alike. Sadly, it is getting harder to distinguish a fake video from a real one. Early on, one could spot signs of AI-generated videos such as misshapen hands, glitchy faces and odd phrasing. But today’s models are so polished that even experts struggle to tell the difference.
Detection tools exist, but they are far from perfect. OpenAI took down its own AI text classifier because it couldn’t reliably distinguish AI-generated from human-written text. Google’s SynthID watermarking technology is a promising step, but one study found that researchers were able to steal the pattern behind a digital watermarking scheme similar in concept to SynthID.
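To build intuition for why that watermark-stealing research matters, here is a minimal, hypothetical sketch of statistical watermarking in Python. This is not SynthID or any Google code; the keyed-pattern approach, the function names and the parameters are illustrative assumptions. It shows the core idea: a hidden, low-amplitude signal that a detector looks for, which also means anyone who recovers the secret pattern can remove or forge the mark.

```python
# Conceptual sketch of spread-spectrum watermarking -- NOT SynthID.
# A keyed pseudorandom pattern is added to a frame at low amplitude;
# detection correlates the frame against that same keyed pattern.
import numpy as np


def watermark_pattern(shape: tuple, key: int) -> np.ndarray:
    """Keyed +/-1 pattern; anyone holding the key can regenerate it."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)


def embed(frame: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add the low-amplitude pattern to a grayscale frame (0-255 values)."""
    pattern = watermark_pattern(frame.shape, key)
    return np.clip(frame + strength * pattern, 0, 255)


def detect(frame: np.ndarray, key: int) -> float:
    """Correlate the frame with the keyed pattern; a high score means 'watermarked'."""
    pattern = watermark_pattern(frame.shape, key)
    centered = frame - frame.mean()
    return float((centered * pattern).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(0, 255, size=(256, 256))   # stand-in for a video frame
    marked = embed(clean, key=1234)
    print(f"score on clean frame:  {detect(clean, key=1234):+.3f}")   # near zero
    print(f"score on marked frame: {detect(marked, key=1234):+.3f}")  # near the embed strength
    # The gap between these two scores is the entire detection signal --
    # and it is exactly what an attacker who recovers the pattern can erase or fake.
```

Production watermarks are embedded far more robustly, surviving compression, cropping and re-encoding, but the dependence on a secret signal is the same, which is why research into stealing watermark patterns is so concerning.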
This means we need better solutions – and fast.
While no one wants to put the brakes on innovation, the rise of AI-generated video coincides with a peak in global scam activity. A report by the Global Anti-Scam Alliance, produced in collaboration with Feedzai, says scammers siphoned off more than $1.03 trillion globally in the past year alone. The financial toll is staggering, with the U.S., Denmark and Switzerland reporting the highest losses per victim; Americans lost $3,520 on average. These AI-generated video capabilities will likely fuel a surge in romance and investment scams.
Ironically, innovation in the wrong hands and for the wrong purposes may slow down digital transformation and push us back to the good old days of analog, in-person, real-world interactions. Not that I am a proponent of that, but it is getting harder every day to trust anything digital. At what point will we completely lose trust in what we see and hear?
Governments and regulatory agencies are racing to regulate AI-generated content amid rising concerns over misinformation. The United Nations has identified AI disinformation as a global security threat and is calling for ethical guidelines and an international AI watchdog group. The European Union’s AI Act mandates watermarking of synthetic content and stricter controls for high-risk uses. China requires labeling and approval for deepfakes, and the United Kingdom has launched an AI Safety Institute and is considering watermarking requirements for political ads.
While regulations have their limitations, governments play a crucial role in ensuring their enforcement. Critics argue that governments may act with bias or hide inconvenient truths, but who else is positioned to oversee AI’s responsible use?
Current AI detection tools help, but using AI to detect generative AI deepfakes is like deploying antivirus software that only recognizes known threats.
By design, it will fail against the unknown. And that means “big trouble.”