AI-Based Attacks, Fraud Management & Cybercrime
Abuse Can Lead to Fraud, Impersonation Scams

Need a new voice? Artificial intelligence has you covered. Need to protect your own? That’s another story. Some of the most widely used AI voice synthesis tools offer only superficial safeguards against misuse, if any at all, researchers found in a recent analysis.
Popular AI-powered voice cloning tools lack sufficient safeguards to prevent misuse, a Consumer Reports study found, raising concerns over the potential for fraud and impersonation scams. The study assessed voice cloning products from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI and Speechify, and found that four out of the six providers offer minimal or no meaningful security measures against abuse.
ElevenLabs, Speechify, PlayHT and Lovo require only basic self-attestation, such as a user checking a box to confirm they have the legal right to clone a voice. Descript and Resemble AI, by comparison, have implemented more robust mechanisms to deter misuse.
Some of these companies require only an email address and name for account creation, making it easy for malicious actors to access the software. Without rigorous identity verification or explicit consent mechanisms, bad actors could exploit these tools for deceptive practices, including financial fraud and impersonation.
AI-driven voice cloning has advanced to the point where it’s easy to generate highly realistic imitations of real people. The technology has legitimate applications, such as narrating text, assisting individuals with speech impairments and automating customer support, but it also presents serious risks (see: Cloned Voice Tech Is Coming for Bank Accounts).
Consumer Reports policy analyst Grace Gedye said that the absence of safeguards could “supercharge” impersonation scams. “Our assessment shows that there are basic steps companies can take to make it harder to clone someone’s voice without their knowledge – but some companies aren’t taking them,” she said.
The Federal Trade Commission reported more than 850,000 impostor scams in 2023, resulting in $2.7 billion in losses. Although it is unclear how many of these involved AI voice cloning, high-profile cases of fraudulent audio deepfakes have already emerged (see: Attack of the Clones: Feds Seek Voice-Faking Defenses).
Some companies explicitly market their software for deceptive purposes. Consumer Reports found that PlayHT promotes “pranks” as a legitimate use case for its AI voice tools. Speechify similarly suggests prank phone calls as an application.
Some major AI firms take a more cautious approach. Microsoft has opted not to publicly release its VALL-E 2 voice synthesis project due to concerns about impersonation risks. OpenAI has restricted access to its Voice Engine, citing similar threats.
The FTC finalized a rule last year prohibiting AI-generated impersonations of government entities and businesses, but a broader ban on individual impersonation is still at the proposal stage. Gedye suggested that state-level regulatory action may be more likely than federal intervention, given ongoing efforts to weaken federal consumer protection agencies.