Artificial Intelligence & Machine Learning, Cybercrime, Election Security
FBI Sees Rising AI-Enabled Fraud; Meta Reports Scant Election Interference Use
Artificial intelligence: What’s it good for? Per the old song about war, the answer isn’t “absolutely nothing,” but so far it also isn’t “absolutely everything.”
New findings pinpoint where generative AI and deepfakes are hot – fraud – and where they’re not – election interference.
That election interference assessment arrives at the close of a year in which over 2 billion people in more than 50 countries voted in major elections, including the United States, India, Indonesia, France, Germany, Mexico, Taiwan and the United Kingdom.
Despite the stakes, Facebook, Instagram and WhatsApp parent company Meta found that less than 1% of misinformation pertaining to elections, politics and social issues that got posted across its sites this year was AI-generated, said Nick Clegg, its president of global affairs, in a Tuesday blog post.
While “the risk of widespread deepfakes and AI-enabled disinformation campaigns” remains real, “these risks did not materialize in a significant way” this year, at least on Meta’s platforms, said Clegg, the former deputy prime minister of Britain.
Last month, the Center for Emerging Technology and Security said in a report that it found "no evidence that AI-enabled disinformation had measurably altered an election result" in any major election.
Researchers found that "deceptive AI-generated content did shape U.S. election discourse by amplifying other forms of disinformation and inflaming political debates." While the effect on voter behavior remains difficult to measure, owing to a paucity of data, the researchers said the real-world impact appeared minimal.
Overstating the threat posed by foreign interference efforts risks playing into adversaries' hands by amplifying their efforts. Analyzing the threat posed by deepfakes to democracy, the first head of Britain's National Cyber Security Centre, Ciaran Martin, said in June that "the reality is that so far the U.K. has suffered very little from successful cyber interference in elections." While Russia did attempt to disrupt the 2014 Scottish referendum, a parliamentary investigation found "that Russian efforts have been mostly risible."
Even so, such efforts continue. As part of its efforts to combat what it calls “coordinated inauthentic behavior,” Meta said that this year alone it took down 20 new covert influence operations. Since 2017, the company said it has disrupted 39 covert influence operations attributed to Russia, 30 tied to Iran and 11 tied to China.
Clegg said the company has seen attempts to run such operations – especially by Russia, which continues to dominate – shift away from Facebook toward “platforms with fewer safeguards than ours,” such as X and Telegram.
Salad Days for AI-Enabled Fraudsters
The picture is different on the fraud front. Need to build a believable-looking cryptocurrency investment site, perhaps with a claimed endorsement from a real celebrity? Need to create a large volume of legitimate-looking social media profiles and activity? Want to trick someone who speaks a foreign language into remotely falling in love with you and remitting lots of money for your supposed medical problems? Need to do all of this at scale, and in less time?
This week the FBI warned that criminals have been doing all that and more by doubling down on their use of gen AI and deepfake tools.
“Criminals use AI-generated text to appear believable to a reader in furtherance of social engineering, spear phishing and financial fraud schemes such as romance, investment and other confidence schemes, or to overcome common indicators of fraud schemes,” it said.
More advanced use cases investigated by law enforcement include criminals using AI-generated audio clips to fool banks into granting them access to accounts, or using “a loved one’s voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom,” the bureau warned.
Key defenses against such attacks, the FBI said, include creating “a secret word or phrase with your family to verify their identity,” which can also work well in business settings – for example, as part of a more robust defense against CEO fraud (see: Top Cyber Extortion Defenses for Battling Virtual Kidnappers).
Many fraudsters attempt to exploit victims before they have time to pause and think. Accordingly, never hesitate to hang up the phone, independently find a phone number for the caller's supposed organization and contact it directly, the bureau said.
The FBI’s warnings are far from academic. At a recent cybersecurity event I attended, security professionals detailed how phishing attacks are growing increasingly difficult for employees to spot and report, apparently thanks to criminal use of gen AI for messaging. “If it’s poorly written, it’s probably actually from HR,” one CISO said.