AI-Led Disinformation Campaigns, Deepfakes Biggest Threats, Experts Warn
Nation-state-led disinformation campaigns aimed at eroding public trust are the biggest threat to the upcoming U.K. election, experts told a parliamentary panel on Monday.
Speaking to the Joint Committee on the National Security Strategy's inquiry into election security, a University of Nottingham academic told the panel that the worst outcome wouldn’t necessarily be hackers altering the election’s outcome. “Rather, it’s the wider impact on the perceived legitimacy of the elections and the trust people have in the outcomes of the election,” said Rory Cormac, professor of international relations.
“Hostile actors want us to turn in on each other because once they undermine this trust, it would be hard for the government to regain it,” Cormac said, adding that Russian, Iranian and Chinese threat actors are likely to be at the forefront of such campaigns.
Researchers from George Washington University in Washington, D.C., have predicted a summer deluge of election disinformation in a year that’s setting records for the number of citizens affected by balloting. The Economist has calculated that elections this year will affect more than 4 billion individuals, while the World Economic Forum found that they will determine leadership in countries producing more than half of the world’s gross domestic product (see: AI Disinformation Likely a Daily Threat This Election Year).
These threat actors are also likely to co-opt developments in artificial intelligence to scale their operations, Cormac added.
Incidents of disinformation created with artificial intelligence have already appeared, including in Slovakia’s September 2023 elections, which were marred by deepfake audio of a conversation, putatively between the head of the country’s main social-liberal political party and a journalist, about vote buying. Authorities in the U.S. state of New Hampshire are investigating AI-generated robocalls mimicking the voice of President Joe Biden that urged voters to stay home during the January primary. A Democratic consultant working for a long-shot challenger to Biden for the nomination later took responsibility for the calls, telling reporters that he did so to draw attention to the threat.
Social media platforms must take measures such as removing fake accounts and fact-checking news on their platforms, said Pamela San Martin, who serves on the Meta Oversight Board. Governments should also communicate directly with their citizens about the dangers of disinformation, she said.
One measure taken by the U.K. government to counter disinformation and threats from AI-enabled deepfakes has been to hold social media platforms more accountable through implementation of the Online Safety Act, which became law in 2023.
The law empowers the Office of Communications, known as Ofcom, to shield young users from pornographic or self-harm content, and it creates the potential of criminal prosecution for those who send harmful or threatening communications.
Jessica Zucker, director of online safety policy at Ofcom, said the regulator should begin enforcing the law before the elections.
Zucker added that in the interim, the regulator is working with the government to develop watermarking for AI-generated content on social media platforms and is leading research into the datasets used to create deepfakes, with the aim of developing a code of practice.
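Neither Zucker nor the government has published technical details of that watermarking work. As a purely illustrative sketch, one family of schemes studied in academic research for watermarking AI-generated text biases a generator’s word choices toward a pseudorandom “green list,” so a detector can re-derive the list and flag text whose green-word rate is implausibly high. Everything below, including the function names and the 50% green fraction, is an assumption for illustration, not a description of Ofcom’s or any platform’s method:

import hashlib

GREEN_FRACTION = 0.5  # assumed: half the vocabulary is "green" for any given context

def is_green(prev_word: str, word: str) -> bool:
    # Hash the (previous word, word) pair and call the word "green" if the
    # hash lands in the green fraction. A watermarking generator would favor
    # such words; a detector simply replays the same deterministic test.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(words: list[str]) -> float:
    # Fraction of words landing on the green list. Unwatermarked text should
    # hover near GREEN_FRACTION; watermarked text should sit well above it.
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

sample = "voters should stay home during the primary".split()
print(f"green-word rate: {green_rate(sample):.2f}")

In practice, detectors apply a statistical test such as a z-score on the green count over much longer samples; the sketch shows only the replayable-hash idea that lets such a watermark be checked without access to the model that generated the text.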