Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
AI Could Undermine Trust in Democracy, Starting With This Very Statement
Artificial intelligence holds the potential to undermine trust in democratic politics – but overwrought warnings themselves can erode trust in the system that critics seek to preserve, warns a cybersecurity firm.
The political underworld has already disgorged deepfake audio and video clips, including one of a Chicago mayoral candidate supposedly condoning police aggression and one of a U.S. Democratic senator purportedly claiming that Republicans should be barred from voting.
The quality of AI-generated content is now good enough that some academics worry unregulated AI will be used to “hack humans.”
Israeli cybersecurity firm Check Point says such warnings carry their own danger, given that AI is “still a long way from massively influencing our perception of reality and political discourse.”
“What is deteriorating is our trust in the public discourse, and unbalanced warnings might erode this trust further,” the firm’s Pinkas wrote.
Political scientists envision the possibility of politicians gaining power by using cheap, AI-generated content to target individual voters with personalized messages. “The winner would be the client of the more effective machine,” wrote Harvard professors Archon Fung and Lawrence Lessig earlier this year. The outcome would be elections that don’t necessarily reflect the will of voters.
“Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies,” they wrote.
But it’s also possible that citizens will respond to a rise in falsehoods by insisting more firmly on the truth, Pinkas said.
“In a reality where it is increasingly challenging to identify forgeries, the source and context become paramount. In such a situation, the credibility of a person and information source becomes increasingly crucial. When the truth is endangered, deterrence is created through a decreased tolerance for lies and deceivers.”
Traditional media, with their commitment to accuracy and to identity verification that distinguishes humans from bots, could also become more important going forward, he said.
Regardless, generative AI has yet to corrupt public discourse to the degree anticipated by worst-case scenarios. That puts anyone warning about its corrosive effects in the difficult position of ensuring that warnings made without proof don’t deepen existing distrust, Pinkas said.
“Just as the overstated focus on the vulnerabilities of voting machines might have inadvertently weakened democratic resilience by eroding public trust in voting mechanisms, we could be facing a similar peril with AI.”