Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
Chatbots From OpenAI, Meta and 8 Others Use Disinformation Network as Source
Popular artificial intelligence chatbots are rife with Russian disinformation, warns NewsGuard, the rating system for news and information websites.
Researchers at NewsGuard entered prompts into 10 chatbots, including OpenAI’s ChatGPT-4, Elon Musk’s Grok and Mistral’s chatbot. About one-third of the responses contained disinformation culled from a network of fake local news sites and YouTube videos created by John Mark Dougan, a U.S. fugitive who obtained political asylum in Russia.
Microsoft’s Copilot, Meta AI, Anthropic’s Claude and Google’s Gemini were also part of the study.
The company tested nearly 600 prompts based on 19 false narratives linked to the Russian disinformation network, such as false claims about corruption by Ukrainian President Volodymyr Zelenskyy.
The chatbots repeated misinformation found on Dougan’s sites as fact, such as a supposed wiretap discovered at former President Donald Trump’s Mar-a-Lago residence, NewsGuard said.
The chatbots failed to recognize that sites such as “The Boston Times” or “The Houston Post” are Russian propaganda fronts – likely created with the assistance of AI. “This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms,” NewsGuard said.
The company said it did not score each chatbot for the amount of disinformation it pushed, since the issue was “pervasive across the entire AI industry rather than specific to a certain large language model.”
The findings come at a time when people have begun to rely on sources such as social media influencers and AI chatbots for quick, customized information.
AI disinformation has been rife this election year, as bad actors weaponize the technology to generate video and audio deepfakes to spread misinformation (see: APT Hacks and AI-Altered Leaks Pose Biggest Election Threats).
Social media companies and AI giants have pledged to curb misuse of the technology to propagate false information that could influence elections. OpenAI recently found that threat actors conducting covert influence campaigns also relied on AI chatbots.