Low-Impact Disinformation Campaigns Based in Russia, China, Iran, Israel
OpenAI said it disrupted five covert influence operations, including campaigns originating in China and Russia, that attempted to use its artificial intelligence services to manipulate public opinion amid elections.
The threat actors used AI models to generate short comments and longer articles in multiple languages, fabricate names and bios for social media accounts, conduct open-source research, debug code, and translate and proofread texts, the company said.
The operations do not appear to have had much impact on audience engagement or on the spread of manipulative messages, rating two on the Brookings Breakout Scale, which measures the impact of influence operations. A score of two on the low-to-high scale, which tops out at six, means the manipulative content appeared on several platforms but did not break out to reach a wide audience.
A separate recent report from the Alan Turing Institute on the electoral impact of AI-powered covert influence campaigns similarly found that AI has had a limited impact on outcomes, but that it also creates second-order risks such as polarization and the erosion of trust in online sources (see: UK Government Urged to Publish Guidance for Electoral AI).
The campaigns OpenAI discovered have been linked to two operations in Russia, one in China, one in Iran and a commercial company in Israel.
A Russian operation dubbed “Bad Grammar” used Telegram to target Ukraine, Moldova, the Baltic states and the United States. The other, called “Doppelganger,” posted content about Ukraine.
The Chinese threat actor Spamouflage praised China and slammed its critics, while the Iranian operation called “Union of Virtual Media” praised Iran and condemned Israel and the United States. The Israel-based operation was run by a private company, Stoic, which created content about the Gaza conflict and the Israeli trade union Histadrut.
OpenAI said it was able to identify the AI-supported influence operations due to a lack of due diligence by the threat actors. Bad Grammar gave itself away when its propaganda campaigners forgot to remove refusal messages “from our model, exposing their content as AI-generated.”
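The Bad Grammar slip-up shows how leftover model refusal boilerplate can expose machine-generated posts. The sketch below is a minimal, hypothetical illustration of that idea only, not OpenAI's actual detection method: the phrase list and sample posts are invented for the example.

```python
# Illustrative sketch: flag posts that contain leftover LLM refusal boilerplate,
# the kind of slip-up that exposed the "Bad Grammar" operation.
# The marker phrases and sample data below are hypothetical examples.

REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot assist with that request",
    "i'm sorry, but i can't help with",
]

def looks_like_leftover_refusal(post_text: str) -> bool:
    """Return True if the post appears to contain an unedited model refusal."""
    lowered = post_text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

if __name__ == "__main__":
    sample_posts = [
        "Great article, totally agree with the author!",
        "As an AI language model, I cannot create content that promotes...",
    ]
    for post in sample_posts:
        print(looks_like_leftover_refusal(post), "-", post[:60])
```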
Experts have consistently sounded warnings about AI-driven disinformation, and the U.S. federal government and industry players have attempted to get ahead of the threat with awareness campaigns and cross-industry partnerships.
Industry experts said they were surprised at how “weak and ineffective” the AI-driven disinformation campaigns turned out to be. “We all expected bad actors to use LLMs to boost their covert influence campaigns – none of us expected the first exposed AI-powered disinformation attempts to be this weak and ineffective,” said Thomas Rid, a professor at Johns Hopkins University’s School of Advanced International Studies.