Report Also Flags Threats Linked to North Korea, Iran

Chinese influence operations are using artificial intelligence to carry out surveillance and disinformation campaigns, OpenAI said in its latest threat report.
The report details two major Chinese campaigns that misused AI tools, including OpenAI’s own models, to advance state-backed agendas.
One operation, dubbed “Peer Review,” involved building an AI-powered social media monitoring tool designed to track anti-China sentiment in Western countries. OpenAI researchers identified the campaign when they detected an individual using the company’s AI technology to debug code for the surveillance platform. OpenAI principal investigator Ben Nimmo said in a press briefing that this was the first instance of an AI-powered surveillance tool being exposed in this manner. Researchers believe the tool was built on Meta’s open-source AI model Llama.
The surveillance tool appears to generate real-time reports on protests and dissident activities, feeding intelligence back to Chinese security services. OpenAI banned accounts associated with the project, stating that such use violates its policies against AI-powered communications surveillance and unauthorized monitoring of individuals.
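OpenAI’s report does not publish the tool’s code, but the concept it describes is straightforward: an open-source language model is prompted to label scraped posts by their stance toward a topic, and the negative hits are rolled up into reports. The sketch below is purely illustrative and not drawn from the report; the model name, prompt, and helper function are hypothetical stand-ins, and any locally hosted instruction-tuned model could fill the same role.

```python
# Illustrative sketch only: how an open-source LLM could be repurposed as a
# stance classifier over social media posts. Nothing here comes from
# OpenAI's report; model name and prompt are hypothetical examples.
from transformers import pipeline

# Placeholder model: any locally hosted instruction-tuned model would do.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical choice
)

PROMPT = (
    "Classify the stance of the following post toward China as "
    "POSITIVE, NEGATIVE, or NEUTRAL. Answer with one word.\n\n"
    "Post: {post}\nStance:"
)

def classify_stance(post: str) -> str:
    """Return the model's one-word stance label for a single post."""
    out = generator(PROMPT.format(post=post), max_new_tokens=5)
    # The pipeline returns the prompt plus the continuation; keep only
    # the first word the model appended after "Stance:".
    return out[0]["generated_text"].split("Stance:")[-1].strip().split()[0]

# A monitoring tool would run this over a stream of scraped posts and
# aggregate the NEGATIVE hits into periodic reports; scraping and
# reporting are omitted here.
```

The point of the sketch is that the hard part of such a tool is not the AI: a few lines of glue around a freely downloadable model is enough to turn it into a surveillance component, which is why locally deployed open-source models are difficult to police.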
OpenAI additionally identified a campaign it labeled “Sponsored Discontent,” which weaponized AI to disseminate anti-U.S. narratives in Spanish-language media. The campaign generated and translated articles criticizing American society and politics, distributing them to Latin American news outlets, in some cases as sponsored content. It also involved automated English-language social media comments attacking Chinese dissident Cai Xia.
Nimmo said that this was the first known case of a Chinese influence operation systematically translating and publishing long-form articles in Spanish for Latin American audiences. Without OpenAI’s visibility into the use of its models, he said, it would have been difficult to connect the campaign’s social media activity with its broader efforts in news media.
OpenAI also flagged other AI-fueled cyberthreats, including scams and influence operations linked to North Korea and Iran, as well as election interference efforts in Ghana.
The report suggests that as open-source AI models grow more advanced and easier to deploy locally, detecting and countering such misuse will become more challenging. In the “Peer Review” case, OpenAI found references to ChatGPT, DeepSeek and Meta’s Llama 3.1, suggesting the operators were testing or combining multiple models to obfuscate their activities.