Cyber Extortion Campaign Automated Efforts to ‘Unprecedented’ Degree, Says AI Giant

Artificial intelligence giant Anthropic said it’s disrupted a cybercrime operation that tapped its large language models to an “unprecedented” extent to help automate a data theft and extortion campaign.
The cybercrime group, tracked as GTG-2002, has targeted at least 17 organizations across critical infrastructure sectors such as healthcare, emergency services and government agencies, as well as religious institutions, Anthropic said in a threat report. While the attackers haven’t deployed ransomware, they have attempted to shake down victims, in some cases demanding more than $500,000 in exchange for a promise to delete the stolen data.
Anthropic said the attack details show criminals are getting closer to fully automating their attacks, in part by using agentic tools such as its Claude Code, which is designed to help developers write code more quickly.
“Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks,” it said, which included scanning VPN endpoints for known vulnerabilities. Attackers also used Claude to advise them on which data to exfiltrate, how much to demand and how to generate unique, “visually alarming ransom notes that were displayed on victim machines.”
The AI firm doesn’t say whether the attackers managed to steal any sensitive data, or whether any of their shakedowns succeeded and, if so, how much they received in ransom payments.
Cybersecurity experts continue to track the evolution of AI tools toward the point at which they will allow even technically illiterate script kiddies to launch highly automated mass hack attacks with minimal effort. Researchers sometimes refer to this as “vibe hacking.” The phrase is a variation on vibe coding, jargon for having AI write usable code even when the user has no idea how or why the code works, and is happy to work around or ignore AI-generated bugs or glitches along the way (see: Vibe Hacking Not Yet Possible).
Anthropic said it has banned the accounts used by the attackers, shared technical data with authorities and added better safeguards to detect and block such activity. These include “a tailored classifier – an automated screening tool” as well as “a new detection method to help us discover activity like this as quickly as possible in the future,” it said in a blog post.
Extortionists aren’t the only ones attempting to turn AI tools to their criminal advantage.
Anthropic’s latest threat report also catalogs a bevy of other misuse cases, including North Korean hackers using Claude to fabricate professional resumes and help them ace coding tests, as part of long-running efforts by Pyongyang to infiltrate Western businesses and steal cryptocurrency (see: North Korea’s Hidden IT Workforce Exposed in New Report).
AI tools are also being tasked with writing ransomware. A U.K.-based cybercrime operation that Anthropic tracks as GTG-5004 used Claude Code to develop ransomware variants offering advanced evasion, encryption and anti-recovery features, and sold them through darknet forums for $400 to $1,200 apiece, it said.
Anthropic didn’t detail how this ransomware may have performed in real-world attacks, or the extent to which it’s being detected and blocked by existing security tools.
The company said it has also blocked Chinese attackers from using Claude to refine cyber operations targeting Vietnam, disrupted a Russian-speaking developer seeking to build stealthier malware and cracked down on a Spanish-speaking actor who advertised a Claude-enabled service for validating stolen credit cards.
While the success of these efforts remains difficult to measure, many attackers are clearly experimenting with using AI to enable their crime sprees.