Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
WormGPT 4 Sells for $50 Monthly, While KawaiiGPT Goes Open Source

The cybercrime-as-a-service model has a new product line: malicious large language models built without ethical guardrails, selling on Telegram for $50 monthly or distributed free on GitHub. Sellers tout capabilities such as functional ransomware code with AES-256 encryption and Tor-based exfiltration generated within 30 seconds, or a Python script for SSH lateral movement with remote shell access produced in under a minute.
Security researchers at Palo Alto Networks’ Unit 42 analyzed two such tools, WormGPT 4 and KawaiiGPT, to show how purpose-built offensive LLMs are shifting from theoretical threat to commercialized reality, complete with subscription tiers, active user communities and functional attack code generation.
Threat actors are adopting artificial intelligence to reduce the time spent developing attack vectors and to improve the quality of attacks, Andy Piazza, senior director of threat research at Unit 42, told Information Security Media Group. Applications range from generating spear-phishing emails and lure images to dynamic payload generation executed in real time.
Unit 42 tested both models. WormGPT 4 instantly generated a functional PowerShell script for PDF encryption with AES-256, with configurable file extensions and search paths defaulting to the C: drive, plus optional Tor-based data exfiltration. The model also produces ransom notes specifying military-grade encryption and 72-hour payment deadlines with doubling fees.
KawaiiGPT, despite its deliberately casual interface language, demonstrates parallel capabilities. Prompts produce credential-harvesting emails with professional formatting and subject lines such as "Urgent: Verify Your Account Information." Lateral movement requests generate Python scripts that use paramiko for SSH authentication and remote shell access. Data exfiltration prompts generate code that recursively searches every folder and subfolder for EML files - the standard format for saved emails - and then uses Python's built-in email-sending module, smtplib, to forward those files to an attacker.
WormGPT 4 runs commercial operations with clear pricing and dedicated marketing, while KawaiiGPT builds its user base through free distribution and community engagement. "They target two different types of users, and none of them are necessarily riskier than the other," Piazza said, adding that the company expects both toolsets to find more users.
The original WormGPT emerged in July 2023, reportedly built on the open-source GPT-J 6B model and fine-tuned with datasets containing malware code, exploit write-ups and phishing templates. The creator shut down the project in mid-2023, but successors proliferated. WormGPT 4 began sales campaigns around September last year on underground forums including DarknetArmy (see: Hackers Developing Malicious LLMs After WormGPT Falls Flat).
KawaiiGPT appeared in July this year and is now at version 2.5, with over 500 registered users and an active 180-member Telegram community.
"It's important to understand that generative AI does not create anything from scratch. Rather, it's just a complicated copy/paste capability that's trained on known datasets. That means that the code it generates is based on known samples that are often easily detected by current security stacks," Piazza said. "On the other hand, humans have the capability to identify new obfuscation techniques and attack paths."
That limitation matters less than it might appear, because volume and speed shift the defender's calculus. Organizations built detection strategies around poor grammar and sloppy code as indicators of malicious activity, but generative AI is erasing those indicators. Its linguistic precision and code fluency produce output that passes automated filters and human review at rates exceeding traditional attacks.
Pricing for WormGPT 4 runs $50 monthly, $175 annually or $220 for lifetime access. KawaiiGPT takes the open-source route: it is available on GitHub and can be configured on standard Linux systems in under five minutes. Both models remove the ethical guardrails present in commercial LLMs.
