Artificial Intelligence & Machine Learning, Cyberwarfare / Nation-State Attacks, Fraud Management & Cybercrime
Who Knew APT Hackers Liked Emojis So Much?

All the nation-state hackers are vibe coding.
Not all of them, exactly – but enough that the trend of using generative artificial intelligence to slap together functional code has visibly taken hold even among hackers with government backing.
One proponent of the vibeware approach appears to be a Pakistani threat group tracked as APT36, aka Transparent Tribe, which regularly targets Indian government entities and diplomats, says a Thursday report from Bitdefender.
The firm’s attribution of these strains of vibe-coded malware to APT36 isn’t ironclad, but is based on observing well-known APT36 tools, tactics and procedures. The Pakistani hackers used vibeware as a “hybrid” fallback alongside well-known tools such as the open-source Havoc command-and-control framework and a shellcode loader called Warcode.
The group’s vibeware won’t win any coding awards. It’s not pretty. It doesn’t target any zero-day vulnerabilities or known flaws in innovative new ways. Emojis abound. But that’s not the point.
The focus “is on the scalability of low sophisticated attacks,” said Martin Zugec, technical solutions architect at Bitdefender. While the “mediocre” code is essentially “disposable,” the point is that it’s a tool allowing attackers to breach organizations that don’t have robust defenses in place, he told Information Security Media Group.
One specific “strategic advantage” offered by vibeware is that it makes it easy for a coder to take logic written in a language they know and instruct a large language model to generate a version in a niche coding language defenders might not be monitoring, such as Nim, Zig or Crystal. Having digested numerous software development kit manuals, LLMs excel at ensuring this vibeware works with leading products and services.
The vibeware approach allows for polyglot malware to be generated at scale. “Let’s say that I have five implants, I just throw all of them at you and some of them are going to be automatically blocked. But let’s say that the Crystal one will get to you – because it’s an unusual one – if you don’t follow the security basics,” Zugec said.
The code doesn’t need to be sophisticated, or look pretty, so long as it works. “Everyone expects that APTs are always sophisticated, and they are not,” said Zugec, who characterizes APT36 not as part of the “very elite,” but rather “one of the more junior groups.”
The big defensive takeaway from the rise of vibe-coded APT malware: by doing the endpoint security and security operations center basics, as well as monitoring for the execution of unsigned binaries and unusual API calls – including to Discord, Slack and Google Sheets – organizations will quickly spot the signs of many campaigns, including this one, he said.
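As a rough illustration of that kind of monitoring, the sketch below triages strings extracted from an unsigned binary (e.g. via the `strings` utility) for references to legitimate services abused for command and control. The watchlist patterns are my own illustrative assumptions, not indicators published in Bitdefender's report:

```python
import re

# Hypothetical watchlist: legitimate services the article notes are abused
# for C2. Patterns are illustrative only, not vetted detection signatures.
SUSPECT_ENDPOINTS = [
    r"discord(?:app)?\.com/api/webhooks",  # Discord webhooks
    r"hooks\.slack\.com",                  # Slack incoming webhooks
    r"slack\.com/api",                     # Slack Web API
    r"sheets\.googleapis\.com",            # Google Sheets API
]

def flag_c2_indicators(strings):
    """Return the extracted strings that reference a watchlisted service.

    `strings` is assumed to come from running a strings-extraction tool
    against an unsigned binary; matching is case-insensitive.
    """
    pattern = re.compile("|".join(SUSPECT_ENDPOINTS), re.IGNORECASE)
    return [s for s in strings if pattern.search(s)]

sample = [
    "https://discordapp.com/api/webhooks/1234/abcd",
    "https://sheets.googleapis.com/v4/spreadsheets/xyz",
    "https://example.com/index.html",
]
print(flag_c2_indicators(sample))  # flags the Discord and Sheets URLs only
```

In practice this sort of check would feed an endpoint detection or SOC pipeline rather than run standalone; the point is that abuse of well-known SaaS endpoints leaves simple, matchable traces.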
LLMs Heart Emojis
APT36 is far from the only nation-state group relying on AI to produce code.
How do researchers know? One frequent tell that AI has been involved: the code contains emojis.
AI adoption for coding turns out to be a trend among multiple nation-state groups, including ones tied to Russia, China and North Korea (see: State Hackers Turn Google AI Into Attack Acceleration Tool).
Iran – before the United States and Israel unleashed a bombing campaign that has apparently quieted Tehran’s nation-state hackers, for now – has also gotten in on the AI-augmented coding action. Google’s Threat Intelligence team last November reported that Tehran-aligned MuddyWater was using Google’s Gemini GenAI tool not just to write phishing emails but also for “developing custom malware including web shells and a Python-based C2 server.”
Building on that finding, cybersecurity firm Group-IB reported on Feb. 20 that a fresh MuddyWater campaign involved a new Rust-based backdoor, codenamed Char, that is controlled via a Telegram bot and appears to have been built with the help of AI-enabled development tools.
“Specifically, we identified debug strings containing emojis – a trait rarely seen in human-authored code,” Group-IB said.
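That emoji tell can be approximated as a simple check over debug strings pulled from a sample. The Unicode ranges and example strings below are assumptions for illustration – they cover most common emoji but are not the heuristic Group-IB actually used:

```python
import re

# Unicode blocks covering most emoji: symbols/pictographs, dingbats and
# miscellaneous symbols, and regional indicator (flag) characters.
# Coverage is approximate; this is a sketch, not a vetted classifier.
EMOJI_RE = re.compile(
    "[\U0001F300-\U0001FAFF\u2600-\u27BF\U0001F1E6-\U0001F1FF]"
)

def emoji_ratio(debug_strings):
    """Fraction of debug strings containing at least one emoji character."""
    if not debug_strings:
        return 0.0
    hits = sum(1 for s in debug_strings if EMOJI_RE.search(s))
    return hits / len(debug_strings)

# Hypothetical debug output in the style the researchers describe.
logs = ["✅ Payload delivered", "🚀 Starting C2 loop", "read config failed"]
print(emoji_ratio(logs))  # 2 of the 3 strings contain an emoji
```

A high ratio on its own proves nothing – plenty of human-written hobby code uses emojis too – but as the researchers note, it is a rare trait in traditionally authored malware and so makes a useful secondary signal.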
Threat intelligence firm Ctrl-Alt-Intel on Wednesday reported seeing yet more evidence of AI-assisted code development, after infiltrating infrastructure used by MuddyWater for C2 communications between an attacker-controlled server and client-side malware on a victim’s system. In some of the output generated by the C2 server software, the firm said it saw emojis, which again strongly suggests AI-assisted code development.
Finding evidence of nation-state groups using LLMs “is not surprising, we’ll probably start seeing it more frequently,” a Ctrl-Alt-Intel researcher, who requested anonymity, told ISMG.
“The scripts for the C2 server weren’t sophisticated, but LLMs massively speed up the process of development,” the researcher said.
