DeepSeek Comes Very Close to Producing a Keylogger and Ransomware

Security researchers used the Chinese DeepSeek-R1 artificial intelligence reasoning model to come close to producing working ransomware variants and keyloggers with evasion capabilities.
Researchers at Tenable cautioned that the findings don’t necessarily mark a new era of malware. DeepSeek-R1 can “create the basic structure for malware,” but it needs prompt engineering, and its output requires code editing. Still, even basic malware coding can give “someone with no prior experience in writing malicious code” the “ability to quickly familiarize themselves with the relevant concepts,” wrote Nick Miles, a Tenable staff research engineer.
DeepSeek initially balked at writing malware, but was willing to do so after being assured that generating malicious code would be for “educational purposes only.”
The R1 chain-of-thought process also showed that the model was aware that writing a hook procedure function – the most obvious way to intercept keystrokes on a Windows machine – would be detected by antivirus software. The model attempted to overcome this challenge by trying to “balance the usefulness of hooks and evading detection,” Miles said. It ultimately opted to use SetWindowsHookEx and log keystrokes in a hidden file.
“After a bit of back and forth with DeepSeek, it produced code for a keylogger that was somewhat buggy. We had to manually correct several issues with its code,” Miles said.
The outcome was four “show-stopping errors away from a fully functional keylogger,” he added.
The researcher similarly prompted R1 to generate ransomware code, which resulted in the model warning the user about the legal and ethical issues tied to malicious code generation.
Reassuring DeepSeek again of the user’s good intentions was enough to goad the model into generating ransomware samples. All of the samples needed to be manually edited in order to compile, “but we were able to get a few of them working.”
Based on his analysis, Miles said, Tenable believes “that DeepSeek is likely to fuel further development of malicious AI-generated code by cybercriminals in the near future.”