Malicious actors are using OpenAI’s ChatGPT to build malware, dark web sites and other tools for carrying out cyber attacks, research by threat intelligence company Check Point Research has found.
While OpenAI has placed restrictions on how the artificial intelligence (AI)-powered chatbot can be used, including a ban on using it to create malware, posts on a dark web hacking forum reveal that these restrictions can be worked around. One user alludes to this by saying that “there’s still work around”, while another said “the key to getting it to create what you want is by specifying what the program should do and what steps should be taken, consider it like writing pseudo-code for your comp[uter] sci[ence] class.”
Screenshot provided by Check Point Research
Using this method, the user said they had been able to create a “python file stealer that searches for common file types” that can self-delete after the files are uploaded or if any errors occur while the program is running, “therefore removing any evidence”.
Screenshot provided by Check Point Research
Another user described being able to use ChatGPT to create a dark web marketplace script. Dark web marketplaces are used in a number of ways, including selling personal information obtained in data breaches, illegally obtained payment card information and cyber crime-as-a-service products.
Many more users have posted to the forum, touting ChatGPT as a way to “make money”, with claims that it can earn users more than US$1,000 per day. According to Forbes, these methods include using ChatGPT to pose as young women in order to carry out social engineering attacks on vulnerable targets.
Screenshot provided by Check Point Research
Cyber security experts told Cyber Security Hub that they predicted a top threat to cyber security in 2023 would be crime-as-a-service: platforms where malicious actors offer their services to those who would otherwise be unable to carry out cyber attacks. With ChatGPT able to expedite the creation of malware for free, this could make crime-as-a-service even more lucrative for cyber criminals.
Adam Levin, cyber security expert and host of cybercrime podcast What the Hack with Adam Levin, explains that malicious actors being able to create “increasingly sophisticated software” and sell this software as-a-service is dangerous as it “allows anyone, regardless how tech savvy, to conduct phishing, ransomware, distributed denial of service and other cyber attacks”.
Levin predicts that throughout 2023, “criminal software enterprises will continue to threaten enterprises of any size”. Furthermore, he says the cyber crime syndicates behind current as-a-service platforms are set to grow over the next 12 months as “they can make more money enabling entry-level cyber criminals to commit crimes than they can directly targeting victims and with less risk”.
However, Levin says that these types of attacks can be mitigated through the use of multifactor authentication, the implementation of zero-trust architecture, and regular cyber security training and penetration testing.