WormGPT, a new generative AI tool similar to ChatGPT, is being used by cybercriminals to launch sophisticated business email compromise (BEC) and phishing attacks.
The cybercrime tool has reportedly been advertised on underground forums as an alternative to OpenAI’s ChatGPT designed for malicious use.
According to email security provider SlashNext, the developer of the application is selling access to it in popular hacker forums.
Exposing the application in a blog post, SlashNext said: “We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes.”
“Our team recently gained access to a tool known as ‘WormGPT’ through a prominent online forum that’s often associated with cybercrime,” SlashNext wrote.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.”
In screenshots viewed by Verdict and other publications, the application’s developer wrote: “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future.”
“Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” he added.
WormGPT is a generative AI module powered by GPT-J, an open-source language model released in 2021.
The model is understood to have been trained on large sets of malicious data, with a particular focus on malware-related material.
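For context, the GPT-J base model that WormGPT reportedly builds on is openly available. The short Python sketch below, which assumes the standard EleutherAI/gpt-j-6B checkpoint on Hugging Face and the transformers library, simply loads that public model and generates text from a benign prompt; it is illustrative only, is not WormGPT, and involves no malicious training data.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: load the publicly available GPT-J 6B base model.
# This is the open-source model WormGPT is reported to build on; nothing
# here replicates WormGPT or its malicious fine-tuning.
model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a short continuation from a benign prompt.
prompt = "Large language models can draft emails such as"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))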
According to SoSafe, which also gained access to the tool, WormGPT was able to “produce an email that was not only remarkably persuasive but also strategically cunning.”
SoSafe, a leading email security firm, recently published a survey revealing that “phishing emails written with AI are opened by 78% of people and are not recognised at first glance”.
“Initial studies have shown that AI can already write better phishing emails than humans,” Dr Niklas Hellemann, CEO and co-founder of SoSafe, told Verdict.
“Our data illustrates the consequences, with 1 in 5 people already falling for AI-created phishing attacks,” he added.
Hellemann said it is likely that cybercriminals will be able to build more customised tools like WormGPT as AI continues to evolve.
“This will take personalisation scaling to a new level, making these attacks even more dangerous,” Hellemann said.
The SoSafe co-founder said that companies need to keep up with the evolution of AI and “help people raise their awareness of cyber threats and the impact of new technologies to detect and report attacks.”