Nearly 71% of AI detectors fail to detect phishing emails generated by AI chatbots, according to a report released today (2 Oct) by cybersecurity company Egress.
The report also found that missed voicemails were the most common phishing lure, featuring in over 18% of phishing attacks. HMRC impersonation, security software impersonation and fake Meta/Salesforce ads were also among the top lures used in 2023.
In its report, Egress stated that the public release of ChatGPT in November 2022 sparked concern within the cybersecurity industry, with around 72% of industry leaders saying they were concerned about the use of large language models (LLMs) in phishing.
Egress explains that LLMs lower the barrier to entry for attackers and can make phishing emails seem more realistic by eliminating telltale errors and crafting requests that appear commonplace.
Furthermore, LLMs allow attackers to generate a far higher volume of emails almost instantly, enabling more widespread attacks than ever before.
Instances of social engineering within phishing emails have also risen steadily over the last three years, according to Egress. Whilst around 7% of phishing emails relied upon social engineering in 2020, that figure had risen to 19% of all phishing emails by 2023.
Egress vice president of Threat Intelligence, Jack Chapman, reiterates this concern around the growing role of LLMs in phishing.
“Without a doubt chatbots or large language models lower the barrier for entry to cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable coders could not produce alone,” Chapman said.
“Within seconds a chatbot can scrape the internet for open-source information about a chosen target that can be leveraged as a pretext for social engineering campaigns, which are growing increasingly common,” he continued.
Egress’ concerns about LLMs and generative AI were also echoed by cybersecurity company Mandiant in August 2023.
Since Mandiant’s initial report, the use of LLM technology in social engineering appears to have risen further.
Despite this, Egress reminds businesses that phishing itself, rather than the LLMs behind it, should be their main concern.
“It doesn’t matter whether an attack was written by a human or a bot,” Egress’ report reads, “it only matters whether your defences can detect it.”
According to research analyst GlobalData, the total global market for conversational AI platforms will be worth $336bn by 2030.
As LLMs become more commonplace within businesses, and across the wider internet, their use in social engineering attacks may also become more frequent. The need for AI-detection software will grow alongside advancements in conversational AI.