The release of GPT-4 marks another major milestone in the advancement of generative artificial intelligence software, but experts believe it may have unleashed a whole new wave of cybercrime.
OpenAI's GPT-4 is the latest model behind the world's most popular chatbot, ChatGPT. Since its launch in November last year, the generative chatbot has attracted millions of users.
The new and improved model can process eight times more words at once and is able to pass a simulated law bar exam with a score in the top 10% of test takers. The previous model scored in the bottom 10%.
“With enhanced scope to emulate human-level performance in academic scenarios, the refinement of the technology is truly impressive,” Alexey Khitrov, founder of ID R&D, told Verdict, “but these improved capabilities also offer cybercriminals more ways to make their fraud operations more effective and scalable.”
Khitrov, whose biometric company focuses on making security authentication as safe and accessible as possible, believes GPT-4 is able to create more convincing written responses to use for fraud purposes.
“This will make it harder for targeted victims to discern whether they are interacting with human or machine, in real-time, and if they are being deceived by sophisticated phishing or vishing attempts,” Khitrov said.
“Especially as automated services are popularised and trained AI voice and video becomes more compelling,” he added.
Other experts also believe that GPT-4 has the power to “accelerate cybercrime” and “empower bad actors”.
“ChatGPT4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity,” Oded Vanunu, head of products vulnerability research at Check Point Software, told Verdict.
Vanunu, whose team spent the 24 hours after the launch of GPT-4 researching its cybercrime capabilities, claims it is able to overcome the technical challenges of developing malware.
Vanunu said: “What we’re seeing is that ChatGPT4 can serve both good and bad actors.
“Good actors can use ChatGPT to craft and stitch code that is useful to society; but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime.”
OpenAI has said it spent six months working on safety features for GPT-4 before its launch, but has reminded users that the model is still capable of delivering disinformation.
Verdict has contacted OpenAI for comment.
GlobalData is the parent company of Verdict and its sister publications.