AI has long been used by cybersecurity experts for threat detection, incident response, predictive threat intelligence, behavioural analytics, and adaptive learning. Its ability to analyse large datasets quickly, scan historical data, and provide threat assessments on demand is why AI has been a cybersecurity expert’s best friend. At least, that’s what we thought.
Since the introduction of generative AI (genAI) tools such as GPT-4 in 2023, the cybersecurity community has been alarmed by the potential for cybercriminals to misuse and manipulate these tools. With a lax AI regulatory landscape in countries such as the US, those fears have only intensified.
How will cybercriminals use genAI?
GenAI tools, built on large language models (LLMs), allow users to generate text, images, and now videos from short, sharp prompts, giving ordinary people access to a wide range of educational and work-related benefits. However, one area that has yet to be explored and contained is the overarching threat of cybercriminals using generative AI tools as weapons of attack.
In the future, hacking groups will likely use LLMs trained on malware to target their attacks more effectively. Elsewhere, genAI will be used to strengthen phishing attacks by eliminating the telltale signs of fake messages such as poor grammar and spelling mistakes. AI-powered cyberattacks can leverage AI or machine learning (ML) algorithms and techniques to automate, accelerate, or enhance various phases of a cyberattack.
With AI, hackers can conduct attacks and adapt their techniques to avoid detection and bypass security controls. AI can also be used to gather immense amounts of real-time data, allowing hackers to generate customised material for their intended purpose.
In 2024, Check Point Research identified a 275% increase in AI-powered malware attacks compared to 2023. This rise in sophisticated malware designed to bypass traditional defences indicates the growing threat of AI-powered cybercrime.
How can we strengthen our defence against AI-led attacks?
Learning how to counter AI-led attacks will take time, and cybersecurity vendors and users will face a bumpy ride over the next few years. Using AI to detect genAI-enabled attacks will be the prime tactic in the cybersecurity landscape. To offset cyber threats, companies will need to integrate AI-backed threat detection software into their IT systems so they can respond immediately. The key to solving this large and complex security issue is to fully understand AI’s capabilities and the risk it poses, and then generate a regulatory response at an international level.
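To make the idea of “AI-backed threat detection” concrete, the sketch below shows one common building block: an unsupervised anomaly detector trained on baseline activity and used to flag unusual network connections. It uses scikit-learn’s IsolationForest; the feature set, simulated data, and contamination rate are purely illustrative assumptions, not a description of any vendor’s product.

```python
# Minimal, illustrative sketch of anomaly-based threat detection.
# Assumption: connections are summarised as [bytes_sent, bytes_received, duration_seconds].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic baseline (hypothetical values)
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 5_000, 10],
                            size=(1_000, 3))

# Two hypothetical suspicious connections
suspicious = np.array([
    [500_000, 1_000, 2],   # exfiltration-like large upload
    [100, 900_000, 1],     # unusually large, fast download
])

# Fit the detector on the baseline, then score new connections
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

labels = model.predict(suspicious)  # -1 = anomaly, 1 = normal
for row, label in zip(suspicious, labels):
    print(row, "flagged" if label == -1 else "normal")
```

In a real deployment this kind of model would sit inside a broader detection pipeline, continuously retrained on fresh telemetry and combined with rule-based and signature-based controls rather than replacing them.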
Cybersecurity firm Palo Alto Networks believes that “the more we learn about LLMs that allow us to improve our security posture, the better the likelihood we will be ahead of the curve (and our adversaries) in getting the most out of AI”.
We can expect more cybersecurity platforms to focus on improving their abilities to detect genAI-led cyberattacks by acquiring niche AI security defence tools. For example, Cisco’s $28bn acquisition of Splunk will be a catalyst for AI-led cybersecurity M&A deals in 2024 and beyond.