Hackers are using AI to attack and exploit computers and devices to ever greater effect, but adversarial AI pits artificial intelligence against itself in our defence.
“AI has made an enormous amount of progress in the last ten years, tonnes of use cases, medical, driving, object recognition, all of that, but the point is that in none of these areas do we have a true adversary,” says Rajarshi Gupta, head of AI at cybersecurity firm Avast. “Whenever you’re trying to detect something, that thing is not trying to evade you.”
In none of the most common AI use cases, including detecting an object on an image, in self-driving cars, cancer imaging or translation, is there an adversary.
“The tumour is not deliberately trying to hide; the roads are not deliberately trying to move.
“Security is the only area of AI where we have a true adversary, and we have a true adversary who has the economic incentive to try and hide from our algorithms, everywhere.”
Using AI to evade detection
Gupta describes how Avast is using AI to catch cyberattacks in action.
“We’re using AI to be much better and much faster at catching these things which are trying to evade us, whatever techniques possible. One of those subsets of techniques is to actually use AI to evade detection, that is the DeepAttack concept.”
One of the most visible forms of AI used to evade detection is the manipulation of video or images.
“But that is not AI versus AI, it’s AI versus human eyes,” says Gupta. “When you manipulate a video and put Nicolas Cage’s face on something, you’re just trying to fool the human eye looking at a screen, and that’s a much easier hurdle than to fool the algorithms.”
Coming into the cybersecurity arena, Gupta says: “You can build a video, you can do whatever you want, but if you want to spread it out, the video must be placed on an internet domain, a URL.
“Getting hold of a domain is very important and those domains are very short-lived. They put up a domain, they share something, they get caught, they move. And this is for everything and done at scale, a malware delivery domain will change a hundred times in a day.
“So the challenge here is that they need to find good domain names, so they use domain-generation algorithms, or DGAs, and we have DGA-detection algorithms to find those domains.”
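The rendezvous trick Gupta describes can be sketched in a few lines. This is an illustrative toy, not any real malware family's algorithm: a shared seed and the current date are hashed to produce a deterministic list of candidate domains, so a bot and its operator can both compute today's domains without ever hard-coding one. The seed string and domain format here are invented for the example.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Toy DGA: derive deterministic pseudo-random domain names from a
    shared seed and the date. Both the malware and its operator run the
    same function, so they agree on today's rendezvous domains without
    hard-coding a single, easily blocked address."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")  # gibberish-looking label
    return domains

# Same seed and date always yield the same list; a new day yields new domains.
print(generate_domains("examplebotnet", date(2020, 1, 1)))
```

Because the operator only has to register one of the day's candidates while defenders would have to predict or block all of them, this is cheap for attackers and explains why, as Gupta notes, such domains are short-lived and rotated at scale.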
“In terms of scale this is huge, every bad person needs to do this,” says Gupta.
Absolute war of internet domains
A malware author could create a domain, such as 18685g.something, explains Gupta, but security can easily spot that this is not a normal domain name. However, second-generation domain-generation algorithms learned and improved: they became smarter and started producing dictionary-based domain names such as blueskywater.com. But then the security side’s algorithms became better at detecting them.
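The second-generation approach, and one naive way a defender might counter it, can be sketched as follows. The word list and the word-splitting heuristic are invented for illustration, not Avast's actual detector: the generator glues dictionary words together to look human-chosen, and the detector checks whether a label decomposes entirely into known words, which catches dictionary-based names that a gibberish-oriented check would pass.

```python
import random

WORDS = ["blue", "sky", "water", "green", "cloud", "river"]  # toy dictionary

def dictionary_dga(seed: int, count: int = 3) -> list[str]:
    """Second-generation-style DGA: concatenate dictionary words so the
    result looks like a plausible, human-registered domain
    (e.g. blueskywater.com) rather than random gibberish."""
    rng = random.Random(seed)
    return ["".join(rng.sample(WORDS, 3)) + ".com" for _ in range(count)]

def splits_into_words(label: str, words: set[str]) -> bool:
    """Naive defender heuristic: does the label decompose entirely into
    known dictionary words? Flags dictionary-based DGA output that an
    entropy check tuned for gibberish like '18685g' would miss."""
    if not label:
        return True
    return any(label.startswith(w) and splits_into_words(label[len(w):], words)
               for w in words)

for name in dictionary_dga(42):
    print(name, splits_into_words(name[:-len(".com")], set(WORDS)))
```

Each side's next move follows the same pattern: once defenders key on dictionary decomposition, generators mix in brand-like keywords, as Gupta describes next, and detectors must adapt again.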
So now, says Gupta, they’ve moved on to using “interesting keywords like CNN, bank and things in the domain name to try and evade it”.
“It’s an absolute war of the DGA. The generation algorithm used by the malware authors keeps getting better and our detectors keep getting better.”
Like any competitive situation, adversaries in cybersecurity improve in step with each other. The first few punches from hackers might land, but with AI, security teams are getting better at blocking them.