Adversarial AI is the attacker's latest weapon; deep learning delivers the most resilient form of defense against these threats.

August 25, 2021

Cybercriminals are continuously ramping up their efforts with an ever-growing armory of new and improved weapons. One of the latest developments is adversarial artificial intelligence (AI). AI has been widely integrated into our everyday lives — ranging from domestic appliances to medical equipment — but the power of AI makes it extremely appealing to threat actors, and the recent emergence of adversarial AI proves it. Armed with tailored capabilities to bypass the victim's own machine learning (ML) defenses, adversarial AI is the cybercriminal's latest secret weapon — and one with potentially devastating consequences.

Silent Sabotage
Today's ML-based cybersecurity solutions train on prelabeled datasets to distinguish malicious threats from benign ones, enabling independent monitoring of networks for intrusions and incoming threats. This lets organizations defend against incoming cyberattacks without allocating additional human resources. One of the latest techniques employed by advanced threat actors is turning adversarial AI capabilities of their own against defenders' ML-based security solutions, tricking them into classifying incoming attacks as benign and granting the attackers free access and movement, virtually undetected. These attacks can be extremely difficult to detect, let alone prevent.
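
To make that workflow concrete, below is a minimal sketch of a supervised detection pipeline in Python with scikit-learn. The dataset, features, and model choice are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal sketch of the supervised-ML detection workflow described above.
# All data here is synthetic; real products use far richer telemetry and
# proprietary feature pipelines.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a prelabeled dataset: each row is an engineered feature
# vector for one file or event (e.g., size, entropy, API-call counts),
# with label 0 = benign, 1 = malicious.
X = rng.random((1000, 8))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # synthetic labeling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Once trained, the model scores new events without human involvement,
# which is the "independent monitoring" described above.
print("held-out accuracy:", clf.score(X_test, y_test))
```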

It's incredibly difficult to stop a threat that remains undetected until the damage is done. This technique has advanced well beyond the traditional smash-and-grab approach: the goal is to learn the behavior and decision boundary of an ML system and ultimately craft a successful bypass. Detecting such a breach requires an enormous amount of effort to sift through threat data for signs of compromise, and to make matters worse, few companies have resources to spare for this time-intensive task, so adversarial AI often goes unnoticed until it's too late. One recent example of this threat involved hackers using stochastic gradient descent, an algorithm traditionally applied when training machine learning models. AI-aware attackers now use similar or adapted algorithms to produce deceptive data that defenders' ML-based security models mislabel.
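
To illustrate the evasion idea, here is a hedged sketch of gradient-based adversarial crafting against a toy classifier. It assumes white-box access to the model for clarity (real attackers typically work against surrogate models), and every model and variable in it is hypothetical.

```python
# Sketch of gradient-based evasion: the attacker reuses the machinery that
# trains a model (gradients plus iterative descent) to perturb a malicious
# sample until the classifier scores it as benign. Toy model, not any
# vendor's; assumes white-box gradient access for illustration.
import torch

torch.manual_seed(0)

# Toy "defender": logistic regression over an 8-dim feature vector,
# outputting P(malicious).
model = torch.nn.Sequential(torch.nn.Linear(8, 1), torch.nn.Sigmoid())
for p in model.parameters():
    p.requires_grad_(False)  # the attacker perturbs only the input

x = torch.rand(1, 8)                    # features of a malicious sample
x_adv = x.clone().requires_grad_(True)

optimizer = torch.optim.SGD([x_adv], lr=0.1)
for _ in range(50):
    optimizer.zero_grad()
    score = model(x_adv).sum()          # P(malicious) for perturbed sample
    score.backward()                    # gradient of score w.r.t. the input
    optimizer.step()                    # nudge features toward "benign"

print(f"malicious score before: {model(x).item():.3f}, "
      f"after: {model(x_adv).item():.3f}")
```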

Fighting Back
So, how do businesses defend against something designed to deceive their best protection? Replacing the weak link is the best place to start. Given that some cybercriminals have adopted and advanced ML, a new form of defense is required. The answer: deep learning, which is designed to go a step beyond traditional ML capabilities. End-to-end deep learning models train on raw data, not on human-understood, easy-to-mutate engineered features. Accurate deep learning implementations (including ours) still rely on labeled datasets and supervised learning, but robust deep neural networks are far more computationally complex. These two advantages, raw-data training and model complexity, make deep learning models a much harder target for bypasses and adversarial sample crafting.
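
As an illustration of what "end-to-end on raw data" can look like, here is a generic raw-byte convolutional classifier in PyTorch, loosely in the spirit of published models such as MalConv. It is a sketch under stated assumptions (architecture sizes, input length), not Deep Instinct's proprietary design.

```python
# Generic sketch of an end-to-end model that classifies a file from its
# raw bytes, with no hand-engineered features. Sizes are arbitrary.
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    def __init__(self, max_len=4096):
        super().__init__()
        self.embed = nn.Embedding(256, 8)           # one vector per byte value
        self.conv = nn.Conv1d(8, 64, kernel_size=16, stride=8)
        self.head = nn.Sequential(
            nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(64, 1)
        )

    def forward(self, byte_ids):                    # (batch, max_len) ints 0-255
        x = self.embed(byte_ids).transpose(1, 2)    # (batch, 8, max_len)
        return self.head(torch.relu(self.conv(x)))  # malware logit (pre-sigmoid)

model = RawByteClassifier()
fake_files = torch.randint(0, 256, (2, 4096))       # two stand-in byte streams
print(model(fake_files).shape)                      # torch.Size([2, 1])
```

Because a model like this consumes bytes directly, an attacker can't simply mutate a handful of known features; any evasion has to survive the full learned representation.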

Deep learning is not a stand-alone silver bullet, and there are a significant number of threats businesses still need to be aware of, but it provides advanced defenses when added to an existing security stack. Before implementing the technology, organizations need to fully understand how it works so the system runs as effectively as possible. Getting key personnel accurate, detailed information on how deep learning works and how it is applied, ahead of deployment, is vital to its eventual success.

Paving a Path Forward
The modern cyberattacker is one of the biggest threats businesses face today. We believe that, with time, more attackers will be armed with the AI know-how to quickly and effectively attack and damage unprepared organizations. Additionally, it's only a matter of time before adversarial AI algorithms are packaged into frameworks that can be mass-marketed on the Dark Web. Once that happens, malicious AI will be far more widely accessible, not only to nation-state and tier-1 threat groups but also to common cybercriminals. It also raises the question: How long until deep learning itself is harnessed by threat actors? Given the stakes, it's extremely important that we address this battle of the AIs now, before the situation gets out of control.

Even though adversarial AI is on the rise, traditional ML implementations remain the security model of choice for thousands of companies, leaving them sitting ducks. Tackling this challenge by adopting a prevention-based solution that leverages deep learning will finally give organizations a fighting chance against this silent weapon. Not only will it prove more resilient against adversarial AI attacks, but it also requires minimal human input beyond the initial setup, leaving security teams more time to focus on other areas of the security stack and better prepare for future threats.

About the Author


Shimon N. Oren is Deep Instinct's VP of Research & Deep Learning. He has two decades of experience in cybersecurity research and operations, both offensive and defensive. Prior to joining Deep Instinct in 2016, Oren served for 15 years as an officer in the Israel Defense Forces' elite cyber Unit 8200. His background spans a wide range of cybersecurity and research positions in which he managed multifunctional teams of hackers, researchers, and engineers, while also working extensively with a variety of industry, defense, and intelligence partners and agencies in North America and Europe.
