Artificial intelligence is a maturing area in cybersecurity, but there are different concerns depending on whether you're a defender or an attacker.

Oleg Brodt, R&D Director of Deutsche Telekom Innovation Labs, Israel, and Chief Innovation Officer for Cyber@Ben-Gurion University

July 12, 2021


The purpose of artificial intelligence (AI) is to create intelligent machines. It is used in multiple domains, including finance, manufacturing, logistics, retail, social media, healthcare, and increasingly, cybersecurity.

The current discourse about AI and cybersecurity often confuses the different perspectives, as if the intersection of the disciplines were monolithic and one-dimensional. Therefore, we need a common language for discussing the various and disparate intersections of AI and cybersecurity that clarifies the differences. I see three parts to the discussion: AI in the hands of defenders, AI in the hands of attackers, and adversarial AI.

AI in the Hands of Defenders
Machine learning (ML) is a subfield of AI that teaches computers to perform tasks by learning from examples rather than being explicitly programmed. Unsurprisingly, ML and its popular subbranch of deep learning (aka neural networks) are emerging as the main methods of developing cyber-defense solutions: Instead of providing a detection mechanism with predefined malware signatures, we can provide a data set of malicious and benign files and let the computer learn from them.

In simpler terms, machine learning algorithms analyze the differences and similarities between samples based on characteristics such as their content and how they interact with the operating system, and build a model of what malware files typically look like. Every new file is then compared against the model and classified as malicious or benign, typically with an associated probability. Naturally, these probabilistic solutions are far from perfect, both in failing to identify malicious behavior and in flagging benign behavior as malicious, which leads to alert fatigue.
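A minimal sketch of this idea, using invented feature vectors (entropy, import count, packer flag) and a nearest-centroid model rather than any real detector: the model "learns" the average profile of each class from labeled samples, then scores a new file by how much closer it sits to the malicious profile.

```python
import math

# Toy feature vectors: [entropy, number of imports, packer flag].
# All samples and features here are illustrative, not real malware data.
benign = [[4.1, 120, 0], [3.8, 95, 0], [4.5, 150, 0]]
malicious = [[7.2, 12, 1], [7.8, 8, 1], [6.9, 20, 1]]

def centroid(samples):
    # "Learn" a class profile: the per-feature average of its samples.
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def malicious_score(sample, benign_c, malicious_c):
    # Closer to the malicious centroid -> higher score.
    # A probability-like number in [0, 1], not a calibrated probability.
    d_b = distance(sample, benign_c)
    d_m = distance(sample, malicious_c)
    return d_b / (d_b + d_m)

b_c, m_c = centroid(benign), centroid(malicious)
unknown = [7.5, 10, 1]  # features extracted from a new, unlabeled file
score = malicious_score(unknown, b_c, m_c)
verdict = "malicious" if score > 0.5 else "benign"
```

Because the verdict rests on a threshold over a learned score, both failure modes from the text fall out naturally: a malicious file engineered to look statistically benign slips under the threshold, while an unusual benign file can cross it and generate a false alert.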

According to the latest M-Trends report, it takes 24 days, on average, to discover that a network has been compromised. This is a significant improvement from the average of 416 days it took blue teams to realize that an attacker was present in their network. Although we have made progress in defense, I suspect most of it can be attributed to the proliferation of ransomware attacks, where the attackers promptly expose themselves, thereby driving detection time down.

Since attackers remain fast and defenders remain slow, we have no choice but to delegate as many detection tasks to AI-based solutions as possible. Consequently, AI-based models are being integrated into a variety of security solutions, such as intrusion detection systems (IDS), endpoint detection and response (EDR), security information and event management (SIEM) alert prioritization, big data security analytics, and more. The main goal is to improve the performance of existing solutions, automate detection and investigation processes, and most importantly, increase detection speed by handing over tasks previously handled by human analysts.

AI in the Hands of Attackers
While AI-based technologies can improve cyber defense by creating a new generation of intelligent detection systems, they can also be misused in the hands of cyberattackers. A recent paper by Bruce Schneier emphasizes this point.

AI is, and will increasingly be, employed by cyberattackers to lower their costs and improve the effectiveness and stealth of their attacks. In fact, it is easier to justify attackers' use of AI from an economic point of view. While it is quite difficult to measure the ROI of an AI-based cyber-defense system, it is quite straightforward to measure the financial benefits for the attacker.

According to Verizon's "2021 Data Breach Investigations Report," financially motivated hacks continue to be most common — a whopping 90% of all incidents. These attacks have become commoditized, and attackers run their operations just like any other business, where the goal is to increase revenues and reduce costs. Since AI-based technologies can help with the latter, they will increasingly take hold within cybercrime groups.

We have already witnessed how AI can help with reconnaissance, including automated high-value target discovery and phishing; it can also help with intelligent software fuzzing, yielding faster discovery of vulnerable targets. We can also expect a steep rise in deepfake social engineering attacks powered by AI-based technologies, once the technology is mature enough.

In fact, attackers can analyze every stage of the cyber kill chain and explore integrating dedicated AI-based tools into each one. While defenders' ultimate goal would be complete automation of cyber defense, for attackers, it would be complete automation of attacks.

Adversarial AI
Just like any other technology, AI can itself be vulnerable, leading to additional avenues of exploitation and a new class of cyberattacks.

We have already seen how AI-based anti-spam solutions can be fooled by a single misspelling in an email. We have also witnessed that AI-based image-recognition systems can be fooled by a single pixel change. In fact, research suggests that AI-based systems can be fooled across the board, and the more sophisticated the solution, the easier it is to successfully attack it.

Most alarming, however, is that AI-based cyber defenses are similarly vulnerable. The same attack techniques that work against other AI-based systems can be applied against AI-based malware detectors, intrusion-detection systems, and other security tools. Academic research has already demonstrated how easily most such systems can be bypassed.

In the coming years, I expect we will witness a wave of attacks against AI-based systems. Currently, however, most chief information security officers (CISOs) are not paying enough attention to the security of AI-based systems. This must change before we realize — yet again — that we have delegated our most sensitive tasks to the most vulnerable systems.

About the Author(s)

Oleg Brodt

R&D Director of Deutsche Telekom Innovation Labs, Israel, and Chief Innovation Officer for Cyber@Ben-Gurion University

Oleg Brodt serves as the R&D Director of Deutsche Telekom Innovation Labs, Israel. He also serves as the Chief Innovation Officer for Cyber@Ben-Gurion University, an umbrella organization responsible for cybersecurity-related research at Ben Gurion University, Israel. Prior to joining DT Labs and Cyber@BGU, Oleg was an attorney specializing in technology and high tech and represented a broad spectrum of local and international clients.

Oleg is a veteran of an elite technological unit of the Israeli Defense Forces, and he is an author of several cybersecurity patents and research papers. In addition to CISSP, CCNP, Linux LFCA, and other technology certifications, Oleg holds bachelor's and master's degrees in international business law as well as a degree in business and management from the Inter-Disciplinary Center, Herzliya, Israel. Oleg serves as a member of the Israeli National Committee on Artificial Intelligence, Ethics, and Law, and is a member of the Israel Bar High-Tech committee.

