Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

7/19/2021
10:00 AM

7 Ways AI and ML Are Helping and Hurting Cybersecurity

In the right hands, artificial intelligence and machine learning can enrich our cyber defenses. In the wrong hands, they can create significant harm.

Artificial intelligence (AI) and machine learning (ML) are now part of our everyday lives, and this includes cybersecurity. In the right hands, AI/ML can identify vulnerabilities and reduce incident response time. But in cybercriminals' hands, they can create significant harm.

Related Content:

Deepfakes Are on the Rise, but Don't Panic Just Yet

Special Report: Building the SOC of the Future

New From The Edge: Security 101: The 'PrintNightmare' Flaw

Here are seven positive and seven negative ways AI/ML is impacting cybersecurity. 

7 Positive Impacts of AI/ML in Cybersecurity

  • Fraud and Anomaly Detection: This is the most common way AI tools are coming to the rescue in cybersecurity. Composite AI fraud-detection engines are showing outstanding results in recognizing complicated scam patterns. Fraud detection systems' advanced analytics dashboards provide comprehensive details about incidents. This is an extremely important area within the general field of anomaly detection.
  • Email Spam Filters: ML-trained rules flag messages containing suspect words and patterns, identifying dangerous email before it reaches the inbox. Spam filters also save users the time it would take to sift through unwanted correspondence.
  • Botnet Detection: Supervised and unsupervised ML algorithms not only facilitate detection but also prevent sophisticated bot attacks. They also help identify user behavior patterns to discern undetected attacks with an extremely low false-positive rate.
  • Vulnerability Management: It can be difficult to manage vulnerabilities (manually or with technology tools), but AI systems make it easier. AI tools look for potential vulnerabilities by analyzing baseline user behavior, endpoints, servers, and even discussions on the Dark Web to identify code vulnerabilities and predict attacks.
  • Anti-malware: AI helps antivirus software distinguish good files from bad, making it possible to identify new forms of malware even if they've never been seen before. Fully replacing traditional techniques with AI-based ones can speed detection, but it also increases false positives; combining traditional methods with AI yields the strongest detection rates while keeping false positives in check.
  • Data-Leak Prevention: AI helps identify specific data types in text and non-text documents. Trainable classifiers can be taught to detect different sensitive information types. These AI approaches can search data in images, voice records, or video using appropriate recognition algorithms.
  • SIEM and SOAR: ML can enhance security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools, improving data automation and intelligence gathering, detecting suspicious behavior patterns, and automating responses based on the input.
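To illustrate the statistical idea behind spam filtering, here is a minimal naive Bayes sketch. The tiny labeled corpus is invented for illustration; a production filter would train on millions of labeled messages and far richer features:

```python
import math
from collections import Counter

# Tiny labeled corpus (hypothetical); real filters train on huge datasets.
SPAM = ["win free prize now", "free money click now"]
HAM = ["meeting agenda attached", "lunch tomorrow at noon"]

def train(docs):
    """Count word occurrences across a set of documents."""
    counts = Counter(word for doc in docs for word in doc.split())
    return counts, sum(counts.values())

def log_score(text, counts, total, vocab_size):
    # Log-likelihood with add-one (Laplace) smoothing for unseen words.
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in text.split())

spam_counts, spam_total = train(SPAM)
ham_counts, ham_total = train(HAM)
VOCAB = len(set(spam_counts) | set(ham_counts))

def is_spam(text):
    """Classify text by comparing its likelihood under each class."""
    return (log_score(text, spam_counts, spam_total, VOCAB)
            > log_score(text, ham_counts, ham_total, VOCAB))

print(is_spam("free prize now"))   # True
print(is_spam("meeting agenda"))   # False
```

The same likelihood-comparison principle scales up to the commercial filters described above, which simply learn from vastly larger corpora.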

AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analytics, and most technology domains described in Gartner's Impact Radar for Security. In fact, it's hard to imagine a modern security tool without some kind of AI/ML magic in it.
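The anomaly-detection principle underlying many of these tools can be sketched with a robust statistic, the median absolute deviation. The traffic numbers below are invented, and real systems use learned models over far richer features:

```python
import statistics

def detect_anomalies(samples, threshold=3.5):
    """Flag values whose modified z-score (based on the median absolute
    deviation, which is robust to outliers) exceeds `threshold`."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    if mad == 0:
        return []
    return [x for x in samples if 0.6745 * abs(x - med) / mad > threshold]

# Hourly request counts (invented); the spike stands out from the baseline.
traffic = [100, 110, 95, 105, 98, 102, 9500, 101, 99, 103]
print(detect_anomalies(traffic))  # [9500]
```

Using the median rather than the mean keeps the baseline itself from being distorted by the very outliers the detector is trying to find.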

7 Negative Impacts of AI/ML in Cybersecurity

  • Data Gathering: Through social engineering and other techniques, ML is used for better victim profiling, and cybercriminals leverage this information to accelerate attacks. For example, in 2018, WordPress websites experienced massive ML-based botnet infections that granted hackers access to users' personal information.
  • Ransomware: Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories are numerous; one of the nastiest incidents led to Colonial Pipeline's six-day shutdown and $4.4 million ransom payment.
  • Spam, Phishing, and Spear-Phishing: ML algorithms can create fake messages that look like real ones and aim to steal user credentials. In a Black Hat presentation, John Seymour and Philip Tully detailed how an ML algorithm produced spear-phishing tweets with malicious links that were four times more effective than human-created phishing messages.
  • Deepfakes: In voice phishing, scammers use ML-generated deepfake audio technology to create more successful attacks. Modern algorithms such as Baidu's "Deep Voice" require only a few seconds of someone's voice to reproduce their speech, accents, and tones.
  • Malware: ML can be used to conceal malware that monitors node and endpoint behavior and generates traffic patterns mimicking legitimate network traffic on a victim's network. It can also add a self-destruct mechanism to malware that amplifies the speed of an attack. Algorithms trained to extract data faster than any human could make such attacks much harder to prevent.
  • Passwords and CAPTCHAs: Neural network-powered software can defeat CAPTCHAs and other human-verification systems. ML also enables cybercriminals to analyze vast password data sets and make better-targeted password guesses. For example, PassGAN uses an ML algorithm to guess passwords more accurately than popular password-cracking tools that rely on traditional techniques.
  • Attacking AI/ML Itself: Abusing algorithms that work at the core of healthcare, military, and other high-value sectors could lead to disaster. Berryville Institute of Machine Learning's Architectural Risk Analysis of Machine Learning Systems helps analyze taxonomies of known attacks on ML and performs an architectural risk analysis of ML algorithms. Security engineers must learn how to secure ML algorithms at every stage of their life cycle.
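To see why automated password guessing is so effective, consider how even a small hand-coded substitution table explodes a wordlist. Tools like PassGAN learn such transformation patterns from leaked password corpora rather than hard-coding them; the table below is a simplified assumption for illustration:

```python
from itertools import product

# Common character substitutions attackers apply to dictionary words
# (a simplified, hand-picked table; real tools learn these from data).
SUBS = {"a": "a@", "e": "e3", "o": "o0", "s": "s$"}

def mangle(word):
    """Return every variant of `word` under the substitution table."""
    pools = [SUBS.get(c, c) for c in word.lower()]
    return {"".join(combo) for combo in product(*pools)}

guesses = mangle("pass")
# "pass": 1 choice for "p", 2 each for "a", "s", "s" -> 8 variants
print(len(guesses))        # 8
print("p@$$" in guesses)   # True
```

With only four substitution rules, a single word yields eight candidates; a learned model over millions of leaked passwords multiplies the search space far beyond what manual rule-writing could cover.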

It is easy to understand why AI/ML is gaining so much attention. Battling increasingly devious cyberattacks requires harnessing AI's potential for defense. The corporate world should take note of how powerful ML can be at detecting anomalies, for example in traffic patterns or human errors. With proper countermeasures, potential damage can be prevented or drastically reduced.

Overall, AI/ML has huge value for protecting against cyber threats. Some governments and companies are using or discussing using AI/ML to fight cybercriminals. While the privacy and ethical concerns around AI/ML are legitimate, governments must ensure that AI/ML regulations won't prevent businesses from using AI/ML for protection. Because, as we all know, cybercriminals do not follow regulations.


DataArt's Vadim Chakryan, Information Security Officer, and Eugene Kolker, Executive Vice President, Global Enterprise Services & Co-Director, AI/ML Center of Excellence, also contributed to this article.

Andrey joined DataArt in 2016 as Chief Compliance Officer. He has more than 25 years of experience in the IT industry. He began his career as a software developer and has played many roles. He has experience in managing projects, managing programs in the medical device ...