In the right hands, artificial intelligence and machine learning can enrich our cyber defenses. In the wrong hands, they can create significant harm.

Andrey Shklyarov & Dmitry Vyrostkov, Chief Compliance Officer, DataArt / Chief Software Architect, Security Services, DataArt

July 19, 2021


Artificial intelligence (AI) and machine learning (ML) are now part of our everyday lives, and this includes cybersecurity. In the right hands, AI/ML can identify vulnerabilities and reduce incident response time. But in cybercriminals' hands, they can create significant harm.

Here are seven positive and seven negative ways AI/ML is impacting cybersecurity.

7 Positive Impacts of AI/ML in Cybersecurity

  • Fraud and Anomaly Detection: This is the most common way AI tools come to the rescue in cybersecurity. Composite AI fraud-detection engines show outstanding results in recognizing complicated scam patterns, and their advanced analytics dashboards provide comprehensive incident details. Fraud detection is among the most important applications in the broader field of anomaly detection (a minimal sketch of the approach appears after this list).

  • Email Spam Filters: Defensive rules filter out messages containing suspect words, while ML classifiers learn from labeled examples to identify dangerous email (see the second sketch after this list). Spam filters protect users and reduce the time spent sorting through unwanted correspondence.

  • Botnet Detection: Supervised and unsupervised ML algorithms not only facilitate detection but also prevent sophisticated bot attacks. They help identify user behavior patterns, surfacing attacks that would otherwise go undetected, with an extremely low false-positive rate.

  • Vulnerability Management: Managing vulnerabilities, whether manually or with technology tools, is difficult, but AI systems make it easier. AI tools analyze baseline user behavior, endpoints, servers, and even discussions on the Dark Web to identify code vulnerabilities and predict attacks.

  • Anti-malware: AI helps antivirus software distinguish good files from bad ones, making it possible to identify new forms of malware even if they have never been seen before. Replacing traditional techniques entirely with AI-based ones can speed detection, but it also increases false positives; combining traditional methods with AI yields the strongest detection rates.

  • Data-Leak Prevention: AI helps identify specific data types in text and non-text documents. Trainable classifiers can be taught to detect different types of sensitive information, and with appropriate recognition algorithms, these approaches extend to images, voice recordings, and video.

  • SIEM and SOAR: ML can enhance security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools, improving data automation and intelligence gathering, detecting suspicious behavior patterns, and automating responses based on the input.
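To ground the anomaly-detection item above, here is a minimal, hypothetical sketch of the core idea using scikit-learn's IsolationForest: fit a model on a baseline of normal activity, then flag records that deviate from it. The transaction features (amount, hour of day, merchant risk score) are illustrative assumptions, not a description of any production fraud engine.

```python
# Minimal anomaly-detection sketch, assuming scikit-learn and NumPy are installed.
# The features and values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of "normal" transactions: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.normal(50, 15, 1000),     # typical purchase amounts
    rng.normal(14, 3, 1000),      # mostly daytime activity
    rng.normal(0.2, 0.05, 1000),  # low-risk merchants
])

# Two suspicious transactions: large amounts, odd hours, risky merchants
suspicious = np.array([[900.0, 3.0, 0.9],
                       [1200.0, 4.0, 0.8]])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious))  # expected: [-1 -1]
```

A real fraud engine would combine many such detectors with rules and analyst feedback; the point here is only that a model trained on normal behavior can flag what it has never seen.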
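And for the spam-filter item, a similarly minimal sketch, again assuming scikit-learn: a bag-of-words representation feeding a naive Bayes classifier. The four training messages are toy data; production filters learn from millions of labeled examples.

```python
# Minimal learned spam filter: bag-of-words features + naive Bayes.
# The training messages are toy examples, not a real spam corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",
    "Urgent: verify your account password",
    "Lunch tomorrow at noon?",
    "Please review the attached project report",
]
labels = ["spam", "spam", "ham", "ham"]

# CountVectorizer turns each message into word counts;
# MultinomialNB learns which words are likelier in spam vs. ham.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Click here to claim your free prize"]))  # ['spam']
```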

AI/ML is also used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analytics, and most of the technology domains described in Gartner's Impact Radar for Security. In fact, it's hard to imagine a modern security tool without some kind of AI/ML magic in it.

7 Negative Impacts of AI/ML in Cybersecurity

  • Data Gathering: Cybercriminals use ML to build better victim profiles through social engineering and other techniques, then leverage that information to accelerate attacks. For example, in 2018, WordPress websites experienced massive ML-based botnet infections that granted hackers access to users' personal information.

  • Ransomware: Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories are numerous; one of the nastiest incidents led to Colonial Pipeline's six-day shutdown and a $4.4 million ransom payment.

  • Spam, Phishing, and Spear-Phishing: ML algorithms can create fake messages that look like real ones and aim to steal user credentials. In a Black Hat presentation, John Seymour and Philip Tully detailed how an ML algorithm produced viral tweets with fake phishing links that were four times more effective than human-created phishing messages.

  • Deepfakes: In voice phishing, scammers use ML-generated deepfake audio to mount more convincing attacks. Modern algorithms such as Baidu's "Deep Voice" require only a few seconds of someone's voice to reproduce their speech, accent, and tone.

  • Malware: ML can hide malware that tracks node and endpoint behavior and builds patterns mimicking legitimate traffic on a victim's network. It can also give malware a self-destruct mechanism that amplifies the speed of an attack. And because algorithms can be trained to extract data faster than any human could, such attacks are much harder to prevent.

  • Passwords and CAPTCHAs: Neural network-powered software has been shown to easily break human-verification systems such as CAPTCHAs. ML also enables cybercriminals to analyze vast password data sets and generate better password guesses. PassGAN, for example, uses an ML algorithm to guess passwords more accurately than popular password-cracking tools that rely on traditional techniques.

  • Attacking AI/ML Itself: Abusing the algorithms at the core of healthcare, military, and other high-value sectors could lead to disaster. The Berryville Institute of Machine Learning's Architectural Risk Analysis of Machine Learning Systems analyzes taxonomies of known attacks on ML and performs an architectural risk analysis of ML algorithms. Security engineers must learn to secure ML algorithms at every stage of their life cycle (a toy illustration of one such evasion attack follows this list).
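To make the last item concrete, the sketch below shows a classic evasion attack against a hypothetical linear "malware detector," written in plain NumPy. For a linear model, the gradient of the score with respect to the input is just the weight vector, so nudging each feature against the sign of its weight (the idea behind the fast gradient sign method) flips the verdict. The weights, feature values, and epsilon are all illustrative assumptions.

```python
# Toy evasion attack on a hypothetical linear classifier; all numbers are invented.
import numpy as np

# A toy linear "malware detector": score > 0 means "malicious"
weights = np.array([0.8, -0.5, 1.2, 0.3])
bias = -0.1

def score(x):
    return weights @ x + bias

sample = np.array([0.9, 0.1, 0.7, 0.4])
print(score(sample) > 0)  # True: flagged as malicious

# FGSM-style perturbation: move each feature against the gradient,
# which for a linear model is simply the weight vector itself.
epsilon = 0.6
adversarial = sample - epsilon * np.sign(weights)
print(score(adversarial) > 0)  # False: the same artifact now slips past the model
```

This is exactly the class of weakness the Berryville risk analysis urges engineers to account for: a model's decision boundary is an attack surface of its own.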

It is easy to understand why AI/ML is gaining so much attention. The only way to battle increasingly sophisticated, AI-assisted cyberattacks is to harness AI's potential for defense. The corporate world must recognize how powerful ML can be at detecting anomalies, for example in traffic patterns or human errors. With proper countermeasures, possible damage can be prevented or drastically reduced.

Overall, AI/ML has huge value for protecting against cyber threats, and some governments and companies are already using it, or discussing its use, to fight cybercriminals. While the privacy and ethical concerns around AI/ML are legitimate, governments must ensure that AI/ML regulations do not prevent businesses from using the technology for protection. Because, as we all know, cybercriminals do not follow regulations.

DataArt's Vadim Chakryan, Information Security Officer, and Eugene Kolker, Executive Vice President, Global Enterprise Services & Co-Director, AI/ML Center of Excellence, also contributed to this article.

About the Author(s)

Andrey Shklyarov & Dmitry Vyrostkov

Chief Compliance Officer, DataArt / Chief Software Architect, Security Services, DataArt

Andrey Shklyarov

Andrey joined DataArt in 2016 as Chief Compliance Officer. He has more than 25 years of experience in the IT industry. He began his career as a software developer and has played many roles. He has experience in managing projects, managing programs in the medical device industry, building quality and security management systems, overseeing agile adoption, managing a software delivery center, and running a corporate compliance program. Andrey holds a master’s degree in computer science from the Kharkiv National University of Radio Electronics.

Dmitry Vyrostkov

Dmitry Vyrostkov joined DataArt in 2006 as a software developer/team leader, contributing to projects with extensive security requirements. Dmitry has also worked as a technical architect, a solution architect, and a subject-matter expert on numerous enterprise projects, designing and building complex solutions in the finance, healthcare, and travel & hospitality sectors.

In 2012, Dmitry established DataArt's Security Competence, a team of security experts that consult with clients and help DataArt’s development teams implement best security practices. Dmitry promotes the group’s services to internal and external audiences. In 2019, the group generated over $1 million in security services revenue. Dmitry also coordinates sales activities, projects, and resources, and oversees service quality and deliverables.

Prior to joining DataArt, Dmitry worked as a developer and team leader at Relex, one of the leading software development companies in Voronezh. Dmitry holds an MS in Applied Mathematics, Informatics & Mechanics from Voronezh State University.

