
Threat Intelligence


AI and Machine Learning: Breaking Down Buzzwords

Security experts break down two of today's trendiest technologies to explain what they mean and where you need them.

Artificial intelligence and machine learning are marketed as security game-changers. The terms appear everywhere on systems and services, promising to catch threats and strengthen defenses.

"One of the problems in our industry is people tend to throw around buzzwords in the hopes of differentiating their marketing," says Jon Oltsik, senior principal analyst at ESG. "But all it does is confuse the market."

The two technologies have legitimate purpose and potential, but they are used so often, and so interchangeably, that it can be hard to make sense of them. What's the difference between them? Where should they be used? And how should you shop for them to get real value?

AI vs. ML: What They Really Mean

Machine learning is a segment of artificial intelligence, explains Roselle Safran, president at Rosint Labs. Artificial intelligence, a more general concept that has been around for decades, describes machines that think like people. One of its many applications is machine learning, in which a machine examines vast amounts of data and, from that data, learns what something is.

"There are products that have some pretty impressive capabilities but they're not necessarily capabilities where the product is learning," she explains. "It's often useful from a marketing perspective to slap the label on because then it checks the buzzwords off."

Machine learning has two components, Safran continues. One is a massive volume of training data; the other is a "feedback loop" that informs decisions. Based on the product, a machine learning system will look at volumes of data to determine whether its decisions are correct.
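The two components Safran describes can be sketched with a toy perceptron: labeled training data, plus a feedback loop that corrects the model whenever its decision turns out to be wrong. The features, labels, and counts below are invented purely for illustration; real security products train on vastly more data.

```python
# Minimal sketch of the two components: (1) labeled training data and
# (2) a feedback loop that nudges the model whenever its decision is wrong.

def train(samples, labels, epochs=20, lr=0.1):
    """Learn weights by checking each decision and correcting errors."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                          # feedback loop:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y                        # nudge toward the right answer
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical features: [suspicious-link count, urgency words, known sender]
data   = [[3, 2, 0], [0, 0, 1], [4, 3, 0], [1, 0, 1]]
labels = [1, -1, 1, -1]                            # +1 = phishing, -1 = benign

w, b = train(data, labels)
print(predict(w, b, [5, 4, 0]))                    # prints 1: flagged as phishing
```

The feedback loop here is literal: every misclassified sample pushes the weights toward the correct answer, which is the "is my decision correct?" check Safran describes, reduced to a few lines.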

"Large organizations are starting to experiment with artificial intelligence and machine learning," says Oltsik. "They don't have a deep understanding of the concepts and models, but nor do they want to," he says. "What they care about is how effective it is, and does it improve their existing technology and processes."

However, he continues, security leaders should know enough to determine where these technologies can be applied and how to choose one system over another.

Where They Fit in Your Security Strategy

"There are a couple of unique problem sets in security that are right for machine learning, and right for different kinds of solutions," explains Ryan LaSalle, global managing director for Growth & Strategy at Accenture Security. He describes security as a "graph problem" because it's a way of storing lots of data and everything is relationship-driven.

People have a hard time visualizing data as a graph, he continues, but machines excel because they thrive on large volumes of data. The key is to pick scenarios where the machine has an advantage over people; for example, observing human behaviors and detecting anomalies.
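LaSalle's "graph problem" framing can be illustrated with a few lines of code: store security telemetry as entities and relationships, then walk the graph to connect an alert back to the users it touches. All of the entity names below are invented for the sketch.

```python
# Toy relationship graph: users connect to machines, machines to files.
from collections import deque

edges = {
    "alice": ["laptop-7"],
    "laptop-7": ["malware.exe", "fileserver"],
    "bob": ["fileserver"],
    "malware.exe": [],
    "fileserver": [],
}

def reachable(graph, start):
    """Breadth-first walk: return everything connected to `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in graph.get(queue.popleft(), []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

print("malware.exe" in reachable(edges, "alice"))  # True: her laptop ran it
print("malware.exe" in reachable(edges, "bob"))    # False: he only hit the server
```

Even this trivial traversal shows why the representation matters: the question "who is exposed to this malicious file?" becomes a graph walk rather than a manual log hunt, which is exactly the kind of relationship-driven query machines handle well.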

"User behavior analysis is a big one that went from traditional analytics to more machine learning-driven," he says.

Machines can view employee behaviors across multiple points in the environment and automate access privileges, something "most enterprises are terrible at," he adds. Business managers often "rubber-stamp" the process of employee access and rarely apply the same level of scrutiny when deciding who should be able to access what.

In the near term, most security teams will use machine learning for detection and response, though there are protective applications as well, says Safran. In the future there will be applications on the strategic and architectural levels, but we're not there yet.

"For now I see most of the activity is going to be operationally focused and tactical in nature: detecting malware, phishing attacks, detecting unusual behavior that could be indicative of insider threats," she explains. When a system detects a threat that needs to be investigated, a machine can help by providing next steps for the response process.
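The "unusual behavior" detection Safran mentions often starts with something as simple as comparing a user's activity against their own baseline. A hedged sketch, with a made-up threshold and made-up file-access counts; production user-behavior analytics model far richer signals than a single metric.

```python
# Flag a day's activity when it deviates sharply from the user's baseline.
import statistics

def is_anomalous(history, today, threshold=3.0):
    """True if `today` lies more than `threshold` standard deviations
    from the historical mean of this user's activity."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:                       # no variation observed yet
        return today != mean
    return abs(today - mean) / stdev > threshold

# Hypothetical daily file-access counts for one employee
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(baseline, 15))    # typical day -> False
print(is_anomalous(baseline, 400))   # mass download -> True
```

A real system would learn the threshold and combine many such signals, but the shape of the problem is the same: model normal, then surface the outliers for an analyst to investigate.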

Machines Won't Replace (Most of) Your Colleagues

There are several misconceptions about artificial intelligence and machine learning, says Oltsik. One of them is the idea that machines will eventually be a substitute for humans.

"Across the board, at this point we're very, very far from a situation where machines are going to do all the work and the security team can go home," Safran says. "All of the machine learning apps for the next few years will focus on enhancing the work of the security team and making their operations more efficient and effective."

However, machines can do the same work as tier-one analysts, freeing up limited security talent to focus on more advanced work, she points out. Most tier-one tasks involve information gathering and technical duties. These are decisions that can be calculated. Security teams can leverage machine learning to automate "busywork" and train up their employees.

How to Shop Securely

"You need to look at the threat scenarios your business cares about," says LaSalle. Test the outcomes of the system and compare where you are today with what you're trying to achieve.

Oltsik also points to performance as something to keep in mind. If an artificial intelligence tool collects data on-site and puts it in the cloud for processing, it will cause latency. What kind of impact will that have on your organization?

Data reporting is another factor to consider, he continues. "All this data interpretation and analysis is only as useful as it comes back and provides information to a human. In the history of technology there have been good reports and bad reports, good visualization and bad."

Safran recommends asking the vendor about the training data they use. If there is no training data, chances are the tool doesn't actually have machine learning capabilities. If there is training data, you need to know whether it's specific to your business or drawn from the vendor's whole customer base.

You also need to understand how the model works, as well as the feedback loop informing it.

"It's a challenging question for many organizations, but having insight into how it works under the hood gives a better perspective on what it's capable of doing and how it could be a benefit," she explains.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ... View Full Bio
