
AI and Machine Learning: Breaking Down Buzzwords

Security experts break down two of today's trendiest technologies to explain what they mean and where you need them.

Artificial Intelligence and machine learning are marketed as security game-changers. You see them all the time on systems and services, promising to catch threats and strengthen defense.

"One of the problems in our industry is people tend to throw around buzzwords in the hopes of differentiating their marketing," says Jon Oltsik, senior principal analyst at ESG. "But all it does is confuse the market."

The two technologies have legitimate purpose and potential but are used so often and so interchangeably, it can be hard to make sense of them. What's the difference between them? Where should they be used? And how should you shop for them to get real value?

AI vs. ML: What They Really Mean

Machine learning is a subset of artificial intelligence, explains Roselle Safran, president at Rosint Labs. Artificial intelligence, a broader concept that has been around for decades, describes machines that think like people. One of its many applications is machine learning, in which a machine examines large amounts of data and learns from that data what something is.

"There are products that have some pretty impressive capabilities but they're not necessarily capabilities where the product is learning," she explains. "It's often useful from a marketing perspective to slap the label on because then it checks the buzzwords off."

Machine learning has two components, Safran continues. One is a massive volume of training data; the other is a "feedback loop" that informs decisions. Depending on the product, a machine learning system will look at volumes of data to determine whether its decisions are correct.
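The two components Safran describes can be sketched in a few lines of Python. This is a toy illustration, not any product's algorithm: the feature names and the simple perceptron-style weight update are assumptions made purely for the example.

```python
# Minimal sketch of the two components above: labeled training data,
# and a feedback loop that corrects the model when an analyst confirms
# or overturns one of its calls. The "model" is a toy weight table.

from collections import defaultdict

class FeedbackClassifier:
    """Scores events as malicious/benign and learns from analyst feedback."""

    def __init__(self):
        self.weights = defaultdict(float)  # feature -> learned weight

    def score(self, features):
        return sum(self.weights[f] for f in features)

    def predict(self, features):
        return "malicious" if self.score(features) > 0 else "benign"

    def train(self, labeled_events):
        # Component 1: a volume of labeled training data.
        for features, label in labeled_events:
            self.feedback(features, label)

    def feedback(self, features, true_label):
        # Component 2: the feedback loop. On a wrong decision, nudge
        # the weights toward the analyst's verdict.
        if self.predict(features) != true_label:
            delta = 1.0 if true_label == "malicious" else -1.0
            for f in features:
                self.weights[f] += delta

# Hypothetical labeled events; real training sets are vastly larger.
training = [
    (["macro_doc", "external_sender"], "malicious"),
    (["internal_sender", "pdf"], "benign"),
    (["macro_doc", "night_login"], "malicious"),
    (["internal_sender", "xlsx"], "benign"),
]

clf = FeedbackClassifier()
clf.train(training)
print(clf.predict(["macro_doc", "external_sender"]))  # prints "malicious"
```

The point of the sketch is the loop, not the math: without labeled data and a correction mechanism, a product has analytics, not machine learning.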

"Large organizations are starting to experiment with artificial intelligence and machine learning," says Oltsik. "They don't have a deep understanding of the concepts and models, but nor do they want to," he says. "What they care about is how effective it is, and does it improve their existing technology and processes."

However, he continues, security leaders should know enough to determine where these technologies can be applied and how to choose one system over another.

Where They Fit in Your Security Strategy

"There are a couple of unique problem sets in security that are right for machine learning, and right for different kinds of solutions," explains Ryan LaSalle, global managing director for Growth & Strategy at Accenture Security. He describes security as a "graph problem" because it's a way of storing lots of data and everything is relationship-driven.

People have a hard time visualizing data as a graph, he continues, but machines excel because they thrive on large volumes of data. The key is to pick scenarios where the machine has an advantage over people; for example, observing human behaviors and detecting anomalies.
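LaSalle's graph framing can be illustrated with a small sketch: store who-talks-to-whom relationships from flow logs as a graph, and let the machine flag hosts whose connection fan-out deviates sharply from the fleet baseline. The host names, data, and threshold below are all invented for illustration.

```python
# Hypothetical "security as a graph problem" sketch: nodes are hosts,
# edges are observed connections, and an unusually high out-degree
# (lots of distinct peers) is flagged as a possible anomaly.

from collections import defaultdict
from statistics import mean, pstdev

connections = [  # (source host, destination host) pairs from flow logs
    ("ws-01", "fileserver"), ("ws-02", "fileserver"),
    ("ws-03", "fileserver"), ("ws-01", "mail"),
    ("ws-02", "mail"), ("ws-03", "mail"),
    # ws-09 suddenly touches many peers: a lateral-movement-like fan-out
    ("ws-09", "ws-01"), ("ws-09", "ws-02"), ("ws-09", "ws-03"),
    ("ws-09", "fileserver"), ("ws-09", "mail"), ("ws-09", "dc-01"),
]

graph = defaultdict(set)
for src, dst in connections:
    graph[src].add(dst)

degrees = {host: len(peers) for host, peers in graph.items()}
mu, sigma = mean(degrees.values()), pstdev(degrees.values())

# Flag hosts whose out-degree sits far above the baseline (z-score > 1.5).
anomalies = [h for h, d in degrees.items() if sigma and (d - mu) / sigma > 1.5]
print(anomalies)  # prints ['ws-09']
```

A human staring at the raw flow log would struggle to spot ws-09; on the graph view the outlier falls out of a one-line computation.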

"User behavior analysis is a big one that went from traditional analytics to more machine learning-driven," he says.

Machines can view employee behaviors across multiple points in the environment and automate access privileges, something "most enterprises are terrible at," he adds. Business managers often "rubber-stamp" employee access requests and don't apply the same level of scrutiny to who should be able to access what.
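As a hedged sketch of that access-review idea: instead of managers rubber-stamping entitlements, a machine could compare what each employee is granted against what it observes them actually using, and surface never-exercised privileges for revocation. The users, resources, and events below are hypothetical.

```python
# Toy access review: granted entitlements vs. observed usage.
# Privileges that are granted but never exercised over the review
# window become candidates for removal.

granted = {
    "alice": {"crm", "payroll", "wiki"},
    "bob":   {"crm", "wiki"},
}

# Access events observed across the environment during the window.
observed = [
    ("alice", "crm"), ("alice", "wiki"),
    ("bob", "crm"), ("bob", "wiki"),
]

used = {}
for user, resource in observed:
    used.setdefault(user, set()).add(resource)

stale = {u: sorted(granted[u] - used.get(u, set())) for u in granted}
print(stale)  # prints {'alice': ['payroll'], 'bob': []}
```

This is the level of detail a rubber-stamped quarterly review misses and a machine applies effortlessly, at scale, across every account.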

In the near term, most security teams will use machine learning for detection and response, though there are protective applications as well, says Safran. In the future there will be applications on the strategic and architectural levels, but we're not there yet.

"For now I see most of the activity is going to be operationally focused and tactical in nature: detecting malware, phishing attacks, detecting unusual behavior that could be indicative of insider threats," she explains. When a system detects a threat that needs to be investigated, a machine can help by providing next steps for the response process.

Machines Won't Replace (Most of) Your Colleagues

There are several misconceptions about artificial intelligence and machine learning, says Oltsik. One of them is the idea that machines will eventually be a substitute for humans.

"Across the board, at this point we're very, very far from a situation where machines are going to do all the work and the security team can go home," Safran says. "All of the machine learning apps for the next few years will focus on enhancing the work of the security team and making their operations more efficient and effective."

However, machines can do the same work as tier-one analysts, freeing up limited security talent to focus on more advanced work, she points out. Most tier-one tasks involve information gathering and technical duties; these are decisions that can be calculated. Security teams can leverage machine learning to automate the "busywork" and train up their employees.
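The tier-one automation described above might look, in miniature, like a rules-driven triage step: gather context on each alert and make the calculable decision, escalating only what needs a human. The alert fields, feeds, and thresholds here are assumptions for illustration, not any product's API.

```python
# Toy tier-one triage: route each alert to auto-close, escalation,
# or an enrichment queue based on calculable criteria, so analysts
# only see the alerts that genuinely need human judgment.

KNOWN_BAD = {"203.0.113.7"}       # e.g., from a threat-intel feed
ALLOWLIST = {"198.51.100.10"}     # e.g., the company's own scanner

def triage(alert):
    ip = alert["src_ip"]
    if ip in ALLOWLIST:
        return "auto-close"        # routine noise, no analyst needed
    if ip in KNOWN_BAD or alert["severity"] >= 8:
        return "escalate"          # hand to a human, context attached
    return "enrich-and-queue"      # gather more data, hold for review

alerts = [
    {"src_ip": "198.51.100.10", "severity": 3},
    {"src_ip": "203.0.113.7",  "severity": 5},
    {"src_ip": "192.0.2.44",   "severity": 4},
]
print([triage(a) for a in alerts])
# prints ['auto-close', 'escalate', 'enrich-and-queue']
```

A machine learning system would replace the hard-coded rules with learned scoring, but the division of labor is the same: the machine handles the calculable decisions, the analyst handles the rest.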

How to Shop Securely

"You need to look at the threat scenarios your business cares about," says LaSalle. Test the outcomes of the system and compare where you are today with what you're trying to achieve.

Oltsik also points to performance as something to keep in mind. If an artificial intelligence tool collects data on-site and puts it in the cloud for processing, it will cause latency. What kind of impact will that have on your organization?

Data reporting is another factor to consider, he continues. "All this data interpretation and analysis is only as useful as it comes back and provides information to a human. In the history of technology there have been good reports and bad reports, good visualization and bad."

Safran recommends asking the vendor about the training data they use. If there is no training data, chances are the tool doesn't actually have machine learning capabilities. If there is training data, you need to know whether it's specific to your business or drawn from the vendor's whole customer base.

You also need to understand how the model works, as well as the feedback loop informing it.

"It's a challenging question for many organizations, but having insight into how it works under the hood gives a better perspective on what it's capable of doing and how it could be a benefit," she explains.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance & Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.
