Operations

Commentary | 1/31/2018 | Anup Ghosh
5 Questions to Ask about Machine Learning

Marketing hyperbole often exceeds reality. Here are questions you should ask before buying.

How tired are we of "artificial intelligence" and "machine learning" being sprinkled like pixie dust on every product being hawked by vendors? The challenge for cybersecurity professionals is to see through the fog and figure out what's real and what's just marketing hyperbole.

Often, marketing hyperbole exceeds the reality. Notoriously, Tesla's Autopilot sensors can be fooled under certain edge conditions, the iPhone X's Face ID has been unlocked by a doppelganger, and Apple's Siri isn't very good at taking direction. Even the winning team in DARPA's all-machine Cyber Grand Challenge lost spectacularly to human hackers in the DEF CON Capture the Flag contest that followed its win.

Machine learning is built on statistics and iterative algorithms, which makes the underlying concepts difficult for many to evaluate. So how can buyers and practitioners distinguish "real" machine learning technology from marketing spin and, just as importantly, what is effective from what is not?

The five questions below go to the heart of how well a particular machine learning approach performs in detecting attacks, regardless of which particular algorithm it uses.

1. That detection rate you quote in your marketing materials is impressive, but what's the corresponding false-positive rate?

The false-positive rate is the flip side of the detection rate; the two go hand in hand. A system can be tuned to trade one against the other to reach acceptable levels. The receiver operating characteristic (ROC) curve shows the relation between true detections and false positives: pick a false-positive rate on the curve and you'll see the corresponding true detection rate of the algorithm. If a vendor can't or won't show you a ROC curve for its system, you can bet it hasn't done proper machine learning research, or the results are not something it would brag about.
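To make the trade-off concrete, here is a minimal sketch of how ROC points are computed from a detector's scores. The scores below are invented for illustration; a real evaluation would use a large, labeled, held-out test set.

```python
# Hypothetical illustration of computing ROC points for a score-based
# detector. Each threshold yields one (false-positive rate, true-positive
# rate) point on the curve.

def roc_points(pos_scores, neg_scores):
    """Return (fpr, tpr) pairs for every distinct decision threshold."""
    thresholds = sorted(set(pos_scores) | set(neg_scores), reverse=True)
    points = []
    for t in thresholds:
        tpr = sum(s >= t for s in pos_scores) / len(pos_scores)  # detections
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)  # false alarms
        points.append((fpr, tpr))
    return points

# Made-up detector scores: higher means "more likely malicious".
malicious = [0.9, 0.8, 0.7, 0.4]
benign = [0.6, 0.3, 0.2, 0.1]

for fpr, tpr in roc_points(malicious, benign):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Sweeping the threshold down the score range traces the whole curve: a permissive threshold catches every attack but also flags benign traffic, a strict one does the reverse. A vendor quoting only a detection rate has simply picked one point on this curve without telling you which.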

2. How often does your model need updating, and how much does your model's accuracy drop off between updates?

Just as important as detection and false-positive rates is the ability of the model to age well. Machine learning models degrade over time as the data they were trained on becomes obsolete. How well a model generalizes beyond its training data can be measured by its decay rate: the rate at which its performance declines as that data ages. A good machine learning model ages slowly, which in practice means it doesn't need to be replaced often — once every few months rather than every few days. For comparison, traditional signature-based systems need updating daily. The decay rate is heavily influenced by the training data: a diverse training set leads to a stable model, while a narrow training set ages out very fast.
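One simple way to quantify decay is to re-test the model on fresh data each month and fit a line to the results; the slope is the decay rate. The sketch below does this with a least-squares fit, and the monthly accuracy figures are invented purely for illustration.

```python
# A sketch of measuring model decay: fit a line to monthly accuracy
# measurements and read the slope as the decay rate per month.

def decay_rate(accuracies):
    """Least-squares slope of accuracy vs. months since training."""
    n = len(accuracies)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(accuracies) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, accuracies))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical monthly detection rates for two models after deployment.
stable_model = [0.97, 0.96, 0.96, 0.95, 0.95, 0.94]  # ages slowly
narrow_model = [0.97, 0.90, 0.84, 0.77, 0.70, 0.63]  # ages fast

print(f"stable model decay: {decay_rate(stable_model):+.4f}/month")
print(f"narrow model decay: {decay_rate(narrow_model):+.4f}/month")
```

A model losing half a point of accuracy per month can go months between updates; one losing seven points per month is effectively a signature database with extra steps.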

3. Does your machine learning algorithm make decisions in real time?

Depending on your application, you can use machine learning for retrospective forensic analysis or for inline blocking — that is, blocking attacks as they occur in real time. If used for inline blocking, the approach needs to operate in real time, typically measured in milliseconds. In general, this rules out online lookups because of round-trip times to the cloud. Real-time performance requires a compact model able to run on-premises in the device's memory. Asking about the real-time performance of the model is one way of figuring out whether it is compact enough to block attacks as they happen.
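A quick way to sanity-check a latency claim is to time the scoring path itself against the blocking budget. In this sketch, `score()` stands in for a vendor's compact in-memory model — the weights, feature vector, and 1 ms budget are all invented for illustration.

```python
import time

# Sketch: verify an in-memory model meets a real-time blocking budget.
WEIGHTS = [0.31, -0.12, 0.88, 0.05, -0.44]  # hypothetical linear model

def score(features):
    """Dot product of a feature vector with the in-memory model weights."""
    return sum(w * f for w, f in zip(WEIGHTS, features))

sample = [1.0, 0.2, 0.7, 0.0, 0.5]
runs = 10_000

start = time.perf_counter()
for _ in range(runs):
    score(sample)
elapsed_ms = (time.perf_counter() - start) * 1000 / runs

budget_ms = 1.0  # example inline-blocking budget
print(f"avg latency: {elapsed_ms:.4f} ms (budget {budget_ms} ms)")
print("OK for inline blocking" if elapsed_ms < budget_ms else "too slow")
```

Note what a cloud round trip does to this arithmetic: even a fast 30 ms network hop blows a millisecond-scale budget thirty times over, which is why inline blocking pushes the model on-premises.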

4. What is your training set?

The most overlooked attribute in machine learning is the training set. The performance of a machine learning algorithm depends on the quality of the data it was trained on. Good, curated training sets — robust to change, reflective of real-world conditions, and diverse — are hard to acquire, but they are essential for effective performance. If the training data is not representative of the threats you will face, performance on your network will suffer regardless of how the model was tested. Models tested on narrow data sets will show misleading results.
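Diversity is one training-set property you can actually put a number on. A rough sketch: compute the normalized Shannon entropy of the sample labels — a set concentrated on one malware family scores near 0, an evenly spread one near 1. The family labels below are invented for illustration.

```python
import math
from collections import Counter

# A rough diversity check on a training set: normalized Shannon entropy
# of the sample labels, in [0, 1]. Higher means more evenly spread.

def normalized_entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

diverse = ["ransomware", "trojan", "worm", "adware", "rootkit", "trojan"]
narrow = ["trojan"] * 9 + ["worm"]

print(f"diverse set: {normalized_entropy(diverse):.2f}")
print(f"narrow set:  {normalized_entropy(narrow):.2f}")
```

Entropy is only a proxy — a set can be evenly spread across families yet still be stale or unrepresentative of your environment — but it's a concrete question to put to a vendor about how its training data is composed.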

5. How well does your machine learning system scale?

The good and bad news for machine learning in security is that there is a massive amount of data on which to train. Machine learning algorithms typically require those massive amounts of data to properly learn the phenomena they are trying to detect. That's the good news. The bad news is that the models must scale to Internet-sized data sets that change continuously. Understanding how much data an algorithm is trained on gives an indication of its scalability, and understanding the footprint of the model gives an indication of its ability to compactly represent Internet-scale data.
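One generic technique that lets a model's footprint stay fixed no matter how much data it has seen is the hashing trick: an unbounded feature vocabulary is folded into a fixed-size vector. This is a minimal sketch of the idea, not any particular vendor's approach.

```python
# The hashing trick: map an unbounded token vocabulary into a fixed-size
# count vector, so the model's memory footprint is independent of how
# much training data it has seen.

HASH_DIM = 16  # model stays at HASH_DIM weights forever

def hashed_features(tokens, dim=HASH_DIM):
    """Fold arbitrarily many tokens into a fixed-length count vector."""
    vec = [0] * dim
    for tok in tokens:
        vec[hash(tok) % dim] += 1  # collisions are accepted by design
    return vec

small = hashed_features(["connect", "10.0.0.1", "GET"])
large = hashed_features([f"token-{i}" for i in range(100_000)])

print(len(small), len(large))  # same length: footprint doesn't grow
```

Whether a vendor uses hashing, sampling, or something else, the question stands: if the model's size grows linearly with the training data, it will not survive contact with Internet-scale feeds.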

As you can see, for a machine learning approach to be successful, it must do the following:

  • Have high detection rates and low false positives on known and unknown attacks, with a published ROC curve.
  • Be trained on a robust training set that is representative of real-world threats.
  • Continue to deliver high performance for months after each update.
  • Provide real-time performance (threat blocking) without consuming large amounts of system resources such as memory and disk.
  • Scale reliably, without using more memory or losing performance, even as the training set increases.

Next time you talk to a company that claims to use machine learning in its products, be sure to get answers to these questions.

Anup Ghosh is Chief Strategist, Next-Gen Endpoint, at Sophos. Ghosh was previously Founder and CEO at Invincea until Invincea was acquired by Sophos in March 2017. Prior to founding Invincea, he was a Program Manager at the Defense Advanced Research Projects Agency (DARPA).