
Application Security

11/15/2017
07:30 AM

Microsoft Uses Neural Networks to Make Fuzz Tests Smarter

Neural fuzzing can help uncover bugs in software better than traditional tools, company says.

Microsoft has developed a new technique to test software for security flaws that uses deep neural networks and machine learning techniques to improve upon current testing approaches.

Early experiments with the new "neural fuzzing" method show that it can help organizations uncover bugs in software better than traditional fuzzing tests, according to Microsoft.

Fuzzing is an approach in which security testers try to uncover commonly exploitable flaws in applications by feeding the software deliberately malformed data, or inputs, to see whether it crashes, so they can investigate and fix the cause.
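
At its simplest, that loop can be sketched in a few lines of Python. The fragment below is purely illustrative and is not anything Microsoft tested; the ./parse_png binary and file names are placeholders. It randomly corrupts a seed file, runs the target on it, and saves any input that makes the target crash.

```python
# Minimal sketch of a blind fuzzer: mutate a seed file at random offsets
# and watch the target for crashes. Illustrative only; "./parse_png" and
# the file paths are hypothetical placeholders.
import random
import subprocess
import tempfile

def mutate(data: bytes, n_flips: int = 16) -> bytes:
    """Overwrite a handful of random bytes in the seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def run_target(payload: bytes) -> int:
    """Feed one malformed input to the target and return its exit code."""
    with tempfile.NamedTemporaryFile(suffix=".png") as f:
        f.write(payload)
        f.flush()
        proc = subprocess.run(["./parse_png", f.name], capture_output=True)
        return proc.returncode

seed = open("seed.png", "rb").read()
for i in range(1000):
    candidate = mutate(seed)
    if run_target(candidate) < 0:  # negative exit code: killed by a signal, i.e. a crash
        open(f"crash_{i}.png", "wb").write(candidate)
```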

The effectiveness of these tests depends to a large extent on how well the malformed inputs are crafted. Generally, the more inputs you find that trigger crashes, the more application vulnerabilities you are likely to be able to identify and close.

Security testers can use a few different methods to generate the malicious or malformed input.

Microsoft's approach improves on one method that uses data from previous fuzz tests as feedback for creating new mutations or tests. The approach involves a "learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations," according to Microsoft.

"The neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information," the company said.

Neural networks are computing systems loosely modeled on the human brain that can autonomously learn from observed data. Many modern applications, such as face and voice recognition and weather prediction, use such networks.

Microsoft has implemented the new technique in American Fuzzy Lop (AFL), an open-source fuzzer that uses observed behavior from previous fuzzing executions to guide future tests. Researchers from the company tested the method on four target programs using parsers for the ELF, XML, PNG and PDF file formats.
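
For reference, stock AFL's workflow on such a target looks roughly like the following. The binary and directory names are placeholders, and the Python wrapper is only a convenience around AFL's command-line tools.

```python
# Hedged sketch of driving stock AFL against a file-format parser similar to
# the targets the researchers used (here an XML parser). The binary name and
# directories are placeholders.
import subprocess

# 1. Build the target with AFL's instrumenting compiler (afl-clang-fast ships with AFL).
subprocess.run(["afl-clang-fast", "-o", "xml_parser", "xml_parser.c"], check=True)

# 2. Fuzz it: -i points at a seed corpus, -o at the findings directory, and
#    @@ marks where the mutated input file name is substituted on the
#    target's command line.
subprocess.run([
    "afl-fuzz",
    "-i", "seeds/xml",
    "-o", "findings/xml",
    "--",
    "./xml_parser", "@@",
])
```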

The results were "very encouraging," Microsoft Development Lead William Blum said in a blog post Monday. "We saw significant improvements over traditional AFL in terms of code coverage, unique code paths and crashes for the four input formats," he said.

For instance, for the ELF parser, the neural AFL reported more than 20 crashes compared to zero with the traditional AFL fuzzer. Similarly, the neural AFL found 38% more crashes than traditional AFL for text-based file formats such as XML.

"This is astonishing given that neural AFL was trained on AFL itself," Blum said. The only area where the neural AFL did not perform as well was with PDF format, likely because of the large size of the files.

"AFL is essentially a mutational fuzzer that uses feedback from the target software to determine how the test cases are mutated," says Jonathan Knudsen, Security Strategist at Synopsys.  "Microsoft has modified the feedback mechanism with a neural network."

The challenge with any fuzzing lies in finding the finite set of malformed inputs that are most likely to trigger bugs, Knudsen says. "Microsoft has modified how AFL chooses its inputs in a way that gave them better code coverage and more bugs for a couple of applications."

Moreno Carullo, co-founder and chief technical officer of Nozomi Networks, says Microsoft has broken new ground with this technique. "It is absolutely a new [and] innovative approach," Carullo says.

"Neural fuzzing increases the speed of fuzzing as a test method and also finds more issues that cause a crash," he notes. The automation behind the approach can drastically reduce the time organizations require to test their software for security vulnerabilities. "Without this, it would take a team of heuristics experts a lot more time to discover the issues."

According to Blum, the capability that Microsoft has demonstrated only scratches the surface of what can be achieved using neural networks in fuzzing. "Right now, our model only learns fuzzing locations, but we could also use it to learn other fuzzing parameters such as the type of mutation or strategy to apply," he said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Out-of-bounds accesses in the functions pi_next_lrcp, pi_next_rlcp, pi_next_rpcl, pi_next_pcrl, pi_next_rpcl, and pi_next_cprl in openmj2/pi.c in OpenJPEG through 2.3.0 allow remote attackers to cause a denial of service (application crash).