Fuzzing Services Help Push Technology into DevOps Pipeline
As part of a continuous testing approach, fuzzing has evolved to provide in-depth code checks for unknown vulnerabilities before deployment.
August 19, 2020
As companies have shifted security left, putting more security checks into the development pipeline, fuzz testing, or "fuzzing," has largely remained outside the main software development lifecycle.
This year, that seems to have changed. DevOps lifecycle firm GitLab announced in June that the company had acquired two organizations, Peach Tech and Fuzzit, to bolster its own platform with continuous fuzz testing and protocol fuzzing. Last week, Internet infrastructure firm Cloudflare announced it would use automated cybersecurity startup ForAllSecure's Mayhem fuzzer as part of its software development lifecycle.
Done right, fuzzing fits well into the DevOps model, also known as the continuous integration/continuous deployment (CI/CD) model, says Jeff Whalen, vice president of product at ForAllSecure.
"Fuzzing by its very nature is this idea of automated continuous testing," he says. "There is not a lot of human input that is necessary to gain the benefits of fuzz testing in your environment. It's a good fit from the idea of automation and continuous testing, along with this idea of continuous development."
Many companies are aiming to create agile software development processes, such as DevOps. Because this change often takes many iterative cycles, advanced testing methods are not usually given high priority. Fuzz testing, the automated process of submitting randomized or specially crafted inputs to an application, is one of these more complex techniques. Even within the pantheon of security technologies, fuzzing is often among the last adopted.
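To make that definition concrete, the sketch below shows roughly what a coverage-guided fuzz target looks like in C++ using LLVM's libFuzzer-style entry point; the parse_header routine is invented here purely as a stand-in for whatever input-handling code a team actually wants to exercise.

```cpp
// Build sketch (clang): clang++ -g -O1 -fsanitize=fuzzer,address fuzz_target.cc
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical function under test -- a stand-in for any routine that
// parses untrusted input in the real application.
static bool parse_header(const std::string &input) {
  return input.rfind("HDR:", 0) == 0 && input.size() > 4;
}

// The fuzzing engine calls this entry point over and over with randomized
// and mutated byte buffers; crashes, hangs, and sanitizer reports raised
// inside the code under test are recorded as findings.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_header(std::string(reinterpret_cast<const char *>(data), size));
  return 0;  // libFuzzer reserves the return value; always return 0
}
```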
Yet, 2020 may be the year that changes. Major providers and even frameworks have focused on making fuzzing easier, says David Haynes, a product security engineer at Cloudflare.
"I think we are just getting started in terms of seeing fuzzing becoming a bit more mainstream, because the biggest factor hindering (its adoption) was available tooling," he says. "People accept that integration testing is needed, unit testing is needed, end-to-end testing is needed, and now, that fuzz testing is needed."
Adding more testing to DevOps is not always easy, especially when that testing could block development. While CI/CD cycles are a high priority for many companies, software development teams continue to have trouble attaining a high maturity level. In 2019, 43% of software development professionals rated their teams as either high or elite performers in DevOps, down from 48% in 2018, according to the DevOps Research and Assessment (DORA) group at Google. High performers deploy code for an application at least once per week, require less than a week of lead time for changes, and can restore services within a day following an incident.
"Continuous delivery is not easy to do," says Christopher Condo, a principal analyst in the application development and delivery group at Forrester Research. "Continuous delivery requires automation at every level, but not just automation of software delivery, function testing, and infrastructure and release management, but also automation of compliance and automation of governance. ... Companies that are very risk-averse and want to have a lot of levels of tests, that makes it even harder."
Adding fuzzing is a challenge for companies already struggling to refine their DevOps implementations. Basic fuzzing technology, in which the program finds all of an application's inputs and then sends random data to those inputs, produces a very large, often effectively infinite, set of possible test cases. A key part of fuzzing is therefore limiting the number of inputs, or constraining how those inputs are constructed, which reduces the complexity of the problem, says ForAllSecure's Whalen.
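One common way to rein in that input space is to construct structured, typed values from the raw fuzz bytes rather than spraying random data at every field. The hedged sketch below uses the FuzzedDataProvider helper that ships with LLVM's fuzzer runtime; the handle_request function and its parameters are hypothetical stand-ins for real application code.

```cpp
// Build sketch (clang): clang++ -g -O1 -fsanitize=fuzzer,address structured_fuzz.cc
#include <cstddef>
#include <cstdint>
#include <string>
#include <fuzzer/FuzzedDataProvider.h>

// Hypothetical request handler -- a stand-in for the code under test.
static void handle_request(uint16_t port, bool keep_alive,
                           const std::string &path, const std::string &body) {
  (void)port; (void)keep_alive; (void)path; (void)body;
}

// Carve the fuzzer's byte buffer into a few bounded, typed values instead
// of feeding raw bytes everywhere. This shrinks the search space toward
// inputs the application could plausibly receive.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  FuzzedDataProvider fdp(data, size);
  uint16_t port = fdp.ConsumeIntegralInRange<uint16_t>(1, 65535);
  bool keep_alive = fdp.ConsumeBool();
  std::string path = fdp.ConsumeRandomLengthString(256);
  std::string body = fdp.ConsumeRemainingBytesAsString();
  handle_request(port, keep_alive, path, body);
  return 0;
}
```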
"There is this huge space of negative testing. What happens when an applications sees things that it is not supposed to see, or it is not expected to see?" he says. He stresses the necessity of getting it right, however. "When you move into cloud-native architecture, you are not necessarily going to have a lot of control over what [sorts of attacks] will be coming at the application. And that is where fuzzing does really, really well."
Google agrees. Last year, the company released its own fuzzer, ClusterFuzz, as an open source project. Like many fuzzers, the program aims to find memory corruption flaws in software that could be exploited by attackers. Google ran the system on a massive cluster of 25,000 cores and offered it as a free service for open source projects. The company credited it with finding more than 16,000 bugs in the Chrome browser and more than 11,000 bugs in a variety of open source programs.
"For software projects written in an unsafe language such as C or C++, fuzzing is a crucial part of ensuring their security and stability," Google stated in its blog post. "In order for fuzzing to be truly effective, it must be continuous, done at scale, and integrated into the development process of a software project."
To make incorporating fuzz testing into DevOps pipelines easier, companies should use smaller test sets. Software security firm Synopsys, for example, points out that companies could run 9 million test cases under the full settings of its fuzzing platform, Defensics, or 700,000 under its default settings. Tailoring the test sample down to thousands of cases means a fuzz test can run within the DevOps cycle and still alert developers to any issues.
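Engine-specific knobs can impose that kind of budget directly (libFuzzer, for instance, accepts -runs= and -max_total_time= limits), but the idea can be sketched without any fuzzing engine at all. The hedged example below runs a fixed budget of a few thousand random inputs through a hypothetical parse_header routine as an ordinary, fast test step; it is an in-pipeline sanity pass, not a replacement for a long coverage-guided campaign.

```cpp
// Bounded "smoke fuzz": a fixed budget of random inputs small enough to
// run on every pipeline execution.
#include <cstdint>
#include <iostream>
#include <random>
#include <string>

// Hypothetical function under test.
static bool parse_header(const std::string &input) {
  return input.rfind("HDR:", 0) == 0 && input.size() > 4;
}

int main() {
  std::mt19937 rng(12345);  // fixed seed keeps CI runs reproducible
  std::uniform_int_distribution<int> byte_dist(0, 255);
  std::uniform_int_distribution<int> len_dist(0, 64);

  const int kBudget = 5000;  // thousands of cases, seconds of wall time
  for (int i = 0; i < kBudget; ++i) {
    std::string input;
    const int len = len_dist(rng);
    for (int j = 0; j < len; ++j)
      input.push_back(static_cast<char>(byte_dist(rng)));
    parse_header(input);  // a crash or sanitizer report fails the job
  }
  std::cout << "smoke fuzz: " << kBudget << " cases, no crashes\n";
  return 0;
}
```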
In the end, fuzzing remains complex.
ForAllSecure's automated fuzzing machine, Mayhem, for example, came out of a research project at Carnegie Mellon University and made a name for itself in 2016 when it won the Cyber Grand Challenge event, an all-machine hacking tournament. However, a subsequent invitation to the annual DEF CON Capture the Flag tournament demonstrated that even the top system in AI fuzzing still has a long road ahead in the cybersecurity world, when the system placed dead last in the rankings out of 15 teams.
Arguably, such AI systems, which have toppled top human players in chess, Jeopardy, poker, and Go, will eventually solve this problem as well. Both defensive and offensive systems benefit from automation, says Alex Rebert, co-founder of ForAllSecure.
"We're already seeing an increasing reliance on automation tools, both in the defensive and offensive side," he says. "On the offensive side, everyone I know relies heavily on tools like fuzzing to find exploitable bugs."
As developers' tools and services incorporate more easy-to-use fuzzing, software security will benefit. Building expertise into those services will help developers know when to use fuzzing and when to rely on other technologies, says David DeSanto, director of Secure and Defend at GitLab.
"In my career, I've broken a lot of people's stuff, and a lot of the times it is because I led with fuzz testing first," he says. "Fuzzing used to be a dark art that only the security researchers and hackers knew how to use. The goal is to make it approachable and easy to understand."