

Google Open Sources AI-Boosted Fuzzing Framework

The fuzzing framework uses AI to boost code coverage and speed up vulnerability discovery.

Dark Reading Staff

February 5, 2024

1 Min Read
[Image: concept art with fuzzy edges. Source: dubassy via Alamy]

Google has released its AI-powered fuzzing framework as an open source resource to help developers and researchers improve how they find software vulnerabilities. The framework, which automates the manual aspects of fuzz testing, uses large language models (LLMs) to write project-specific fuzz targets that boost code coverage. The open source tool supports Vertex AI code-bison, Vertex AI code-bison-32k, Gemini Pro, OpenAI GPT-3.5 Turbo, and OpenAI GPT-4.
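To make the idea concrete, the sketch below shows what prompting an LLM for a project-specific fuzz target might look like. This is a hypothetical illustration, not the framework's actual API: the function name build_prompt and the argument names are assumptions, though the LLVMFuzzerTestOneInput entry point is the real LibFuzzer convention such targets follow.

```python
# Hypothetical sketch: assemble an LLM prompt asking for a LibFuzzer-style
# fuzz target for one function in a C project. build_prompt and its
# arguments are illustrative names, not the open source framework's API.

def build_prompt(project: str, signature: str, header: str) -> str:
    """Build a prompt asking the model for a project-specific fuzz target."""
    return (
        f"Write a LibFuzzer fuzz target for the {project} project.\n"
        f"Target function: {signature}\n"
        f'Include the header "{header}" and define\n'
        "int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size).\n"
        "Feed the raw input bytes to the target function and free any "
        "allocations before returning."
    )

prompt = build_prompt(
    project="cJSON",
    signature="cJSON *cJSON_ParseWithLength(const char *value, size_t buffer_length)",
    header="cJSON.h",
)
print(prompt)
```

The model's reply would then be compiled and run like any hand-written harness, which is where the evaluation metrics described next come in.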

The framework evaluates the LLM-generated fuzz targets against up-to-date data from the production environment across four metrics: compilability, runtime crashes, runtime coverage, and runtime line coverage.

"Overall, this framework manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets," Google notes.

Google has already used LLM-aided fuzzing on more than 300 C and C++ projects to expand code coverage and potentially find more vulnerabilities. The technique has also helped discover two vulnerabilities: one in cJSON (a JSON parser written in C) and one in libplist (a library for handling the Apple Property List format in binary or XML).

"Without the completely LLM-generated code, these two vulnerabilities could have remained undiscovered and unfixed indefinitely," according to a post on the Google Security Blog by Google Open Source Security team members Dongge Liu and Oliver Chang and Machine Learning for Security team members Jan Nowakowski and Jan Keller.

Finding vulnerabilities through fuzzing is only half the job; they still have to be fixed. Google is also working on methods to prompt LLMs to generate candidate code fixes, test them, and select the best one to apply.
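The generate-test-select loop described above could be sketched as follows. This is a hedged illustration of the general pattern, not Google's implementation: select_patch and reproduces_crash stand in for real LLM and test infrastructure.

```python
# Hypothetical sketch of an LLM patching loop: given several candidate
# fixes from a model, re-run the crashing test case under each and keep
# the first candidate that makes the crash stop reproducing.

from typing import Callable, Optional

def select_patch(
    candidates: list[str],
    reproduces_crash: Callable[[str], bool],
) -> Optional[str]:
    """Return the first candidate patch under which the crash no longer reproduces."""
    for patch in candidates:
        if not reproduces_crash(patch):
            return patch
    return None  # no model-generated fix held up; a human engineer takes over

# Toy stand-in: pretend only "patch-b" actually eliminates the crash.
chosen = select_patch(["patch-a", "patch-b"], lambda p: p != "patch-b")
print(chosen)  # patch-b
```

In a real pipeline the "test" step would also rebuild the project and re-run the full fuzzing harness, since a patch that merely hides the crash is worse than no patch at all.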

"This AI-powered patching approach resolved 15% of the targeted bugs, leading to significant time savings for engineers," the team wrote.

About the Author(s)

Dark Reading Staff

Dark Reading

Dark Reading is a leading cybersecurity media site.

