
First Wave of Vulnerability-Fixing AIs Available for Developers

GitHub joins a handful of startups and established firms in the market, but all the products are essentially "caveat developer" — let the developer beware.

Source: Wright Studio via Shutterstock

GitHub has joined a growing list of companies offering AI-powered bug-fixing tools for software developers with its new code scanning autofix feature.

Developers who sign up for the beta program as part of GitHub's Advanced Security can scan their code with CodeQL, the company's static-analysis scanner, and fixes will be suggested for the most critical vulnerabilities. The feature will automatically find and fix issues, offering "precise, actionable suggestions" for any pull request, and should reduce developers' time to remediate vulnerabilities, says Justin Hutchings, senior director of product management at GitHub.
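For teams curious what this looks like in practice, autofix suggestions are surfaced on alerts produced by a standard CodeQL code-scanning workflow. The sketch below is a minimal, illustrative GitHub Actions configuration for running CodeQL on pull requests (the branch name and language are placeholder assumptions; the autofix beta itself additionally requires GitHub Advanced Security):

```yaml
# .github/workflows/codeql.yml — minimal CodeQL scan on pull requests
name: "CodeQL"

on:
  pull_request:
    branches: [ main ]   # assumed default branch

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write   # required to upload scan results

    steps:
      - uses: actions/checkout@v4

      # Initialize CodeQL for the languages in this repository
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript   # assumed; set to your codebase's languages

      # Run the analysis and upload alerts to code scanning
      - uses: github/codeql-action/analyze@v3
```

Alerts from this scan appear on the pull request, which is where the autofix feature proposes its suggested remediations.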

"We have optimized the set of queries that we provide to developers by default with code scanning to those alerts that we think are the highest precision and the highest severity," Hutchings says. "So we're only interrupting developers, in those cases, when we think we have very high confidence reasons to believe that this is a problem that they should deal with."

With code scanning autofix, GitHub joins other application-security firms in turning to AI platforms to fix vulnerabilities. Established player Veracode launched its platform, Veracode Fix, in June as a way of helping developers address the massive delay in fixing vulnerabilities — about 75% of vulnerabilities are typically left unfixed for more than a month, the company says.

Startup companies have also taken advantage of the excitement around generative AI and ChatGPT to launch their own bug-fixing services. In August, Mobb's AI-powered solution for triaging vulnerability reports and providing fixes won the Black Hat Startup Spotlight competition. The same month, startup Vicarius announced vuln_GPT, a generative AI service that will find and fix vulnerabilities and misconfigurations using data from a remediation database run by the firm.

The tools aim to fix the vast security debt that developers and application-security professionals face every day, says Michael Assraf, CEO and co-founder of Vicarius.

"Vulnerability remediation is broken, for many reasons — consolidation, personalization, and scalable remediation are definitely some of the top challenges," he says. "We've taken many steps forward, but there's still a long way to go, as organizations can't or don't have the capacity to deploy required changes even when they know they need to."

More Security in the Workflow

Automation through various generative AI capabilities will quickly become part of how developers work, because the techniques make workers more efficient. Developers can turn the work of triaging and fixing vulnerabilities, which can take an average of 5 hours in enterprises, into minutes through the use of AI, says Eitan Worcel, CEO and co-founder at Mobb.

"Automated fixes are coming, whether it's AI or not," he says. "The good part of that is the No. 1 thing that developers should do is increase their testing coverage, and this allows them to do that."

Overall, developers using AI assistants are 15% to 30% more productive in writing and fixing code, according to an initial survey by Forrester Research.

"Certainly, I think the productivity gains are there," says Janet Worthington, a senior security analyst with Forrester. "I think those all help you to your point to save time, but you still need to make sure that you're checking. So there still needs to be a developer in the loop."

Developers should expect to see more AI capabilities integrated into how they work, including embedding security in the IDE, adding AI checks of pull requests, and generally reducing the friction that developers encounter when they triage and fix vulnerabilities, says GitHub's Hutchings.

"We've tried to take kind of a unique approach in terms of bringing security capabilities to developers where they work," he says.

Don't Trust, Certainly Verify

While the promise of AI improving cybersecurity functions is readily apparent, whether current AI systems are up to the task remains to be seen.

On the positive side, researchers presented evidence during last year's Black Hat conference that GPT-3-based models could help incident responders sift through massive amounts of data to find security-specific information, allowing natural-language threat hunting and better classification of websites. And the Defense Advanced Research Projects Agency (DARPA) in August launched a two-year competition aimed at using AI to improve software.

GitHub has certainly seen its efforts take off. In 2022, more than a third of the code (35%) checked in by developers using its service was suggested by the company's AI assistant, Copilot. This year, developers are on track to increase that share to 60%, and the company expects it to grow to 80% in five years, GitHub's Hutchings says.

"Not only are developers completing tasks faster — nearly 90% report [that they do] — but what’s even more powerful is it helps them stay in the flow, focus on more satisfying work, and conserve mental energy," he says.

Yet the tendency of generative AI systems to produce plausible but incorrect output — often referred to as "hallucinations" — remains a danger and could result in bad suggestions for code fixes. A third of developers (32%) have concerns over AI used in development, and more than half of corporate boards (59%) worry about AI's use in their business, according to separate surveys.

AI, Everywhere

There is a sense that AI will eventually become part of every developer's experience — it's not a matter of if, but when. Channeling his inner Marc Andreessen, Vicarius' Assraf says, "AI will eat the world, and more particular and relevant to us, AI will eat the security world."

The founder's vision goes beyond suggesting coding patterns to developers to eliminate vulnerabilities — he wants to enable AI agents that can autonomously fix software.

"The ultimate goal is to build a worm-like crawler that will jump around the infrastructure and remediate threats completely independently, with no human intervention, or minimal validation," Assraf says. "That will increase cyber hygiene in a scalable and efficient way, which doesn’t necessarily require an expensive set of products or strong security personnel."

About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
