Developers need more security safeguards earlier in the software development process, especially as AI becomes more common.

Developers are increasingly adopting security testing as part of the development pipeline, but companies still have room for improvement: only a minority test software during development or before committing code.

While two-thirds of companies have security tools incorporated into the systems that build software, only 40% of firms have deployed security checks into the integrated development environment (IDE), and 48% run checks at the code-commit stage, according to Snyk's 2023 State of Software Supply Chain Security report. The report also found that 40% of companies do not use any supply chain security technologies, such as a static application security testing (SAST) tool or a software composition analysis (SCA) tool.
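Commit-stage checks of the kind the report measures are often wired in with a Git pre-commit hook. The following is a minimal illustrative sketch, not an official recipe; Snyk's CLI and its `--severity-threshold` flag are used here as one example of an SCA scanner, and any equivalent tool could be substituted:

```shell
#!/bin/sh
# .git/hooks/pre-commit (illustrative sketch): fail the commit if the
# dependency scan reports high-severity vulnerabilities.
snyk test --severity-threshold=high || {
    echo "Commit blocked: high-severity vulnerabilities found." >&2
    exit 1
}
```

A hook like this makes the commit itself the enforcement point, so vulnerable dependencies are flagged before they ever enter the shared history.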

Every developer should be conducting at least three types of scans: scanning custom code with SAST, checking open source dependencies with an SCA tool, and analyzing infrastructure files to detect insecure configuration, says Randall Degges, head of developer relations at Snyk.

"If you're paying really close attention and getting on top of your actual software development lifecycle as a developer ... you're going to be better than 98% of places out there that aren't doing those things," he says. "Those are the things that you have direct control over and can instantly make a difference."
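As an illustrative sketch of those three scan types, the Snyk CLI (one of several tools that cover them) exposes a subcommand for each; the commands below assume the CLI is installed and authenticated, and other SAST/SCA/infrastructure scanners offer equivalents:

```shell
# 1. SAST: scan the project's custom code for insecure patterns
snyk code test

# 2. SCA: check open source dependencies for known vulnerabilities
snyk test

# 3. IaC: analyze infrastructure files for insecure configuration
snyk iac test
```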

The good news is that more companies are paying attention to software security, especially after the Log4Shell vulnerability in the widely used Log4j library affected numerous firms. In the 18 months since Log4Shell was disclosed and attackers began exploiting it, the vast majority of companies (94%) have made significant changes to their approach to application security, according to the Snyk report. Nearly two-thirds of companies increased the frequency of scanning, while more than half adopted new tools (59%) or put developers through additional security training (53%), according to survey respondents.

Degges compares the impact of the vulnerability and organizations' scramble to plug the security holes to Edward Snowden's release of US classified documents.

"I can't think of another time in my career where a single thing has driven so much security-focused behavior," he says. "This is the biggest real driver of software security that I've seen in my lifetime."

Pros and Cons of AI Tools

Developers are also using AI assistants to speed their code production and expect to continue to use AI in development in the future, despite misgivings about the security of the code those assistants produce. More than three-quarters of developers (77%) believe that they produce better and more secure code using AI tools, but 59% still have concerns over potential vulnerabilities in that code, the report stated.

The result is that developers will likely build code faster, but those who trust the AI tools too much will likely find their code is less secure, says Degges.

"There's going to be a huge learning curve for developers as they start to adopt this stuff and realize that you really have to know what you're doing," he says. "You have to assume that all generated code is unsafe by default."

ChatGPT, for example, is now used in nearly 1,000 packages in the Python Package Index (PyPI) and npm registries, with 75% of those components being brand new, Endor Labs stated in research released earlier this month.

"The fact that there's been such a rapid expansion of new technologies related to artificial intelligence, and that these capabilities are being integrated into so many other applications, is truly remarkable. But it's equally important to monitor the risks they bring with them," Henrik Plate, lead security researcher at Endor Labs, said in a statement. "These advances can cause considerable harm if the packages selected introduce malware and other risks to the software supply chain."

Open Source Leads in Some Security Metrics

For the first time, developers are also fixing vulnerabilities in open source software faster than in custom components, according to Snyk's report. While the report only discussed the trends in relative terms, the time to fix (TTF) for proprietary software increased slightly in 2022, while the TTF for open source software continued to fall.

"This implies that the open source ecosystem is improving security response over time and trending towards providing better security than the closed source world," Snyk stated in the report.

In fact, the time to fix for critical- and high-severity open source vulnerabilities fell by about half in 2022, the third year that the TTF has declined.

"There's a lot of things driving that, especially just overall, like the awareness amongst open source maintainers of security issues, particularly supply chain security issues, which really ... impact maintainers," Snyk's Degges says. "Overall, we've seen a really huge improvement in the open source community in the last year."

About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
