Security and development are still two different worlds, with open source developers resistant to spending time finding and fixing vulnerabilities.


Coding new features, improving tools, and working on new ideas are the top three activities that motivate open source developers to continue coding. At the bottom of the list? Security.

In a survey of 603 free and open source software (FOSS) contributors, the Linux Foundation's Open Source Security Foundation (OpenSSF) and the Laboratory for Innovation Science at Harvard University (LISH) found that the average FOSS developer spent only 2.3% of their time improving the security of their code. While contributors expressed a desire to spend significantly more time on their top three activities, they did not feel compelled to spend additional time on security, according to the 2020 FOSS Contributor Survey released this week.

Developers' descriptions of security and secure coding — a "soul-withering chore" and an "insufferably boring procedural hindrance" — highlight a significant gap between companies' desire to harden their applications against attack and their ability to get their own developers on board, says Frank Nagle, a Harvard Business School professor and a contributing author of the report analyzing the survey results.

"It appears that this 'shifting left' has not fully pervaded the minds of FOSS developers," he says. "Although we did not specifically ask whether developers think security is important, they likely understand that is a concern, but believe others should deal with it."

Open source components and applications account for more than 70% of the code included in modern applications, making the security of those components of paramount concern. Yet, open source developers are more focused on working on the latest tools and implementing their own priorities, according to the 2020 FOSS Contributor Survey report.

The perception that open source components often have unresolved vulnerabilities has led more companies to implement a variety of security checks and procedures: more than half (55%) require regular patches and updates, 49% allow or block specific components, and 47% use a manual review process to approve specific components, according to the DevSecOps Practices and Open Source Management report published this week by software security firm Synopsys.

Companies' approaches to open source software continue to be uneven, says Tim Mackey, principal security strategist for Synopsys.

"One key takeaway from this report is that greater automation is required to inventory open source usage," he says. "From there, businesses need to develop and implement processes to benefit from all the innovation occurring within open source communities."

Companies are still figuring out how to integrate security into their DevOps pipelines, according to the Synopsys survey. While a third of companies consider their approach to DevSecOps to be mature, another 40% have only limited implementations or pilots, and the remaining 27% are still researching DevSecOps or do not plan to adopt it.

Media coverage of specific open source vulnerabilities and the general issue of open source security has prompted many companies to put more stringent controls in place and migrate to better-maintained open source projects, according to Synopsys's report. 

However, media coverage of a particular threat is not a good indicator of how dangerous a vulnerability or flawed component may be, says Synopsys's Mackey.

"What we should recognize is that media coverage will cause non-technical people to start asking questions," he says. "Those non-technical people want to ensure that their business isn't in the news for a similar event and will start to ask questions about how open source security is managed within their organization. Having a well-defined process, one which is able to quickly identify the impact of a new vulnerability, goes a long way to calming concerns."

The FOSS Contributor Survey suggests that companies start by making secure code a business requirement. Writing simpler, well-commented code, automating tests and security checks, and using memory-safe languages can all minimize coding mistakes.
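One way to put the advice about automating tests and security checks into practice is to fail a build when either the test suite or a dependency audit fails. The snippet below is a minimal, hypothetical sketch of that idea for a Python project, assuming the pytest test runner and the pip-audit vulnerability scanner are installed; it is illustrative wiring, not a recommendation drawn from the report:

    # Hypothetical sketch: run tests and a dependency vulnerability audit
    # in one pass, returning a nonzero exit code if either check fails.
    import subprocess
    import sys

    def run_checks():
        checks = [
            ["pytest"],     # run the project's automated tests
            ["pip-audit"],  # scan installed dependencies for known vulnerabilities
        ]
        for cmd in checks:
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print(f"Check failed: {' '.join(cmd)}", file=sys.stderr)
                return result.returncode
        return 0

    if __name__ == "__main__":
        sys.exit(run_checks())

Wired into a continuous integration job or a pre-commit hook, a script like this makes the security check part of the same routine developers already run, rather than a separate chore.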

"As we see an increasing number of companies actively paying their employees to work on FOSS projects, these employers should incentivize their employees to both write secure code from the beginning, and also spend some time helping find and address existing security vulnerabilities," Harvard University's Nagle says.

Companies that do not perform due diligence could find that the open source building blocks of their applications have introduced security vulnerabilities into their products. On Dec. 8, for example, network-security firm Forescout disclosed vulnerabilities in four different open source TCP/IP stacks installed on millions of connected devices and routers.

The Open Source Security Foundation recommended that organizations that pay employees to contribute to open source projects also contribute to security audits and have those employees rewrite portions or components of those libraries. Part of such a rewrite could be switching to a memory-safe language, the FOSS Contributor Survey report said.

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

