Developers Skip Third-Party Code Checks
Businesses routinely assess their own software for security and quality, but many fail to test code from external vendors that goes into their products, reports Forrester.
When it comes to testing under-development software for bugs and potential security flaws, many businesses today will assess code developed in-house. But almost half fail to require similar checks for any third-party code that also goes into their products.
That finding comes from a new Forrester survey of 336 "software development influencers" in Britain, Canada, France, Germany, and the United States, commissioned by software quality and testing tool vendor Coverity, and conducted in late 2010. The majority of firms surveyed are producing Web-based applications (61%), followed by consumer software (55%), business-to-business enterprise software (49%), embedded software (47%), and cloud-based applications (45%).
Interestingly, more than 90% of businesses that develop software now rely on code provided by globally distributed development teams, as well as third-party software vendors, outsourced developers, and open source providers. But while 69% of organizations use code-testing tools during development, only 44% require that their software suppliers also do so. Furthermore, while 70% run security and vulnerability assessments of code developed in-house, only 35% require that third-party code providers do the same.
"The rise in use of third-party software is not just a trend, it is now the norm," said Jennifer Johnson, director of product marketing for Coverity, in a telephone interview. "Time to market, cost pressure, and the need to be more competitive is driving this."
Some industries--especially the government and healthcare sectors--use relatively little outsourced code. But other sectors, and in particular the mobile industry, rely on it heavily. Overall, 27% of surveyed firms have more than 10 suppliers, and 40% have more than five suppliers.
What happens when buggy code reaches consumers? According to 65% of surveyed businesses, customer satisfaction suffers. In addition, defects discovered during the software development lifecycle, including uptime issues and security vulnerabilities, slow the delivery of products to consumers. Indeed, 47% of businesses also said that undiagnosed software defects increase their time to market.
Time-to-market pressures are often cited as general reasons why bugs in code don't get caught sooner in the software development lifecycle, or before applications get shipped to customers. "This study tells a slightly different story: More organizations treat quality and security as priority concerns," according to Forrester. In particular, "more respondents chose quality metrics over time-to-market to measure the success of developers and development projects."
Furthermore, Forrester found that the metrics businesses use to measure development project success include internal and external customer satisfaction (74%), the number of escalations due to software defects (50%), a reduction in defects from the previous version (49%), time to market (46%), support calls due to unexpected behavior (46%), and uptime (32%).
The emphasis on customer satisfaction is good news. But the study also found that businesses might not be using the best combination of code-review techniques and tools. Notably, the techniques most widely used to analyze and test code are unit testing (36%), automated functional and performance testing (21%), manual code review (14%), automated code testing with static analysis (10%), and automated security testing (9%).
But the number-one technique--unit testing--only sees a small part of the overall application. Accordingly, said the report, "it is not surprising that many defects are discovered late in the development cycle," at which point they cost more to fix.
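To make that gap concrete, consider a minimal, hypothetical C sketch (not taken from the Forrester study or from Coverity's products; the find_account and get_balance functions are invented for illustration). A unit test that exercises only the expected input passes cleanly, while a crash on an unchecked path, the kind of cross-function defect static analysis is designed to surface, is never executed:

```c
/* Illustrative sketch only: hypothetical code, not from the study or Coverity. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct account {
    const char *name;
    int balance;
};

static struct account accounts[] = {
    { "alice", 100 },
    { "bob",   250 },
};

/* Returns NULL when no account matches; the caller below forgets to check. */
static struct account *find_account(const char *name)
{
    for (size_t i = 0; i < sizeof(accounts) / sizeof(accounts[0]); i++) {
        if (strcmp(accounts[i].name, name) == 0)
            return &accounts[i];
    }
    return NULL;
}

/* Defect: no NULL check, so an unknown account name crashes the program. */
static int get_balance(const char *name)
{
    return find_account(name)->balance;
}

/* A typical unit test exercises only the happy path, so it passes and the
 * NULL-dereference path is never run. Interprocedural static analysis, which
 * follows find_account's NULL return into get_balance, would flag it. */
int main(void)
{
    assert(get_balance("alice") == 100);
    printf("unit test passed\n");
    return 0;
}
```

The test succeeds because it only sees the one function and the one input it was written for; the defect lives in how the pieces fit together, which is why such bugs tend to surface late, when they are costlier to fix.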