Lack of root cause analysis following vulnerability testing keeps app sec teams treating symptoms rather than the disease of insecure coding

Though many enterprises invest in security testing, ranging from automated vulnerability scans to full-blown penetration testing, organizations rarely perform root cause analysis on the results and feed that information back into the application development lifecycle. Many experts within the security community say that this lack of root cause analysis is keeping application security stuck in a rut.

"In most cases, organizations focus on solving the symptom and very rarely focus on addressing the underlying causes of their vulnerabilities," says Ryan Poppa, senior product manager for Rapid7. "Vulnerabilities are an unfortunate part of the development cycle, but it should be taken by organizations as an opportunity to learn and to move the ball forward, instead of just being seen as a cost to the business that needs to be minimized."

According to Caitlin Johanson, security architect at Veracode, all too often organizations continually let history repeat itself rather than learning from ghosts of application security past. This is why organizations continue to see the same types of vulnerabilities crop up over and over again in their code, even within newly developed applications.

"Your best practices should always be based on your worst failures. If you have reported incidents and keep tripping over the things you swept under the rug, it’s time to face the music and develop applications the right way," Johanson says.

[Are you missing the downsides of big data security analysis? See 3 Inconvenient Truths About Big Data In Security Analysis.]

According to Ed Adams, CEO of Security Innovation, while automated scanners have made it easy for a fairly low-level technician to identify potential problems in software, an organization needs someone who has an understanding of how the system functions and access to the source code to fix the problem.

"Here's the rub: recent research has shown that developers lack the standards and education needed to write secure code and fix vulnerabilities identified via security assessments," he says, recent research his firm commissioned from the Ponemon Institute.

Further exacerbating the problem is the issue of false positives, which "drives developers nuts" as they chase down non-existent problems at the expense of building out future functionality in other projects. Meanwhile, the remediation guidance provided by most tools is watered down to the point of ineffectiveness.

"[The tools are] often just regurgitating stock CWE content which has no context for a developer and is agnostic to the language and platform in which the developer is coding," he says. "This means it's mostly useless and tools often become shelfware."

More problematic, though, is that the skillsets and ethos held by coders and those held by vulnerability scanning or penetration testing experts come from two different worlds.

"Programming and penetration testing are fields that require constant learning and unfortunately the two fields don't always line up," says Joshua Crumbaugh, founder of Nagasec, a security research and penetration testing firm. "Programmers want to write more secure code and penetration testers want to help protect the code. But most never have time to take away from their studies to cross train in the other field and though most penetration testers know some coding they tend to use very different styles, functions and even languages."

Long term, bridging that gap will require educational institutions to put a greater emphasis on information security across all IT and programming-based degrees, says Crumbaugh, who believes that some companies could even go so far as to encourage IT staffers and programmers to take penetration testing courses to understand the security mindset.

But that could take years. What can organizations do to start employing root cause analysis in the here and now? Right off the bat, Adams says organizations need to start with context.

"To leverage this vulnerability information to the fullest, you've got to make it contextual for the developer," he says. " Start by asking your developers if they understand enough about the vulnerability reported to fix the problem and code securely per your organizations policies and standards. Chances are the answer will be no."

To truly offer useful context, developers need to know at least five important things, Adams says: what the possible countermeasures are for a particular vulnerability; which countermeasure is most appropriate for the application or preferred by the organization; whether the framework they're using has built-in defenses available; how to implement the chosen countermeasure correctly; and how to code that countermeasure in the appropriate development language.
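As a concrete illustration of what that kind of context might look like, the sketch below uses SQL injection in Python with the standard sqlite3 driver as a hypothetical example; the vulnerability class, language, and functions are assumptions for illustration, not details drawn from Adams' list. The point is that guidance tied to the developer's own framework names both the root cause and the preferred countermeasure, rather than restating generic advisory text.

```python
# Illustrative only: SQL injection stands in for "a particular vulnerability";
# the article does not prescribe a specific flaw, language, or fix.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Root cause: untrusted input concatenated directly into the query string.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Countermeasure built into the driver: parameterized queries keep data
    # and SQL separate, so this whole class of bug cannot recur.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Remediation guidance written at this level tells the developer not just that a flaw exists, but which of the framework's built-in defenses to reach for and how to write it in the language they are already using.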

Additionally, organizations should consider rethinking their definitions of terms as fundamental as "Quality Assurance" and "bugs."

"Organizations need to stop being afraid to redefine what ‘Quality Assurance’ means, because it sure doesn’t mean just functional code anymore," Johanson says. "Quality applications should function, but function securely."

Similarly, organizations tend to have an easier time fixing the root cause of security problems if they stop calling them vulnerabilities and start lumping them in with all the other bugs they fix.

"That’s how developers speak and they know how to triage, log and prioritize 'bugs,'" Adams says. "It sounds silly, but for whatever reason, when organizations treat security bugs differently, the process around remediation and triaging seems to stumble."

Organizations should also consider redesigning the way security teams interact with dev teams.

"Stepping up their game means bringing development and security together. Force them to be friends--well, not force--but they need each other," Johanson says. "The applications will only be as intelligent as the folks behind them."

Adams says it helps to visualize a pyramid with a small team of security professionals at the top and an army of developers at the bottom.

"High performance organizations nominate a security champion on each development team to interface with the security team. This is the middle layer of the pyramid," he says. "The important point is that the security champion is part of the development organization. That way you've got software engineers interacting with the army of developers."


About the Author

Ericka Chickowski, Contributing Writer

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
