Getting To The Root Of Application Security Problems

Lack of root cause analysis following vulnerability testing keeps app sec teams treating symptoms rather than the disease of insecure coding

Though many enterprises invest in security testing ranging from automated vulnerability scans to full-scale penetration testing, organizations rarely perform root-cause analysis on the results and feed that information back into the application development lifecycle. Many experts within the security community say that this lack of root-cause analysis is keeping application security stuck in a rut.

"In most cases, organizations focus on solving the symptom and very rarely focus on addressing the underlying causes of their vulnerabilities," says Ryan Poppa, senior product manager for Rapid7. "Vulnerabilities are an unfortunate part of the development cycle, but they should be taken by organizations as an opportunity to learn and to move the ball forward, instead of just being seen as a cost to the business that needs to be minimized."

According to Caitlin Johanson, security architect at Veracode, all too often organizations continually let history repeat itself rather than learning from ghosts of application security past. This is why organizations continue to see the same types of vulnerabilities crop up over and over again in their code, even within newly developed applications.

"Your best practices should always be based on your worst failures. If you have reported incidents and keep tripping over the things you swept under the rug, it’s time to face the music and develop applications the right way," Johanson says.

[Are you missing the downsides of big data security analysis? See 3 Inconvenient Truths About Big Data In Security Analysis.]

According to Ed Adams, CEO of Security Innovation, while automated scanners have made it easy for a fairly low-level technician to identify potential problems in software, an organization needs someone who has an understanding of how the system functions and access to the source code to fix the problem.

"Here's the rub: recent research has shown that developers lack the standards and education needed to write secure code and fix vulnerabilities identified via security assessments," he says, citing research his firm commissioned from the Ponemon Institute.

Further exacerbating the problem is the issue of false positives, which "drives developers nuts" as they chase down non-existent problems at the expense of building out future functionality in other projects. Meanwhile, the remediation guidance provided by most tools is watered down to the point of ineffectiveness.

"[The tools are] often just regurgitating stock CWE content which has no context for a developer and is agnostic to the language and platform in which the developer is coding," he says. "This means it's mostly useless and tools often become shelfware."
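Adams' point about stock CWE content can be made concrete. A generic CWE-89 (SQL injection) write-up tells a developer to "sanitize input"; contextual guidance shows the root-cause fix in the developer's own language and database driver. A minimal Python/sqlite3 sketch (the table and function names here are illustrative, not taken from any tool's output):

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # String concatenation: the root cause behind CWE-89 (SQL injection).
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_fixed(conn, username):
    # Parameterized query: the driver keeps user data separate from SQL
    # syntax, removing the root cause instead of filtering symptoms.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(len(find_user_vulnerable(conn, payload)))  # 2 rows: injection matched everything
print(len(find_user_fixed(conn, payload)))       # 0 rows: payload treated as plain data
```

The remediation is the same few lines in any CWE database, but only the contextual version tells the developer which API call to reach for in the stack they are actually using.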

More problematic, though, is that the skillsets and ethos held by coders and those held by vulnerability scanning or penetration testing experts come from two different worlds.

"Programming and penetration testing are fields that require constant learning, and unfortunately the two fields don't always line up," says Joshua Crumbaugh, founder of Nagasec, a security research and penetration testing firm. "Programmers want to write more secure code and penetration testers want to help protect the code. But most never have time to take away from their studies to cross-train in the other field, and though most penetration testers know some coding, they tend to use very different styles, functions, and even languages."

Long term, bridging that gap will require educational institutions to put a greater emphasis on information security across all IT and programming-based degrees, says Crumbaugh, who believes that some companies could even go so far as to encourage IT staffers and programmers to take penetration testing courses to understand the security mindset.

But that could take years. What can organizations do to start employing root cause analysis in the here and now? Right off the bat, Adams says organizations need to start with context.

"To leverage this vulnerability information to the fullest, you've got to make it contextual for the developer," he says. "Start by asking your developers if they understand enough about the vulnerability reported to fix the problem and code securely per your organization's policies and standards. Chances are the answer will be no."

To truly offer useful context, developers need to know at least five things, Adams says: what countermeasures can be taken against a particular vulnerability; which countermeasure is most appropriate for the application or preferred by the organization; whether the framework they are using has built-in defenses available; how to implement the chosen countermeasure correctly; and how to code it in the appropriate development language.
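The framework-defense point is worth illustrating. For cross-site scripting, for example, a built-in output-encoding routine addresses the root cause, while a hand-rolled blacklist only chases known-bad strings. A minimal Python sketch (the rendering functions are hypothetical, named only for this example):

```python
import html

def render_comment_naive(comment):
    # Hand-rolled blacklist: treats the symptom and is trivially bypassed
    # by nesting the forbidden token inside itself.
    return "<p>" + comment.replace("<script>", "") + "</p>"

def render_comment_encoded(comment):
    # Built-in countermeasure: contextual output encoding renders any
    # markup in user data inert, whatever form the payload takes.
    return "<p>" + html.escape(comment) + "</p>"

payload = "<scr<script>ipt>alert(1)</script>"
print(render_comment_naive(payload))    # stripping "<script>" reassembles a live script tag
print(render_comment_encoded(payload))  # all angle brackets emerge as &lt; and &gt;
```

This is the difference between knowing a countermeasure exists and knowing which one the platform already provides: here the standard library does the encoding, so the developer writes no filtering logic at all.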

Additionally, organizations should rethink their definitions of terms as fundamental as "Quality Assurance" and "bugs."

"Organizations need to stop being afraid to redefine what ‘Quality Assurance’ means, because it sure doesn’t mean just functional code anymore," Johanson says. "Quality applications should function, but function securely."

Similarly, organizations tend to have an easier time fixing the root cause of security problems if they stop calling them vulnerabilities and start lumping them in with all the other bugs they fix.

"That’s how developers speak and they know how to triage, log and prioritize 'bugs,'" Adams says. "It sounds silly, but for whatever reason, when organizations treat security bugs differently, the process around remediation and triaging seems to stumble."

Organizations should also consider redesigning the way security teams interact with dev teams.

"Stepping up their game means bringing development and security together. Force them to be friends--well, not force--but they need each other," Johanson says. "The applications will only be as intelligent as the folks behind them."

Adams says that it helps to visualize a pyramid with a very small team of security professionals at the top and an army of developers at the bottom.

"High performance organizations nominate a security champion on each development team to interface with the security team. This is the middle layer of the pyramid," he says. "The important point is that the security champion is part of the development organization. That way you've got software engineers interacting with the army of developers."


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
