
Risk

8/30/2013 05:00 PM

Getting To The Root Of Application Security Problems

Lack of root cause analysis following vulnerability testing keeps app sec teams treating symptoms rather than the disease of insecure coding

Though many enterprises invest in security testing that ranges from automated vulnerability scans to full-blown penetration testing, organizations rarely perform root cause analysis on the results and feed that information back into the application development lifecycle. Many experts within the security community say that lack of root cause analysis is keeping application security stuck in a rut.

"In most cases, organizations focus on solving the symptom and very rarely focus on addressing the underlying causes of their vulnerabilities," says Ryan Poppa, senior product manager for Rapid7. "Vulnerabilities are an unfortunate part of the development cycle, but it should be taken by organizations as an opportunity to learn and to move the ball forward, instead of just being seen as a cost to the business that needs to be minimized."

According to Caitlin Johanson, security architect at Veracode, all too often organizations continually let history repeat itself rather than learning from ghosts of application security past. This is why organizations continue to see the same types of vulnerabilities crop up over and over again in their code, even within newly developed applications.

"Your best practices should always be based on your worst failures. If you have reported incidents and keep tripping over the things you swept under the rug, it’s time to face the music and develop applications the right way," Johanson says.

[Are you missing the downsides of big data security analysis? See 3 Inconvenient Truths About Big Data In Security Analysis.]

According to Ed Adams, CEO of Security Innovation, while automated scanners have made it easy for a fairly low-level technician to identify potential problems in software, an organization needs someone who has an understanding of how the system functions and access to the source code to fix the problem.

"Here's the rub: recent research has shown that developers lack the standards and education needed to write secure code and fix vulnerabilities identified via security assessments," he says, recent research his firm commissioned from the Ponemon Institute.

Further exacerbating the problem is the issue of false positives, which "drives developers nuts" as they chase down non-existent problems at the expense of building out future functionality in other projects. Meanwhile, the remediation guidance provided by most tools is watered down to the point of ineffectiveness.

"[The tools are] often just regurgitating stock CWE content which has no context for a developer and is agnostic to the language and platform in which the developer is coding," he says. "This means it's mostly useless and tools often become shelfware."

More problematic, though, is that coders on one side and vulnerability scanning or penetration testing experts on the other bring skill sets and an ethos from two different worlds.

"Programming and penetration testing are fields that require constant learning and unfortunately the two fields don't always line up," says Joshua Crumbaugh, founder of Nagasec, a security research and penetration testing firm. "Programmers want to write more secure code and penetration testers want to help protect the code. But most never have time to take away from their studies to cross train in the other field and though most penetration testers know some coding they tend to use very different styles, functions and even languages."

Long term, bridging that gap will require educational institutions to put a greater emphasis on information security across all IT and programming-based degrees, says Crumbaugh, who believes that some companies could even go so far as to encourage IT staffers and programmers to take penetration testing courses to understand the security mindset.

But that could take years. What can organizations do to start employing root cause analysis in the here and now? Right off the bat, Adams says organizations need to start with context.

"To leverage this vulnerability information to the fullest, you've got to make it contextual for the developer," he says. " Start by asking your developers if they understand enough about the vulnerability reported to fix the problem and code securely per your organizations policies and standards. Chances are the answer will be no."

To truly offer useful context, developers need to know at least five things, Adams says: what countermeasures are possible against a particular vulnerability; which countermeasure is most appropriate for the application, or preferred by the organization; whether the framework they're using has built-in defenses available; how to implement the chosen countermeasure correctly; and how to code it in the development language at hand.
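To make that concrete, here is a minimal, hypothetical Python sketch of the kind of context Adams describes, using one common finding, SQL injection, as the example: the flagged pattern sits next to a countermeasure the language's own database driver already provides. The table, data, and function names are invented for illustration.

import sqlite3

def find_user_vulnerable(conn, username):
    # Flagged pattern: untrusted input is concatenated into the SQL string,
    # so a value like "x' OR '1'='1" rewrites the query itself.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn, username):
    # Driver-appropriate countermeasure: a parameterized query keeps the
    # input as data, no matter what characters it contains.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "x' OR '1'='1"
    print(find_user_vulnerable(conn, payload))  # returns every row in the table
    print(find_user_fixed(conn, payload))       # returns nothing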

Additionally, organizations should consider rethinking their definitions of terms as fundamental as "quality assurance" and "bugs."

"Organizations need to stop being afraid to redefine what ‘Quality Assurance’ means, because it sure doesn’t mean just functional code anymore," Johanson says. "Quality applications should function, but function securely."

Similarly, organizations tend to have an easier time fixing the root cause of security problems if they stop calling them vulnerabilities and start lumping them in with all the other bugs they fix.

"That’s how developers speak and they know how to triage, log and prioritize 'bugs,'" Adams says. "It sounds silly, but for whatever reason, when organizations treat security bugs differently, the process around remediation and triaging seems to stumble."

Organizations should also consider redesigning the way security teams interact with dev teams.

"Stepping up their game means bringing development and security together. Force them to be friends--well, not force--but they need each other," Johanson says. "The applications will only be as intelligent as the folks behind them."

Adams says it helps to visualize a pyramid with a very small team of security professionals at the top and an army of developers at the bottom.

"High performance organizations nominate a security champion on each development team to interface with the security team. This is the middle layer of the pyramid," he says. "The important point is that the security champion is part of the development organization. That way you've got software engineers interacting with the army of developers."


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Comments
marktroester, User Rank: Apprentice, 8/31/2013 4:24:46 PM
re: Getting To The Root Of Application Security Problems
The other thing to consider is that application security needs to morph so that it works with how applications are built today. Many applications are now developed using agile and are constructed of components, many of them open source. In fact, research shows that an application is typically composed of 80-90% components. So application security approaches that rely on source code aren't that useful for companies constructing apps from third-party components: they may not have the source code, and they are using the components precisely so they don't have to deal with the source. For agile, if the application security process takes too long, or results in many false positives, it will hinder the development process - and you know what that means? Developers will just bypass the security process and seek forgiveness later. For more on this, check out this blog post on how application security needs to change to stay relevant.

http://blog.sonatype.com/peopl...

Thanks,

Mark Troester
Sonatype
@mtroester