When it works, hybrid -- or "glass-box" -- scanning combines dynamic, black-box analysis with static, white-box code analysis to find bugs and cut down on false positives.

Dark Reading Staff, Dark Reading

February 3, 2012

4 Min Read

In the past, automated software security analysis took one of two approaches: static testing of source code to find probable defects, or dynamic testing of the executable at runtime. In the battle between what are known, respectively, as white-box and black-box security analysis, there was no middle ground.

Over the past year, however, that's changed: Software security companies have developed products that merge the two models, delivering hybrid-analysis -- or "glass-box" -- systems that combine the expert-system analysis of white-box testing with the ability to evaluate any flaws they uncover by exercising them at runtime. The combination promises to find more vulnerabilities, cut down on false alarms, and deliver more relevant information on exploitability, says Patrick Vandenberg, program director for IBM Security.

"Independently, white-box and black-box have deficiencies," he says. "The more you blend these different methodologies, the more accurate and the more coverage you get."

Better testing techniques are needed as more companies move toward vetting their code for vulnerabilities. While companies are using static analysis as part of a move to a secure development methodology, catching and fixing bugs before their software ever ships, a hybrid method could save time by focusing their efforts on the most exploitable vulnerabilities, IBM's Vandenberg says.

"Developers still focus very much on features, features, features," he says. "More are becoming aware and are willing to do SDL, but it is very much in the minority."

While hybrid testing has been a topic of research for more than a decade, companies have pushed out a number of second-generation products that better combine the two standalone analysis models. IBM, HP Fortify, and others released their latest hybrid products at last year's RSA Conference.

"The correlation of both static and dynamic testing solutions increases the accuracy of vulnerability detection, reduction of both false positives and false negatives, and broader coverage of the application," said Joseph Feiman, vice president at business research firm Gartner, in a release announcing HP's latest product.

This year, the companies are pushing the products as better solutions than black-box or white-box testing alone. Historically, each method has had its own strengths and weaknesses. Dynamic application security testing (DAST) can quickly enumerate a program's attack surface, use a variety of techniques to attack the software and find weaknesses, and deliver low false-positive rates; yet runtime scanning sees only the surface symptoms of the vulnerabilities it finds and will miss flaws in portions of the program outside its reach. Static application security testing (SAST) can more completely explore problems in source code and document the cause of defects; yet the process can also produce a large number of false positives, flagging defects that have no real impact on the security of the program.
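
The difference is easiest to see in a deliberately tiny example -- hypothetical code, not either class of product's actual engine. A pattern-matching "static" check flags every string-formatted query in the source, including one that is already safe, while a "dynamic" check knows nothing about the source and only notices that a payload changed the running program's behavior:

    import re
    import sqlite3

    # Hypothetical application source, as a static analyzer would see it.
    SOURCE = '''
    def find_user(db, name):
        return db.execute("SELECT id FROM users WHERE name = '%s'" % name)        # exploitable
    def find_by_id(db, number):
        return db.execute("SELECT name FROM users WHERE id = %d" % int(number))   # int() already neutralizes this
    '''

    # Toy "SAST": flags every formatted query it can see -- including the safe
    # one, a classic false positive -- but it can point at the exact line.
    static_findings = [line.strip() for line in SOURCE.splitlines()
                       if re.search(r'execute\(.*%', line)]

    # Toy "DAST": treats the program as a black box and simply compares behavior
    # before and after an injection payload.
    def find_user(db, name):
        return db.execute("SELECT id FROM users WHERE name = '%s'" % name).fetchall()

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'alice')")

    baseline = find_user(db, "nobody")
    probe = find_user(db, "nobody' OR '1'='1")
    dynamic_finding = len(probe) > len(baseline)   # extra rows came back: the surface symptom of injection

    print(static_findings)    # two lines flagged; one is a false positive
    print(dynamic_finding)    # True, but with no pointer to the offending line of source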

Combining the two can, in theory, compensate for the weaknesses of each technique used alone. For example, IBM cites tests that show its glass-box system catching triple the vulnerabilities of a black-box system, with no false positives.

In practice, however, the combination of black-box and white-box testing can pose issues, says Chris Wysopal, chief technology officer for application security testing firm Veracode.

"It comes down to a question of, for this effort, how many extra bugs am I finding?" he says. "Could I have done something else with my time to get better results? Hybrid testing seems worthwhile, but it is limited and it's not a silver bullet."

Putting more effort into honing the analysis software behind static or dynamic testing can deliver better results, he adds. For example, hybrid testing finds correlations between statically found defects and runtime analysis, but correlation is not necessarily a gauge of whether a vulnerability can be exploited.
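
As a purely hypothetical sketch of that correlation step -- the data shapes below are invented, not any vendor's export format -- a hybrid report might join static and dynamic findings on a shared key such as weakness class and parameter, and promote the matches for triage. As Wysopal notes, the match alone still is not proof of exploitability:

    # Invented finding records for illustration; real tools emit far richer data.
    static_findings = [
        {"id": "S1", "file": "login.py",  "line": 42, "cwe": 89, "param": "username"},
        {"id": "S2", "file": "report.py", "line": 7,  "cwe": 79, "param": "title"},
        {"id": "S3", "file": "admin.py",  "line": 88, "cwe": 89, "param": "role"},
    ]
    dynamic_findings = [
        {"id": "D1", "url": "/login", "cwe": 89, "param": "username", "payload": "' OR '1'='1"},
    ]

    def correlate(static, dynamic):
        """Mark a static finding 'confirmed' when a runtime finding hit the same weakness class and parameter."""
        confirmed = {(d["cwe"], d["param"]) for d in dynamic}
        for s in static:
            s["status"] = "confirmed" if (s["cwe"], s["param"]) in confirmed else "unconfirmed"
        return sorted(static, key=lambda s: s["status"])   # confirmed items sort first for triage

    for finding in correlate(static_findings, dynamic_findings):
        print(finding["id"], finding["file"], finding["status"])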

In addition, hybrid testing requires that an agent be placed on the system being tested, and that can be a problem, says Wysopal. He notes that the industry has been trying to move away from putting agents on systems to get security done.

"Whenever you install an agent, you take a performance hit, and it's possible that the agent is going to crash the machine," he says. "Anything that is an agent-based approach has a cost to it, so you have to ask is it worth it to do that."

The technique still has a lot of room for improvement, acknowledges IBM's Vandenberg. Yet as vendors improve static and dynamic analysis on their own, the combination of the two will become more effective, he says.

"In the hybrid discussion, there has been a lot of hype for a year, year-and-a-half," he says. "It's challenging, but we have the research resources to deal with it and to address security a whole."
