An annual survey of penetration testers finds that although machines can quickly find many classes of vulnerabilities, human analysts are still necessary to gauge the severity of discovered issues.

Automated analysis tools excel at finding certain types of vulnerabilities — from cross-site scripting flaws to SQL injection and from misconfigured security headers to remote file inclusion — but humans continue to be necessary to evaluate the severity of such flaws, according to an analysis of 2,500 penetration tests released on June 9.

In its annual "State of Pentesting 2020" report, security-services firm Cobalt.io found that about two-thirds of its penetration testing engagements involved testing either web applications or web-based application programming interfaces (APIs), with misconfigurations topping the list of security threats discovered in 2019, followed by cross-site scripting and authentication issues. Automated security testing continues to be an efficient way to find these issues, especially as 37% of application security practitioners have to deal with weekly or daily release cadences, the report states.

Yet humans are still needed to find more nebulous classes of vulnerabilities, such as business logic bypasses, race conditions, and attack chains that involve exploitation of multiple vulnerabilities, says Caroline Wong, chief strategy officer for Cobalt.io.

"Anyone who is only using people is missing out of efficiencies that can only be found by machine, and anyone that is only using machines is missing out on whole classes of vulnerabilities," she says. "Use scanners to find your low-hanging fruit, and then use that information to provide context for analyzing the risk posed by those issues."

The survey underscored that certain challenges remain for automated vulnerability scanning, including tuning the analysis and testing systems, triaging vulnerabilities, and providing additional context on the risk that a particular vulnerability poses. In addition, business logic issues — such as manipulating the price of goods or abusing password recovery systems — pose problems for automated analysis, according to Cobalt.io.

"Scanners are not capable of manipulating business logic rules or identifying misuse of an execution flow," the report states. "The ability to identify this class of vulnerability requires a complete understanding of the web application and necessitates creative thinking."

Penetration testing, however, still faces an uphill struggle to become more integrated into DevOps and other agile development cycles, according to another report. Only half of security operations centers typically have visibility into DevOps activities, and only a third of penetration test results are shared between DevOps teams and security teams, according to Fortinet's "2019 State of DevOps Security Report."

Overall, only 58% of development teams thought that they had caught the majority of vulnerabilities before their code went into production, according to that report. 

"Respondents have some negative feelings about security technology in general — perhaps because their success measurements focus on speed and efficiency over security," the Fortinet report states. "Specifically, when asked how security solutions can negatively impact success, respondents’ biggest complaints were business concerns: slowing of development cycles, increased complexity, and adoption of challenging security standards."

To some degree, Cobalt.io sees the relationship between application-security teams and development teams changing, albeit slowly. Because of the shift to integrating development and operations, more security teams are working together with their counterparts in software engineering, Cobalt.io's Wong says. 

More than three-quarters of companies report that the security and software engineering teams have a strong relationship, according to the report.

"While security people can try to help to find vulnerabilities, they simply cannot fix those vulnerabilities unless they are working collaboratively with software developers and engineering teams," she says. "This is really different from what that relationship looked like in the past."

The survey also found that the accidental misconfiguration of cloud infrastructure and cloud applications continues to be a major source of vulnerabilities. Misconfiguration remains the No. 1 issue found by penetration testers.

These continued unforced errors have become even more critical because companies are relying on their cloud services and infrastructure to operate their businesses. Seventy-one percent of companies relied on cloud environments for their business, according to Cobalt.io's report.

Mature security programs are increasingly aiming to gain complete coverage with application-security testing and are focused on security rather than just compliance, according to Wong. 

"In the past, more people would say they do pen testing for compliance," she says. "Today, organizations are actually doing it because they want to make their applications more secure."

About the Author(s)

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
