Penetration Tests: Not Getting 'In' Is An Option

Pen testers must get beyond just breaking in, and clients need to understand how the tester's results map to business risk
The goal of a penetration tester is often misunderstood. It isn't centered around getting shell on a client's critical Web server or domain admin within its Microsoft Active Directory environment. Sure, those things are fun and can cause us to dance around like tweens at a Katy Perry concert, but there's more to penetration testing. The purpose of the test is to demonstrate the risk and impact that existing vulnerabilities, misconfigurations, and lack of security awareness training can have on a business.
There was a conversation a few months ago on a penetration-testing mailing list about what to do when you don't get in. "In," in this context, is the coveted admin shell or domain admin credentials that show off what a master of pwnage you really are and give that warm, fuzzy feeling that you kicked butt. While getting in is fun and obviously much more interesting than not getting in, the fact is that we can't get in every time, and there are good and bad reasons for that.
The most common roadblock is having a limited scope due to restrictions set forth by the client. For example, the client might have 42,000 employees and 1,337 external IP addresses where 97 percent of the IPs belong to live hosts running approximately 93 percent of the time, but it only wants you to test 12 of those IPs over a two-day period. Any chance the results of testing those 12 hosts will give perspective into the true risk the organization faces from its IT resources? Not likely.
But what if the entire network and users were within scope and you still couldn't get in? The first hurdle might be time and resources. We typically hear that malicious attackers have all the time in the world to get in, which is why they will always get in. It isn't unheard of for an attacker to spend weeks or months doing recon and planning an attack, but this isn't a luxury penetration testers have. There are deadlines, and time is money.
What if the target company has a mature security program? For example, let's say you encounter a security program that has been around for years with top-notch security professionals and a generous budget. They've locked down the perimeter, segmented the internal network properly, locked down the workstations, and performed highly effective user awareness training. I know it sounds like a pipe dream, but those places do exist, and what if you were lucky enough to come up against it?
One member of the list stated it perfectly by saying that in an environment like the one just described, you're not going to find the vulnerabilities that you'd be able to easily pick up using a vulnerability scanner. The goal at that point is to use the test to validate the program's efforts to be sure nothing is missed. Not getting in or finding any serious vulnerabilities shouldn't be a reflection on the penetration tester, although the report should accurately reflect the methodology, in detail, to help the client be sure the test was thorough.
Beyond just validation, start focusing on other areas that are not included in the usual network penetration test. For example, code reviews of internally developed Web apps and Web application penetration tests against them would be a good start. Data leakage assessments are another interesting area that can involve everything from analyzing what's being posted on social media and company sites to sniffing the outbound corporate network traffic to see what's getting out.
Those are just a few ideas. In the end, the penetration test must provide value to the client and accurately reflect the risk posed to the business. Of course, the client should want and expect the same, rather than focusing on getting a checkbox filled on its compliance report (but we know how these things go). Just remember that not getting that shell or owning that DC is OK, but the client needs to understand why, know that its money was well spent, and be provided with options for enhancing the current and future tests.
John Sawyer is a Senior Security Analyst with InGuardians. The views and opinions expressed in this blog are his own and do not represent the views and opinions of his employer. He can be reached at firstname.lastname@example.org and found on Twitter @johnhsawyer.