A social engineer points out gaping holes in businesses' human security and shares lessons learned from years of phishing research.

Kelly Sheridan, Former Senior Editor, Dark Reading

November 2, 2017


Imagine this: A bank's vice president is at work on a day when he knows a penetration test is scheduled. The phone rings. He's expecting a call, so he picks up. The caller claims to be phoning to fix his email. The VP proceeds to share sensitive company data over the phone, information an attacker could use to target his bank and other businesses as well.

The scenario comes from a real penetration test conducted by PeopleSec founder Joshua Crumbaugh, who says it "highlights a massive problem that happens on a daily basis." He will present a recording of the call, lessons learned, and best practices from years of social engineering research, during his Black Hat Europe session "How to Rob a Bank Over the Phone — Lessons Learned and Real Audio from an Actual Social Engineering Engagement."

There should have been "no way" that someone who knew a penetration test was scheduled for that specific day would fall for a phishing attack, he says. It happened anyway, partly because Crumbaugh did his homework to establish a "pretext" for the conversation: a reason for calling, a background story to make himself seem credible and helpful to the target.

"Any social engineering engagement is only as effective as the data you have," he says.

Crumbaugh researched the bank and learned it used a small ISP for its email. Several of the ISP's users were complaining about terrible service, lost messages, and other problems. Knowing this, he posed as a representative of the ISP when he called the bank. Because the VP believed he was calling to address the email problems, he was willing to share whatever data might help.

"You can never underestimate a human's blindness when it comes to getting what they want," Crumbaugh points out. In this case, the target so badly wanted to fix the issue that he provided several pieces of sensitive information, including the bank's antivirus provider and managed services provider.

"Understanding these things from the outside makes it easy to bypass them once you get on the inside," he explains. "As soon as you know what controls they have in place, you know how to bypass it."

Crumbaugh has spent several years investigating social engineering tactics, and the past three focusing on phishing, to build out HumanSAMM, a project aimed at creating a framework for addressing human security concerns. A career in red teaming and penetration testing taught him human security is the "one flaw in every single organization" that has security leaders stumped.

"As much fun as it was to be able to walk around their security controls and get this level of access, it shed light on the fact there's a massive problem here that no one seems to have any way of fixing," he explains.

Years of research have led Crumbaugh to several best practices and a handful of surprises. For one, sales departments are typically the most vulnerable: their employees are 400% more likely to click a phishing link than those in other departments, and not always because they're chasing leads.

"In general, they just click more on everything," he says. "Something about sales just makes them chronic clickers." Surprisingly, developers are the second-most-likely group to click on phishing attacks, which is risky because most developers have elevated rights on the network.

Businesses can lessen their risk with security awareness training programs, Crumbaugh says, but they need to be smart about it. One big problem is that many of these programs are "one size fits all," training low-risk and high-risk employees the same way.

"You can be the most secure user in the organization, sitting next to the least secure user in the organization, and you get the same amount of training," he explains. He recommends customized training, down to the individual user. If you don't have the technology for this, he advises creating lists of "risky" and "secure" users, and building programs for each category.
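If no dedicated tooling is available, the "risky" and "secure" lists Crumbaugh suggests could be built directly from phishing-simulation results. The sketch below is a minimal illustration of that idea; the field names and the 20% click-rate cutoff are assumptions for the example, not figures from the article.

```python
# Hypothetical sketch: bucket users into "risky" and "secure" training
# tracks based on how often they clicked simulated phishing lures.
# The 20% threshold and record format are illustrative assumptions.

RISKY_THRESHOLD = 0.2  # assumed cutoff: clicked more than 20% of test phishes

def bucket_users(results):
    """results: list of dicts like {"user": ..., "sent": int, "clicked": int}."""
    risky, secure = [], []
    for r in results:
        rate = r["clicked"] / r["sent"] if r["sent"] else 0.0
        (risky if rate > RISKY_THRESHOLD else secure).append(r["user"])
    return risky, secure

results = [
    {"user": "alice", "sent": 10, "clicked": 4},
    {"user": "bob",   "sent": 10, "clicked": 0},
]
risky, secure = bucket_users(results)
print(risky)   # ['alice']
print(secure)  # ['bob']
```

Each list can then be assigned its own training program, with the risky group getting more frequent and more targeted exercises.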

Another big mistake is failing to use metrics to learn who is falling for attacks and why. Most companies are very linear, Crumbaugh explains: they want to know whether someone clicked a bad link, but they don't look at the type of phish each person falls for. These more detailed metrics could help businesses customize training by department and ultimately get better results.
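The richer metrics described above amount to breaking simulation results down by more than click/no-click. A minimal sketch of that kind of breakdown, grouping clicks by department and lure type, might look like this; the sample records and category names are illustrative assumptions, not data from the article.

```python
# Hypothetical sketch: tally phishing-simulation clicks by department and
# lure type, rather than a single company-wide click count. All sample
# data and field names here are made up for illustration.

from collections import defaultdict

events = [
    {"dept": "sales", "lure": "invoice",    "clicked": True},
    {"dept": "sales", "lure": "shipping",   "clicked": True},
    {"dept": "dev",   "lure": "credential", "clicked": True},
    {"dept": "dev",   "lure": "invoice",    "clicked": False},
]

clicks = defaultdict(int)
for e in events:
    if e["clicked"]:
        clicks[(e["dept"], e["lure"])] += 1

for (dept, lure), n in sorted(clicks.items()):
    print(f"{dept:5s} {lure:10s} {n}")
```

A table like this shows not just that sales clicks more, but which lures work on which teams, which is the detail that lets training be tailored per department.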



About the Author(s)

Kelly Sheridan

Former Senior Editor, Dark Reading

Kelly Sheridan was formerly a Staff Editor at Dark Reading, where she focused on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial services. Sheridan earned her BA in English at Villanova University. You can follow her on Twitter @kellymsheridan.

