Most security awareness advice is terrible, just plain bad, and not remotely feasible for your average user.

Rob Ragan, Principal Security Researcher at Bishop Fox

April 30, 2015

While often viewed as the best defense against social engineering, security awareness training is ineffective and expensive. This topic has been hotly debated by security conference panels (like the one I am participating in at Interop 2015), and in various articles, but the focus has usually been on conducting or improving awareness training.

What if the debate’s focus was instead on resource allocation? Every organization, after all, has a finite budget allocated for information security resources. The question should not be “To conduct awareness training or to not conduct awareness training?” or “How do we improve awareness training so that it actually works?” Instead, let’s ask, “How will you invest your organization’s security resources?”

Strategic Defense: How Training Falls Flat
The Open Source Security Testing Methodology Manual (OSSTMM 3) states that security provides "a form of protection where a separation is created between the assets and the threat." Realistically, we also need to detect and respond to active attacks, which leaves us with these four options:

  • Remove or reduce users’ access to sensitive assets, while still enabling users to conduct business (least privilege)

  • Create as many layers of separation between the attacker and the user as possible (defense in depth)

  • Train … and pray (security awareness)

  • Detect and respond to both successful attacks, and attacks in progress (incident response)

Essentially, we need to apply basic risk management techniques to an organization’s acceptable level of risk for defending against social engineering attacks.

Tactical Defense: Where Users Fit In
When it comes to social engineering attacks, users tend to assume the unfortunate role of scapegoat for an organization’s insecurity. Anything that requires users to “think” about security actively and constantly is making it their problem, instead of ours as security professionals.

The idea that users need to be “fixed” by security awareness training makes unfair assumptions about users’ desire and time to learn about security in the first place.

For example, my mother works for a multinational household-name corporation where security awareness training is required for all employees on an ongoing basis. The training initiative works so well that she calls it “pishing.” (Note: Permission to use Momma as an example was granted at a Sunday evening dinner.)

That leaves the question: Well, what should we tell users? Should we inform them that it’s not safe to check email, browse the Internet, open PDFs or Microsoft Office documents, search Google for information, or use Facebook? Should we recommend that they stop using computers in general? We might as well prepare them for a quaint lifestyle.

Most security awareness advice is terrible, just plain bad, and not remotely feasible for your average user. The following advice, for instance, is not reliable or consistently repeatable without technical controls:

[Image: DHS-best-practices.png, a DHS list of security best practices for end users]

Note that it lists no examples of strong passwords and gives no instances of when you may be revealing too much personal information.

Training’s ROI and effectiveness are difficult to measure, especially if clear and concise security learning objectives were never defined and carried out with a well-thought-out plan. A research paper by Harvard sociologists examined how too much forced diversity training can produce the opposite of the desired effect, a finding that applies here as well. Forcing people to click through computer-based training (CBT) does not have a positive ROI, and 73 percent of organizations do not even track that ROI.

On multiple social engineering engagements, we have successfully used the awareness training employees receive against the organization. During a recent social engineering call, we were asked by an employee, “Is this legit?” She proceeded to explain that she had just undergone training on how she should not run “suspicious” EXE files on her computer. We told her that as a follow-up to that training, we needed to ensure her computer was properly patched to prevent infection. We also added that she was saving us the effort of driving to her office by helping us out. Most people are inherently willing to be kind and helpful.

One of the largest compromises in recent memory, the Target Corporation breach, began with a phishing attack on a third-party vendor that held network credentials. Even the most effective security awareness training in the world would not have prevented such an attack.

While security awareness training is required for compliance, it is rarely developed into a mature program or applied in a useful way. Yet in some cases, it is the only defense against social engineering. Rather than attempting to “fix” users, consider technical controls as an alternative and preferred investment to mitigate the risk of social engineering. Instead of increasing the frequency of security awareness training, we should examine whether we are investing in the best defensive techniques.

Strategic Next Steps: Technical Controls
Where should we invest our security resources in addition to security awareness? The answer is reliable and repeatable technical controls that enhance the incident response process. Let’s reduce the human element in our defenses, and instead focus on these 12 social engineering defenses your organization can use:

  1. Designate an alias for reporting incidents and enforce a process and policy for users to report all potential issues (e.g., a dedicated phishing-report address).

  2. Implement SPF, DKIM, and DMARC to prevent email (SMTP) spoofing. Currently, an overwhelming 99.83 percent of organizations can have email spoofed from their CEO to their entire workforce. (A quick check of a domain’s published SPF and DMARC records is sketched after this list.)

  3. Disable HTML emails, which will prevent many of the tricks that hide malicious links in cloned emails. 

  4. Sandbox the browser and email client, and run them with non-execute, read-only, and limited-write privileges. 

  5. Use browser plugins that block the technical portions of typical social engineering attacks, for example Password Alert, ad blockers, NoScript, Flash blockers, and many more.

  6. Track targeted users and infected systems with an organization-wide web proxy.

  7. Set up alerts for identifying new organization-relevant phishing sites. Monitor potential phishing domains with keywords related to the organization (a rough lookalike-domain check is sketched after this list). Then use internal DNS servers to re-route potential phishing domains to a splash page warning of a potentially dangerous site.

  8. Reduce the risk of cloning with user customizations during the authentication process (e.g., the user preselects an image or phrase and verifies it during login) or with two-factor authentication.

  9. Employ application whitelisting and network (TCP/IP) whitelisting on hosts that directly interact with sensitive data (e.g., PoS and bank teller terminals). 

  10. Encrypt sensitive data in transit and at rest. Make the attacker work that much harder to get to sensitive data once they’ve compromised the user. 

  11. Enforce a VPN connection when users are not on the internal network. 

  12. Perform regular simulated social engineering exercises so the incident response team can learn and refine its approach.
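
As a minimal illustration of item 2, the sketch below queries a domain’s DNS TXT records to confirm that SPF and DMARC policies are actually published. It assumes the third-party dnspython package, and example.com is a hypothetical placeholder domain; it does not check DKIM, since that requires knowing the selector names in use.

```python
# Minimal sketch: confirm a domain publishes SPF and DMARC records.
# Assumes the third-party "dnspython" package (pip install dnspython);
# "example.com" is a hypothetical placeholder domain.
import dns.resolver


def get_txt_records(name):
    """Return the TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth(domain):
    """Report whether SPF and DMARC policies are published for a domain."""
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records("_dmarc." + domain)
             if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")
    for record in spf + dmarc:
        print("   ", record)


if __name__ == "__main__":
    check_email_auth("example.com")  # hypothetical domain
```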

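In the same spirit, here is a rough sketch of the monitoring half of item 7. The permutation rules and the example.com base domain are illustrative assumptions; the script flags naive lookalike domains that currently resolve, which could feed the alerting and internal-DNS rerouting steps. Production monitoring would normally rely on purpose-built tooling and new-registration feeds rather than a script like this.

```python
# Rough sketch: flag lookalike domains that resolve, as candidate phishing sites.
# The permutation rules and "example.com" are illustrative assumptions.
import socket


def lookalike_candidates(domain):
    """Yield a few naive permutations of the base domain name."""
    name, tld = domain.rsplit(".", 1)
    for i in range(len(name)):
        yield f"{name[:i]}{name[i + 1:]}.{tld}"        # character dropped
        yield f"{name[:i]}{name[i]}{name[i:]}.{tld}"   # character doubled
    for keyword in ("login", "secure", "support"):     # assumed keywords
        yield f"{name}-{keyword}.{tld}"


def find_live_lookalikes(domain):
    """Return the candidate domains that resolve in DNS today."""
    live = []
    for candidate in sorted(set(lookalike_candidates(domain))):
        try:
            socket.gethostbyname(candidate)  # resolves: worth investigating
            live.append(candidate)
        except socket.gaierror:
            pass                             # does not resolve today
    return live


if __name__ == "__main__":
    for hit in find_live_lookalikes("example.com"):  # hypothetical domain
        print("possible phishing domain:", hit)
```
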
All of these actionable recommendations feed into building an incident response plan for counteracting social engineering attacks. Remember these verbs: Prepare, Detect, Analyze, Contain, Eradicate, Recover, and Learn. With that in mind, ask yourself: Is your incident response team ready for the next social engineering attack? It’s not a matter of if, but when.

For more information on how to lessen the risk of social engineering, reach out to me on Twitter: @sweepthatleg or read Strategies to Mitigate Targeted Cyber Intrusions from Australia’s Department of Defence.

Special thanks to Fran Brown (@Tastic007), Matthew Parcell, Brenda Larcom (@asparagi), Alex DeFreese (@LunarCA), and Candis Orr (@Candysaur) for their feedback.

 

About the Author

Rob Ragan

Principal Security Researcher at Bishop Fox

Rob Ragan is a principal researcher at Bishop Fox, where he focuses on solutions and strategy as well as fostering industry relationships. His areas of expertise include continuous penetration testing and red teaming. He is developing research to improve Bishop Fox's capabilities in scalable continuous assessment. Rob has presented at Black Hat, DEF CON, RSA, and Interop. He is a contributing author to Hacking Exposed Web Applications. His writing has appeared in Dark Reading and Help Net Security, and he has been quoted in publications such as Wired. Rob has 15 years of security experience. He was a senior penetration tester for Bishop Fox and managed the company's Atlanta and San Francisco teams. Rob also worked as a software engineer on security products at Hewlett-Packard's Application Security Center and at SPI Dynamics, where he helped develop the dynamic analysis engine for WebInspect and the static analysis engine for DevInspect.
