Dark Reading is part of the Informa Tech Division of Informa PLC


News & Commentary

Rob Ragan

Social Engineering Defenses: Reducing The Human Element

Most security awareness advice is terrible, just plain bad, and not remotely feasible for your average user.

While often viewed as the best defense against social engineering, security awareness training is ineffective and expensive. This topic has been hotly debated by security conference panels (like the one I am participating in at Interop 2015), and in various articles, but the focus has usually been on conducting or improving awareness training.

What if the debate’s focus was instead on resource allocation? Every organization, after all, has a finite budget allocated for information security resources. The question should not be “To conduct awareness training or to not conduct awareness training?” or “How do we improve awareness training so that it actually works?” Instead, let’s ask, “How will you invest your organization’s security resources?”

Strategic Defense: How Training Falls Flat
The Open Source Security Testing Methodology Manual (OSSTMM 3) states that security provides "a form of protection where a separation is created between the assets and the threat." Realistically, we also need to detect and respond to active attacks, which leaves us with these four options:

  • Remove or reduce users’ access to sensitive assets, while still enabling users to conduct business (least privilege)
  • Create as many layers of separation between the attacker and the user as possible (defense in depth)
  • Train … and pray (security awareness)
  • Detect and respond to both successful attacks and attacks in progress (incident response)

Essentially, we need to apply basic risk management techniques to an organization’s acceptable level of risk for defending against social engineering attacks.

Tactical Defense: Where Users Fit In
When it comes to social engineering attacks, users tend to assume the unfortunate role of scapegoat for an organization’s insecurity. Anything that requires users to “think” about security actively and constantly is making it their problem, instead of ours as security professionals.

The idea that users need to be “fixed” by security awareness training makes unfair assumptions about users’ desire and time to learn about security in the first place.

For example, my mother works for a multinational household-name corporation, and their security awareness training is required for all employees on an ongoing basis. Their training initiative works so well that she calls it “pishing.” (Note: Permission to use Momma as an example was granted at a Sunday evening dinner.)

That leaves the question: Well, what should we tell users? Should we inform them that it’s not safe to check email, browse the Internet, open PDFs or Microsoft Office documents, search Google for information, or use Facebook? Should we recommend that they stop using computers altogether? We might as well prepare them for a quaint lifestyle.

Most security awareness advice is terrible, just plain bad, and not remotely feasible for your average user. Typical awareness advice, for instance, is not reliable or consistently repeatable without technical controls: it rarely lists examples of strong passwords, and it doesn’t provide instances of when you may be revealing too much personal information.

Training’s ROI and effectiveness are difficult to measure, especially if clear and concise security learning objectives were never defined and carried out with a well-thought-out plan. A research paper by Harvard sociologists examined how too much forced diversity training can cause the opposite of the desired effect, a finding that applies here as well. Forcing people to click through computer-based training (CBT) does not have a positive ROI, and 73 percent of organizations do not even track the ROI.

Rob continues the discussion about effective user awareness training today in an all-star Interop 2015 panel in Las Vegas entitled, Where Are the Weakest Links in Cyber Security?

On multiple social engineering engagements, we have successfully used the awareness training employees receive against the organization. During a recent social engineering call, we were asked by an employee, “Is this legit?” She proceeded to explain that she had just undergone training on how she should not run “suspicious” EXE files on her computer. We told her that as a follow-up to that training, we needed to ensure her computer was properly patched to prevent infection. We also added that she was saving us the effort of driving to her office by helping us out. Most people are inherently willing to be kind and helpful.

One of the largest compromises in recent memory, the Target Corporation breach, was initially caused by a phishing attack on a third party with network credentials. Even the most effective security awareness training in the world would not have prevented such an attack.

While security awareness training is required for compliance, it is rarely developed into a mature program or applied in a useful way. Yet in some cases, it is the only defense against social engineering. Rather than attempting to “fix” users, consider technical controls as an alternative and preferred investment to mitigate the risk of social engineering. We should not increase the frequency of security awareness training, but examine if we are investing in the best defensive techniques.

Strategic Next Steps: Technical Controls
Where should we invest our security resources in addition to security awareness? The answer is reliable and repeatable technical controls that enhance the incident response process. Let’s reduce the human element in our defenses, and instead focus on these 12 social engineering defenses your organization can use:

  1. Designate an alias for reporting incidents and enforce a process and policy for users to report all potential issues (e.g., [email protected]).
  2. Implement SPF, DKIM, and DMARC to prevent email (SMTP) spoofing. Currently, an overwhelming 99.83 percent of organizations can have email spoofed from their CEO to their entire workforce. 
  3. Disable HTML emails, which will prevent many of the tricks that hide malicious links in cloned emails. 
  4. Sandbox the browser and email client, and run them with non-execute, read-only, and limited-write privileges. 
  5. Use browser plugins to prevent the technical portions of typical social engineering attacks; examples include Password Alert, ad blockers, NoScript, Flash blockers, and many more.
  6. Track targeted users and infected systems with an organization-wide web proxy.
  7. Set up alerts for identifying new organization-relevant phishing sites. Monitor potential phishing domains with keywords related to the organization. Then use internal DNS servers to re-route potential phishing domains to a splash page warning of a potentially dangerous site.
  8. Reduce the risk of cloning with user customizations during an authentication process (e.g., the user preselects an image or phrase and verifies it during login) or two-factor authentication.
  9. Employ application whitelisting and network (TCP/IP) whitelisting on hosts that directly interact with sensitive data (e.g., PoS and bank teller terminals). 
  10. Encrypt sensitive data in transit and at rest. Make the attacker work that much harder to get to sensitive data once they’ve compromised the user. 
  11. Enforce a VPN connection when users are not on the internal network. 
  12. Perform regular simulated social engineering exercises so the incident response team can learn and refine its approach.
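To make item 2 concrete, here is a minimal sketch of how a domain's published TXT records determine whether it can be trivially spoofed. The record values and the `audit_spoofing_posture` helper are hypothetical illustrations, not a full SPF/DMARC validator; in practice you would fetch the records over DNS (e.g., with the dnspython library) rather than pass them in as strings.

```python
# Rough audit of a domain's SPF/DMARC posture (item 2).
# Simplified sketch: only checks the most common spoofing-relevant conditions.
from typing import Optional

def audit_spoofing_posture(spf: Optional[str], dmarc: Optional[str]) -> list:
    """Return a list of findings explaining why this domain can be spoofed."""
    findings = []
    if not spf or not spf.startswith("v=spf1"):
        findings.append("No SPF record: any host may send mail as this domain.")
    elif "+all" in spf or spf.rstrip().endswith("?all"):
        findings.append("Permissive SPF 'all' mechanism: spoofing still possible.")
    if not dmarc or not dmarc.startswith("v=DMARC1"):
        findings.append("No DMARC record: receivers get no policy for SPF/DKIM failures.")
    elif "p=none" in dmarc.replace(" ", ""):
        findings.append("DMARC p=none: failures are reported but mail is still delivered.")
    return findings

# A domain with a strict SPF record but a monitoring-only DMARC policy:
issues = audit_spoofing_posture(
    "v=spf1 include:_spf.example.com -all",
    "v=DMARC1; p=none; rua=mailto:[email protected]",
)
```

A `p=none` policy is the common halfway state: it produces aggregate reports but still lets spoofed mail through, which is why moving to `p=quarantine` or `p=reject` is the goal.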

All these actionable recommendations integrate with building an incident response plan for counteracting social engineering attacks. Remember these verbs: Prepare, Detect, Analyze, Contain, Eradicate, Recover, and Learn. With that in mind, ask yourself: Is your incident response team ready for the next social engineering attack? It’s not a matter of if, but when.
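On the Detect side, the phishing-domain monitoring recommended in item 7 can be approximated with simple lookalike generation. The sketch below is an illustrative heuristic only (the `acmecorp` brand and observed domains are made up); real monitoring would also watch certificate-transparency logs and newly-registered-domain feeds.

```python
# Flag observed domains that resemble an organization's brand (item 7).
# Heuristic sketch: one-character homoglyph swaps plus keyword matching.
HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def lookalike_variants(brand):
    """Generate single-character homoglyph substitutions of a brand name."""
    variants = set()
    for i, ch in enumerate(brand):
        if ch in HOMOGLYPHS:
            variants.add(brand[:i] + HOMOGLYPHS[ch] + brand[i + 1:])
    return variants

def suspicious_domains(brand, observed):
    """Return observed domains that embed the brand or a lookalike of it."""
    needles = {brand} | lookalike_variants(brand)
    legit = brand + ".com"
    return [d for d in observed if d != legit and any(n in d for n in needles)]

hits = suspicious_domains("acmecorp", [
    "acmecorp.com",        # the real domain, ignored
    "acmecorp-login.net",  # brand keyword in an unfamiliar domain
    "acmec0rp.com",        # homoglyph: o -> 0
    "example.org",         # unrelated
])
# hits -> ["acmecorp-login.net", "acmec0rp.com"]
```

Re-routing any hit to an internal warning splash page, per item 7, then turns this detection into a protective control.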

For more information on how to lessen the risk of social engineering, reach out to me on Twitter: @sweepthatleg or read Strategies to Mitigate Targeted Cyber Intrusions from Australia’s Department of Defence.

Special thanks to Fran Brown (@Tastic007), Matthew Parcell, Brenda Larcom (@asparagi), Alex DeFreese (@LunarCA), and Candis Orr (@Candysaur) for their feedback.


Rob Ragan is a principal researcher at Bishop Fox, where he focuses on solutions and strategy as well as fostering industry relationships. His areas of expertise include continuous penetration testing and red teaming. He is developing research to improve Bishop Fox's ...
Comments
Joe Stanganelli
User Rank: Ninja
5/19/2015 | 6:13:15 PM
Re: It can never be the user's fault

> "We can never put the blame on the user for a breach. It would be like blaming a driver for faulty brakes on his car."

Well, that's taking it a bit far, I think, depending upon the circumstances.

It's like an apartment or condo building where there are locks on the doors and you have to have someone you know buzz you in if you don't have a key or a fob.  Good security, right?  Unless all the residents just hold the door open and let people in.  When people's apartments start getting robbed by burglars who know all they have to do is wait outside for someone to come by and let them in, whose fault is it?
Joe Stanganelli
User Rank: Ninja
5/19/2015 | 6:10:40 PM
Re: Training +
We absolutely need better security technology -- such as authentication measures like you suggest.

But authentication can sometimes be falsified, and no amount of technology is foolproof.  Your doors and locks and alarms won't work if one of your employees just holds the door open for an attacker.  Security awareness and training for all employees is imperative.
User Rank: Apprentice
5/8/2015 | 7:03:46 PM
Human error
This is a really good article. I agree we shouldn't blame the user, but instead educate as much as possible. I think we should add heightened physical security to the strategic next steps. We should make sure users lock up their computers when not in use by using automatic tools like Bluetooth proximity devices. This will help eliminate human error.
User Rank: Moderator
5/5/2015 | 11:37:05 AM
Re: It can never be the user's fault
This is so right! You can't blame the users, as much as you might want to. The best thing to do is to constantly be thinking ahead and constantly be assessing the benefit and success of your security efforts. There's a great blog post about how to do that on the FireEye website. 


Karen Bannan, commenting on behalf of IDG and FireEye.
User Rank: Apprentice
5/4/2015 | 7:00:46 AM
It can never be the user's fault
We can never put the blame on the user for a breach. It would be like blaming a driver for faulty brakes on his car.

Training is essential; we need to develop a common understanding about insecurities in surfing and email for instance. It's like teaching your kids to lock the door when leaving home.

But the rest is our job! It's our job, as security professionals, to make security simple enough to be used. Users are under pressure to fulfill their job description and perform the tasks they have been employed to do. If security is in their way, they will find workarounds so that they can fulfill their tasks and still be home for dinner. It must be easier to do right than to do wrong, and the only ones who can make it easy are those of us who know security and tech. If our tools are not good enough, change tools and put some pressure on the suppliers (spoiler: I am a supplier).
User Rank: Ninja
5/3/2015 | 7:44:33 AM
Re: Training +
the right answer of course is that eMails need to be authenticated -- so that you can be sure you know who you are talking to

but that doesn't apply just to eMails,    Forms 1040, for example, are in desperate need of authentication

Secure Computing in a Compromised Environment

it is no secret that all of our usual identification data, -- name, address, social security number, date of birth, mother's maiden name, eMail address ... have been compromised and are commonly available in public .   see also Brian Krebs: SUPERGET article.

we require then a method of identification that can be used to prove the authenticity of a document in public but which at the same time allows us to retain control over the ability to provide such authentications

symmetric keys -- such as we have been using -- name, dob, ssan, ... don't work

fortunately there is a solution -- and it was worked out many years ago by gentlemen such as Whitfield Diffie and Martin Hellman.  it is known as public-key encryption, and the best-known implementation is PGP ("Pretty Good Privacy")

PGP is available as a product -- you can buy it from Symantec as PGP Desktop, or you can use the GNU Privacy Guard, also known as GnuPG

popular mythology claims PGP is too difficult to use but this is simply not the case when PGP is incorporated as packaged technology

 for a good example of this I recommend learning to use the ENIGMAIL plug-in that is available on the Thunderbird eMail client.   the built in dialog will allow you to generate and sign keys, update the keyserver, sign and authenticate eMails,-- the whole 9 yards

questions need to be asked as to why we do not make better use of this technology.  some answers might not be too palatable.
Joe Stanganelli
User Rank: Ninja
4/30/2015 | 11:24:38 PM
Re: Eliminating The Human Element?
To speak of SECTF, it's worth noting that at DEF CON 21, the worst-performing company target at SECTF was a tech company -- specifically, Apple!
Joe Stanganelli
User Rank: Ninja
4/30/2015 | 11:19:37 PM
Of course, when talking training, it's important to note what is meant by that.

Security consultant Chris Hadnagy, for example, advocates that companies send "fake" phishing emails to their employees -- and those who click the link are taken to a training site that forces them to immediately complete a very brief training session on IDing phishing.

This alone, according to Hadnagy, can reduce successful phishing attempts against an organization by more than 75 percent.
User Rank: Ninja
4/30/2015 | 5:02:21 PM
Eliminating The Human Element?
Ah, my favorite topic under the InfoSec banner.  One thing I find the most interesting is the difference in approach to the same topic between conferences like Interop and those like DEF CON.  On the one hand, the more professional approach is to figure out how to change the existing approach to training staff, how to "mix it up" and make it more entertaining and/or easier to understand, and then on the other hand... Well, if you were at DEF CON 18 you know the results of the Social Engineering Capture the Flag sessions and why large organizations that continue to follow "old-school" standards and methodologies still don't get it.  Lots of great ideas here in this DarkReading article, and I think that figuring out HOW to improve upon defense against social engineering will benefit from participating in, watching over, and reading the results of CTF sessions like the below and reviewing real-world data.  Personally, I'm all for eliminating the human element altogether :-)

Of note, the primary findings of the SECTF at DEF CON 18 included the below, which actually is not very surprising, and in some ways seems less intuitive than what some organizations do or propose to do:

  • For awareness training to be truly effective it requires complete coverage of all employees. In many instances contestants would contact call centers, which often do not have as complete awareness training programs. This translated into information leakage that could have been avoided, as well as a significant increase of risk to the target organizations. The ineffectiveness of awareness training was apparent in the lack of employee resistance to answering questions.
  • When employees do not have clear guidelines set in place in response to a given situation, they will default to actions that they perceive as being helpful. This natural response was what was utilized in every instance where contestants obtained high scores.
  • Companies need to provide direction to employees on social media issues and expectations. Social media remains a low-effort vector for information gathering that very few organizations are addressing.
  • Information perceived as having no value will not be protected. This is the underlying fact that most social engineering efforts rely upon, as value to an attacker is different than value to an organization. Companies need to consider this when evaluating what to protect, considering more than just the importance of value to the delivery of service, product, or intellectual property.
  • Organizations need to understand that regardless of the protections in place, information such as operating system, browser version, and so on will be compromised. Security by obscurity is still not an option, as it oftentimes leads to a breach. Security through education must be the foundation of every solution -- education on the tactics, the methods, and the thinking of malicious actors. This education will inform all other actions, providing an increase in effectiveness.

Here are the flags they sought to (and very successfully did) capture, information fetched by phone:
  • In House IT Support? 
  • Trash Handling? 
  • How are Documents Disposed of? 
  • Who Does Offsite Back-Up? 
  • Employee Schedules? 
  • PBX System? 
  • Name of PBX? 
  • Employee Termination Process 
  • New Hire Process? 
  • Open a Fake URL 
  • What OS Used? 
  • What Service Pack? 
  • Mail Client? 
  • Version of Mail client? 
  • Anti-Virus Used? 
  • Computer Make and Model
  • Wireless On-Site?
  • ESSID Name?
  • Days of Months Paid?
  • Duration of Employment?
  • Shipping Supplier?
  • Time Deliveries Are Made?
  • Browser?
  • Version of Browser?
  • PDF Reader?
  • Version of PDF Reader?
  • Websites Blocked?
  • VPN In Use?
  • VPN Software?
  • Badges for Bldg Access?
  • Is there a Cafeteria? 
  • Who Supplies Food?

Check out Social Engineer (socialengineer [dot] org) for the full PDF report.  
User Rank: Strategist
4/30/2015 | 3:56:34 PM
Re: Replacing poor training with impractical technical controls is not a real solution.
Agree with PaxDominicus01 -- when going through Rob's suggestions, I thought along these same lines: how would an average or below-average org ever implement these difficult technical controls?

However, I am not a proponent of security awareness training. The Harvard study is fascinating, and from my experience, both relevant and true. Security awareness training is one of the least effective approaches.

The best approach is to leverage a fourth control type: deceptive controls. Rob mentions protective, detective, and responsive controls but fails to identify this fourth category. We must model users as insider threats. They are unintentional insiders.

There are four types of insiders: the primary, "malicious" (deliberate intent), and the three unintentional, "disdain of security practices", "careless", and "ignorant". The Harvard study would put unintentional insiders in the disdain category, while Rob identifies most users as ignorant. A security professional who gets phished (N.B., this has happened to me -- I am fine to admit as much) would fall into the "careless" category.

The controls identified by Rob are largely based on human checks and balances against automated controls and processes. I would especially note that they do not place the responders close to the events and incidents. Integration is a key element of a successful security control, which is again why I recommend deception systems. Give the threats what they want and watch what they do -- pull the plug when they go too far. Even better -- lead them down a path using breadcrumbs so that you know what they will do because you can predict their behaviors. Take advantage of the whole kill chain if you can supply a sufficiently-advanced deception system that supports each stage with fake or misdirected components. Think DataSoft Nova.

There is also a lack of identification of the primary source of vulnerability during root-cause analysis and lessons learned. The users aren't using OpenLDAP passwords -- it's Active Directory that is guilty of compromise every time. When phished -- threats don't gain access to hardened Linux laptops: they are Windows 7 systems with AppLocker (or similar) at best, and even then -- Powershell (or modifications such as NetSPI ClickOne) sneaks right by all of these app-whitelisting, antivirus, and HIPS controls, even if and when they exist. PtT and PtH attacks are bringing down the house. Forcing users to run unprivileged barely helps. It can't help. It won't.

I am talking about issues with Windows laptops and Microsoft Windows Server forests. Users need a new way of accessing their existing resources. I suggest U2Fs in smartcard mode for these Windows domains -- completely replace and turn off NTLM and Kerberos. Apple should force a minimum 5-character PIN along with a forced Lockdown cert on all iDevice installs -- and Android should follow this as a standard authentication (i.e., Secure Enclave) model. Basic entry points such as these must be protected. Old ways must be abandoned. Why don't we have dynamic identifiers to replace our SSNs and payment card numbers yet? Contactless payments are not innovating as far as risk reduction; they are increasing the risk. Nobody knows that their EMV card or compatible device, tag, or hand implant can be cloned or proxied. They just think "it's a secure method of payment because it came from the bank".