A new report explores how attackers identify psychological vulnerabilities to effectively manipulate targets.

Kelly Sheridan, Former Senior Editor, Dark Reading

September 18, 2019

5 Min Read

"People make mistakes" is a common and relatable phrase, but it's also a malicious one in the hands of cybercriminals, more of whom are exploiting simple human errors to launch successful attacks.

The Information Security Forum (ISF) explored the topic in "Human-Centered Security: Addressing Psychological Vulnerabilities," a new report published today. Human vulnerabilities, whether triggered by work pressure or an attacker, can expose a company to cybercrime. As more organizations fear "accidental insiders," addressing these vulnerabilities becomes critical.

In its report, ISF cites a stat from FireEye, which last year reported that just 10% of attacks contained malware such as viruses, ransomware, and spyware. The remaining 90% of incidents were more targeted — for example, impersonation scams, spear-phishing attacks, and CEO fraud.

"What was clear for me is that if we are going to really try to address some of the more emerging threats that are targeting individuals, then we need to understand some of the ways in which users behave and why they behave," says ISF managing director Steve Durbin. He points to a "total shift" in the way employees can be managed to optimize security. After all, he says, most don't turn up for work each day with the intent to cause harm to the the company.

The brain has to process a lot of information before it arrives at a decision, but humans have only limited time in which to make a choice with the data at hand. This is why the mind seeks cognitive shortcuts, or "heuristics," to alleviate the burden of decision-making. Heuristics help people solve problems and learn new things more efficiently, but they can also lead to cognitive biases that contribute to poor judgment or mistakes in decision-making.

So long as companies don't understand the implications of cognitive biases, researchers say, those biases will continue to pose a significant security risk. ISF's report lists 12 biases, all of which can affect security in different ways. One example is "bounded rationality," or the tendency for someone to make a "good enough" decision based on the amount of time they have to make it.

Bounded rationality can prove dangerous during a cyberattack, when tensions run high and an analyst may make a "good enough" decision based on the data and tools at their disposal.

Another bias commonly seen in the workplace is "decision fatigue," or a decrease in mental resources after a series of repetitive choices. At the end of a long day, employees tend to lean toward easier decisions, which may not be the best decisions. "The attacker knows by conducting the attack in late afternoon, it'll provoke poor decision-making," Durbin explains.

Creating the Attacker's Advantage
Each of these vulnerabilities gives attackers an opportunity to strike. While many of their tactics remain the same, they have also grown in sophistication and cost-effectiveness. Criminals can use "social power" to exert influence over others and manipulate them into making mistakes.

There are six types of social power: reward power, which promises a reward if a task is completed; coercive power, which uses punishment to influence behavior; legitimate power, which relies on a position of authority to compel compliance; referent power, which uses the "cult of personality" to manipulate followers of celebrities; informational power, which uses specific information to convince a target the attacker is legitimate; and expert power, which attackers use to impersonate someone with expertise, someone who should be trusted.

Psychologically savvy attackers can leverage these tactics in several different types of attacks. Spear-phishing is the most common and increasingly popular, says Durbin, but other techniques are gaining ground, too. Whaling, for example, is a type of phishing email designed to hit a single, high-value target, usually a senior executive or someone with privileged access. Criminals take a long-term approach, employing different forms of social power over a period of time to build credibility.

Baiting, another tactic, is similar to phishing but promises a reward to entice the target: Free music or movie downloads may be traded for credentials to a certain website. Smishing, or social engineering done via text messages, is likely to become much more popular as people are less aware of cyberattacks arriving via SMS. Vishing, or social engineering via phone, lets attackers use their voice to build a rapport. Some criminals are using AI to become more convincing.

"The phone has tended to be something that has remained out of the more commercial phishing and attack scenarios that we've seen," Durbin says. "We're starting to see it emerging now." And while the voice impersonation tactic requires access to the right technology, he anticipates this is an area that will grow. With the right tech, the attack isn't difficult.

What's important to remember about human-focused cybercrime is that it isn't about employees being less intelligent or more negligent, he continues. "This is human nature. If you catch us on the wrong day or catch us in a certain way, we will behave accordingly," Durbin adds. "You don't actually know how the individual is feeling on a particular day."

What You Can Do
Researchers recommend reviewing your organization's security culture, starting from the most senior roles. This can inform a better understanding of how different departments value security and pinpoint which areas have more human vulnerabilities. From there, security leaders can identify threats, tailor responses, and help employees handle stressful situations.

Security admins should also aim to understand how employees use technology, controls, and data. Consider how these interactions may vary across locations and cultural settings, and brainstorm how controls and technologies can be designed around the person using them.


About the Author(s)

Kelly Sheridan

Former Senior Editor, Dark Reading

Kelly Sheridan was formerly a Staff Editor at Dark Reading, where she focused on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial services. Sheridan earned her BA in English at Villanova University. You can follow her on Twitter @kellymsheridan.

