Researchers at Israel's Ben-Gurion University (BGU) have developed a framework for continuously evaluating the resilience of end users to phishing and similar social engineering attacks.
Unlike other security awareness evaluation techniques that rely heavily on questionnaires and the self-reported behavior of users, the new approach is based on actual data gathered from end user smartphones, PCs, network traffic to and from devices, and attack simulation.
In a presentation at the Black Hat USA event this week, Ron Bitton, principal research manager at BGU's cybersecurity research center, said the framework addresses some of the shortcomings of current approaches to evaluating user security awareness.
Often these approaches don't distinguish between different attack types or platforms, and are largely static in nature, he said. "Existing solutions have several limitations," Bitton noted. "For instance, questionnaires and surveys are based on the self-reported behavior of users. These methods tend to be very subjective and biased," he said.
Similarly, simulated attacks that are designed to measure how a user might respond to a social engineering scam tend to be affected by environmental factors. And forced participation in things like awareness-training workshops can often result in low user engagement.
In developing the new framework, the researchers first tried to understand all of the criteria required for a user to be truly security-aware, and then evaluated the importance of different criteria in mitigating different types of attacks.
To do the former, the researchers explored numerous social engineering case studies and identified the human vulnerabilities and technologies that adversaries tend to exploit in social engineering attacks, and the countermeasures that can be used to protect against such exploitation.
The exercise resulted in the researchers coming up with a list of 30 different criteria for a security-aware user. Examples of security-aware behavior include downloading apps only from trusted sources, not installing apps with dangerous permissions, using only HTTPS sites, avoiding sites flagged by the browser as dangerous, updating passwords regularly, and not connecting unknown media, such as USB drives, to their computers.
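To make the idea concrete, a handful of these criteria can be pictured as boolean checks observed for a single user. This is a minimal sketch only: the criterion names and the simple fraction-based score are invented for illustration and are not the BGU researchers' actual scoring model.

```python
# Hypothetical subset of the ~30 awareness criteria, as boolean checks.
AWARENESS_CRITERIA = [
    "downloads_apps_from_trusted_sources_only",
    "avoids_apps_with_dangerous_permissions",
    "uses_https_sites_only",
    "heeds_browser_danger_warnings",
    "updates_passwords_regularly",
    "avoids_unknown_usb_media",
]

def naive_awareness_score(observed: dict) -> float:
    """Fraction of criteria a user satisfies (unweighted toy score)."""
    met = sum(1 for c in AWARENESS_CRITERIA if observed.get(c, False))
    return met / len(AWARENESS_CRITERIA)

user = {c: True for c in AWARENESS_CRITERIA}
user["updates_passwords_regularly"] = False
print(naive_awareness_score(user))  # 5 of 6 criteria met
```

In practice a flat fraction like this would miss the point of the research, which is that different criteria matter to different degrees for different attacks; that weighting step comes next.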
Ranking and Evaluating
After identifying the criteria, the researchers developed a procedure for ranking the effectiveness of each user behavior in mitigating four different types of attacks: password attacks, application-based attacks, phishing, and man-in-the-middle attacks. They discovered that the actions required from a user to mitigate an attack are different for different classes of attacks.
In phishing attacks, for instance, the most effective behaviors were avoiding sending sensitive information over HTTP and not entering private information on unvalidated websites. For MITM attacks, on the other hand, deleting unknown certificates from the device and refusing to approve unknown digital certificates made the biggest difference.
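The ranking step can be pictured as a weight table mapping each attack class to the behaviors that matter most for it. The weights and criterion names below are invented for illustration; the article does not disclose the researchers' actual rankings.

```python
# Hypothetical per-attack weight table: how much each behavior
# contributes to mitigating each attack class. Values are illustrative.
WEIGHTS = {
    "phishing": {
        "avoids_sending_sensitive_data_over_http": 0.6,
        "avoids_entering_data_on_unvalidated_sites": 0.4,
    },
    "mitm": {
        "deletes_unknown_certificates": 0.5,
        "rejects_unknown_certificates": 0.5,
    },
}

def attack_specific_score(attack: str, observed: dict) -> float:
    """Weighted sum of the criteria relevant to one attack class."""
    return sum(w for crit, w in WEIGHTS[attack].items()
               if observed.get(crit, False))

observed = {
    "avoids_sending_sensitive_data_over_http": True,
    "deletes_unknown_certificates": True,
}
print(attack_specific_score("phishing", observed))  # 0.6
print(attack_specific_score("mitm", observed))      # 0.5
```

The same observed behavior thus yields a different score per attack class, which is the core finding of the ranking exercise.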
After determining the criteria to measure and how to evaluate them, the BGU researchers developed two sensors for profiling user behavior. One was an endpoint agent that collected data from the device, including installed apps, app permissions, app source and ranking, mail activity, security settings, and social network activity.
The researchers also developed a less intrusive, network-based monitor that inspected traffic to and from the end-user device using various methods, including deep-packet inspection and analysis of application-level protocols. Together, the sensors allowed the researchers to build detailed profiles of individual users in terms of their security awareness.
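Conceptually, the two sensor feeds are fused into a single per-user profile. The sketch below assumes made-up field names for what the endpoint agent and network monitor might report; the real telemetry is far richer.

```python
# Hypothetical fusion of endpoint and network observations into one
# user profile. Field names are invented for illustration.
def build_profile(endpoint: dict, network: dict) -> dict:
    profile = {
        "untrusted_app_sources": endpoint.get("untrusted_app_sources", 0),
        "dangerous_permissions": endpoint.get("dangerous_permissions", 0),
        "http_logins_seen": network.get("http_logins_seen", 0),
    }
    # Count how many risk indicators this user has triggered.
    profile["risk_flags"] = sum(1 for v in profile.values() if v > 0)
    return profile

p = build_profile({"untrusted_app_sources": 1}, {"http_logins_seen": 3})
print(p["risk_flags"])  # 2
```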
They also developed a simulated-attack framework that implemented 20 different attack types, including permissions abuse and attacks involving malicious Word macros, malicious PDF documents, phishing emails, and SMS messages.
The researchers tested the framework on 162 users over a seven- to eight-week period. Each individual was first asked to complete a self-evaluation questionnaire as a baseline. Based on the score derived from the framework, each user's security awareness was categorized as low, medium, or high. The researchers then calculated each user's success rate in mitigating the various attacks directed at them via the attack-simulation framework.
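The evaluation step described above amounts to two simple operations: bucketing each user's framework score into low/medium/high, and computing the fraction of simulated attacks that user mitigated. The thresholds below are invented for illustration; the article does not give the actual cutoffs.

```python
# Hypothetical score buckets; the real thresholds are not published.
def categorize(score: float) -> str:
    if score < 0.4:
        return "low"
    if score < 0.7:
        return "medium"
    return "high"

def success_rate(outcomes: list) -> float:
    """Fraction of simulated attacks the user mitigated."""
    return sum(outcomes) / len(outcomes)

print(categorize(0.82))                         # high
print(success_rate([True, True, False, True]))  # 0.75
```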
Users with high scores were substantially more likely to mitigate attacks than users with lower scores, Bitton said. On the other hand, user classifications based on the security questionnaire had little correlation with how likely a user was to mitigate attacks.
"The self-reported behavior of subjects may differ significantly from their actual behavior," Bitton said. "In contrast, security-awareness scores derived from objective measures such as data collected from endpoint and network-based solutions are highly correlated to user success in mitigating social engineering attacks," he said.