User Ed: Patching People Vs. Vulns
How infosec can combine and adapt security education and security defenses to the way users actually do their jobs.
I spend a lot of my work hours fretting about the effects of people who don’t specialize in computer and network security making security-related decisions. What hadn’t fully struck me, though, is how much of a deficit exists because the people making business decisions about how to "do" security aren’t necessarily experts in education, usability, or linguistics.
At the recent Black Hat 2016 in Las Vegas, a few sessions in particular brought this message home for me. The first talk was “Exploiting Curiosity and Context,” presented by Zinaida Benenson from the University of Erlangen-Nuremberg. The next was “Security Through Design,” presented by Jelle Niemantsverdriet from Deloitte. The last was “Language Properties of Phone Scammers” by Judith Tabron from Hofstra University.
These three presenters dug into human error from different perspectives: how context can deceive, how poor design choices lead people to make dangerous decisions, and how detectable linguistic patterns can expose telephone scams.
Benenson’s research involved a phishing test on a number of individuals, after which she surveyed them about the event. Most notably, many of the people who clicked were unaware that they’d done so, or had forgotten. Those who were aware that they’d clicked gave a variety of relatively unsurprising reasons: they thought they knew the sender, they were curious, or they thought their software choices would protect them.
Niemantsverdriet discussed how design succeeds or fails depending on how well it addresses the way people actually use a thing. He listed a variety of ways that security companies could make changes to find and fix problems in present products, such as: A/B testing, making user interfaces more intuitive, simplifying wording in products, and using strategic placement of options to influence users’ choices.
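To make the A/B testing suggestion concrete: suppose a vendor ships two versions of a security warning dialog and measures how often users make the unsafe choice under each. A standard two-proportion z-test tells you whether the observed difference is likely real. This is a minimal sketch with hypothetical numbers, not anything presented in the talk:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions between two UI variants."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: 120 of 1,000 users clicked through the old warning (A),
# versus 80 of 1,000 with the redesigned warning (B).
z = two_proportion_ztest(success_a=120, n_a=1000, success_b=80, n_b=1000)
print(abs(z) > 1.96)  # significant at the 5% level
```

With these made-up figures the difference clears the conventional 5% significance threshold, which is the kind of evidence that lets a design team keep the redesign rather than argue from intuition.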
Tabron’s presentation analyzed recordings of scam calls to search for linguistic patterns that would help indicate when a caller is a scammer. She discovered that there are patterns that can be discerned by looking for irregularities -- such as in the caller’s speech pacing, excessive use of tag questions, or redirecting conversation to avoid answering questions and create a sense of urgency.
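The markers Tabron describes lend themselves to simple automated screening. The sketch below is a crude illustration of the idea, not her method: it counts two of the red flags mentioned above (tag questions and urgency language) in a call transcript using hypothetical keyword lists. Real scam detection would require far more rigorous linguistic analysis.

```python
import re

# Hypothetical marker lists, loosely inspired by the patterns described above
TAG_QUESTIONS = re.compile(r"\b(right|okay|isn't it|don't you|you know)\?", re.IGNORECASE)
URGENCY = re.compile(r"\b(immediately|right now|urgent|act now)\b", re.IGNORECASE)

def suspicion_score(transcript: str) -> int:
    """Count crude red-flag markers in a call transcript."""
    return len(TAG_QUESTIONS.findall(transcript)) + len(URGENCY.findall(transcript))

call = ("Your account has been compromised, okay? You need to verify "
        "immediately, right? Act now or it will be locked.")
print(suspicion_score(call))  # 4: two tag questions, two urgency phrases
```

A score like this could never convict a caller on its own, but as a triage signal it mirrors the talk's point: the irregularities are detectable, so tooling can help surface them for a human to judge.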
All of these speakers cautioned that while it is possible for people to be constantly in a “James Bond” mode of hypervigilance, it is not beneficial for the individual or for harmonious group dynamics to be in a constant state of distrust. The idea is not so much to eliminate security failures completely – even security experts make mistakes – but to find ways that make it easier for people to make better security decisions more often.
The first two speakers also highlighted the need to communicate with users so that those of us creating security policies and products can better understand their experiences. That way, education and security defenses can be adapted to how people actually do their jobs.
There is a tendency among security practitioners to get rather jaded and bitter; because our efforts to improve security are frequently unheeded or thwarted, we assume that most users are so clueless that education is a lost cause. We often design products as if they will be used by security experts, or run once and never touched again. And when people fail to make good security decisions because they’re following (or creating) the path of least resistance, we throw up our hands in disgust and declare, “You can’t patch stupid.”
In reality, creating a patch that doesn’t break existing functionality or create new vulnerabilities is a time-consuming and difficult task requiring people who are skillful coders -- and it’s something we often get wrong. Likewise, “patching” users requires skillful and thorough education by security professionals who understand how their users are expected to function and do their jobs.
Lysa Myers began her tenure in malware research labs in the weeks before the Melissa virus outbreak in 1999. She has watched both the malware landscape and the security technologies used to prevent threats grow and change dramatically.