Who is the first person who comes to mind when you think about someone who is security conscious or concerned about privacy? For many non-experts, early adopters of security and privacy features are often considered eccentric or nutty. In a study on encrypted email usage in the workplace, for example, Shirley Gaw and colleagues found that if someone used encrypted email but did not have a good reason for doing so, they were perceived by their colleagues as paranoid. My colleagues and I have found similar effects in our research.
When I worked at Facebook, for example, we ran an analysis of how one's friends' use of security features (like two-factor authentication) affected one's own use of those features. We found a surprising effect: For the vast majority of people, social influence appeared to have a negative effect on adoption of two-factor authentication. Indeed, having just a few friends who used two-factor authentication made people less likely to use it themselves than otherwise-similar people who had no friends who used it.
Typically, social proof has a positive effect on technology adoption — if more of my friends use X, I should be more likely to use X. For security, however, social influence does not have a positive effect until many of one's friends start using those tools (for example, as demonstrated by the recent WhatsApp exodus to Signal). Why?
There's a concept in social psychology known as an illusory correlation. Illusory correlations are like stereotypes for relationships between two variables — if enough of a certain type of person does activity Y or uses thing Z, then other people start to see activity Y or thing Z as being only for that certain type of person. So, if only paranoid people use security, and I am not nutty or paranoid — well, security must not be for me. In short, early adopters of security and privacy tools may — subtly or overtly — be perceived by others as paranoid; in turn, this stigma can put non-experts off security.
One of the key drivers of this illusory correlation between security and paranoia is the assumption that only shady geeks care about security. There are a number of reasons for this assumption. First, early adopters probably do care more about security than others. But this image has also been amplified by popular depictions of hackers and socially inept security geeks. Many people also see openness and transparency as counterarguments to privacy. Misguided as "I've got nothing to hide" may be, it's seen by many as a badge of honor. At best, this illusory correlation between security and paranoia is a silly reflection of the fallacies of relying solely on intuition; at worst, it is pernicious and harms us all.
Indeed, cybercrime is rampant. In 2020, McAfee estimated the global economic damage caused by cybercrime at $945 billion. A survey from the Pew Research Center suggests that as many as two-thirds of American Internet users have experienced data theft. Moreover, much of this cybercrime would be hamstrung if more people adopted expert-recommended security and privacy behaviors (for example, automatically patching software and using two-factor authentication as well as a password manager).
Yet, despite decades of improvements to the usability of expert-recommended tools and behaviors, their adoption remains rare — in 2018, fewer than 10% of Google accounts had two-factor authentication enabled and, in a survey conducted by Google in 2019, only 15% of 3,419 respondents used password managers. While stigma is not the sole explanation for why adoption of expert-recommended security and privacy behaviors remains low, it's an important and neglected factor — social influence and word of mouth are key to the widespread adoption of any technology, and these effects are unlikely to be positive for behaviors associated with negative attributes like paranoia.
There is hope, though. My research suggests that the illusory correlation between security conscientiousness and paranoia does not hold for systems that are designed to be pro-social: that is, those that can be easily observed when used, that involve others in the process of providing security, or that allow us to act for the benefit of others. For example, we found that the negative effect of social influence did not hold for Facebook's Trusted Contacts feature, in which people specify friends to vouch for them as a form of fallback authentication. In a follow-up experiment, we also found that showing people how many of their friends used extra security features significantly increased adoption of those features.
By making security more social, we start to associate it with more desirable social properties — e.g., altruism, leadership, and responsibility. And if people stop viewing the security conscious as shady geeks and start viewing them as examples to follow, perhaps we can all be more secure together.