Facebook and Google can identify patterns of attack within their own data, but smaller businesses rarely see enough traffic to successfully identify an attack or warn users.

Elissa M. Redmiles, Researcher, Max Planck Institute for Software Systems

April 13, 2021

5 Min Read

As one of his first actions, President Joe Biden hired a team of cybersecurity experts to help the US defend against cyber threats.

Experts are one approach to defense, but there might be a simpler answer: End-user organizations need to share their data to keep themselves, and their customers, safer.

Data is critical to defending against cybercrime and can be used to identify new forms of malware as they spread across the Internet. Data about people's usual behavior — where they typically log in from, whether they usually sign in on their phone or from a computer — can be used to protect user accounts.
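As a rough illustration of that idea, the toy sketch below flags a login when both the location and the device are unusual for a given user. Everything here (the profile structure, the rule, the example values) is hypothetical; real systems weigh many more signals than this.

```python
# Illustrative sketch only: a toy heuristic for flagging logins that deviate
# from a user's usual behavior. All names, fields, and thresholds are
# hypothetical, not any company's actual detection logic.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    usual_countries: set = field(default_factory=set)  # where the user typically logs in from
    usual_devices: set = field(default_factory=set)     # e.g., {"phone", "desktop"}

def is_login_suspicious(profile: UserProfile, country: str, device: str) -> bool:
    """Flag a login when both the location and the device are unusual for this user."""
    unusual_country = country not in profile.usual_countries
    unusual_device = device not in profile.usual_devices
    return unusual_country and unusual_device

# Example: a user who normally signs in from the US on a phone
profile = UserProfile(usual_countries={"US"}, usual_devices={"phone"})
print(is_login_suspicious(profile, country="RO", device="desktop"))  # True  -> send a warning email
print(is_login_suspicious(profile, country="US", device="desktop"))  # False -> probably just a new laptop
```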

Yet cybercrime data has long been hoarded by security vendors that believe their competitive advantage lies in protecting themselves and their users better than their competitors do.

This data hoarding leaves users at risk.

Companies like Facebook, Google, Microsoft, Disney, and Twitter use their data to identify when a login to your account seems suspicious and alert you so you can protect it. It is common to receive an email from one of these companies warning, "Someone suspicious is trying to log in to your account. Is this you?"

Yet few of us receive comparable emails from the smaller businesses where we buy children's toys, play games, or manage our personal finances. That's because these smaller companies don't have enough data to tell which of their customers' logins are suspicious and which are not.

Large tech companies with billions of users can identify patterns of attack within their own data, but smaller businesses rarely see enough traffic to successfully identify an emerging attack.

Sharing cybersecurity data is one way to solve this problem. That data can take the form of attack reports (for example, the code a company used to defend against an attack) or datasets of typical user behavior patterns that can be used to identify suspicious logins, such as how often legitimate users mistype their passwords.
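To make the pooling idea concrete, here is a minimal sketch, assuming a hypothetical shared dataset of anonymized behavior baselines, of how a small site might compare what it observes against the shared norm. The schema, numbers, and threshold are invented for illustration and do not describe any real data-sharing platform.

```python
# A minimal sketch of pooling behavioral baselines across companies.
# All sites, rates, and thresholds below are made up for illustration.
from statistics import median

# Each participating site contributes an anonymized, aggregate statistic:
# the typical rate at which its legitimate users mistype their passwords.
shared_baselines = [
    {"site": "toy-store.example", "mistype_rate": 0.08},
    {"site": "game-site.example", "mistype_rate": 0.11},
    {"site": "finance.example",   "mistype_rate": 0.06},
]

pooled_rate = median(b["mistype_rate"] for b in shared_baselines)

def looks_like_guessing(observed_mistype_rate: float, factor: float = 5.0) -> bool:
    """A session mistyping passwords far more often than the pooled norm
    looks more like password guessing than a forgetful customer."""
    return observed_mistype_rate > factor * pooled_rate

print(looks_like_guessing(0.50))  # True  -> worth challenging or blocking
print(looks_like_guessing(0.10))  # False -> within the shared norm
```

The point of the sketch is that no single small site has enough of its own traffic to know what a "normal" mistype rate even looks like; the baseline only becomes meaningful once many companies contribute to it.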

Some initiatives have tried to get companies to share cybersecurity data so that companies of every size can protect themselves and their users.

For instance, Facebook (disclosure, a company I've consulted for) runs the ThreatExchange program, which lets companies easily share threat data about malware and distributed denial-of-service attacks against their corporate infrastructure, among other kinds of information.

Even new cybersecurity laws have focused on data sharing aimed at corporate-wide threats. The Cybersecurity Information Sharing Act (CISA) was signed into law in 2015 to protect private companies from liability when sharing information about cybersecurity threats — and defenses against them — with the government.

While a step in the right direction, these initiatives tend to focus on large-scale attacks against a company — hacks like SolarWinds — not attacks against individual users, like when someone tries to log in to a personal account by guessing the password.

Even though there is overlap between the users of big companies' services and the customers of small businesses, the big companies aren't sharing their data. As a result, customers who use smaller businesses are left to fend for themselves.

A few companies are trying to change that. Deduce (disclosure, another company I've consulted for) created a data collective through which companies can share information about users' security-related behavior and logins.

In exchange for sharing data with the platform, companies get access to Deduce's repository of identity data from over 150,000 websites. They can use this shared data to better detect suspicious activity and alert their users, just like Microsoft and Google do using their own data.

In a different approach to helping businesses identify suspicious users, LexisNexis created unique identifiers for its clients' customers. Using these identifiers, clients can share trust scores that indicate whether a particular user is suspicious. If a suspicious user attempts to log in to a website, the site can block that user to keep itself and its legitimate users safer.
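As a rough sketch of how a site might act on shared trust scores keyed by a common identifier: the identifiers, scores, and threshold below are invented for illustration and do not reflect LexisNexis's actual product or API.

```python
# Illustrative sketch only: consuming shared trust scores keyed by an
# identifier that participating companies have in common. Hypothetical data.
shared_trust_scores = {
    # shared identifier -> trust score in [0, 1]
    "id-7f3a": 0.92,   # long history of normal behavior across participants
    "id-2bc9": 0.15,   # flagged as suspicious by other participants
}

BLOCK_THRESHOLD = 0.3

def should_block(identifier: str) -> bool:
    """Block logins from identifiers that other participants rate as untrustworthy.
    Unknown identifiers are allowed through (no shared evidence either way)."""
    score = shared_trust_scores.get(identifier)
    return score is not None and score < BLOCK_THRESHOLD

print(should_block("id-2bc9"))  # True  -> deny the login attempt
print(should_block("id-7f3a"))  # False -> proceed as normal
```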

This is a good start. Without adequate data, security experts lack confidence in their ability to protect Internet users, and even Caleb Barlow, IBM's former vice president of security, says the industry needs to change. More data is needed, and it needs to be shared.

For cybersecurity data sharing initiatives to succeed, we need to shift our mindset. End-user-facing companies, both small and large, already share advertising data with one another because they realize that the insight shared data generates into their customers' preferences is worth more than keeping that data to themselves. We need to view cybersecurity data like advertising data: more valuable shared than hoarded.

Clear empirical evidence on the value of cybersecurity data sharing may be able to convince a majority of companies to share their data. Evidence might include measured increases in the number of threats detected using shared data or increases in brand sentiment from security features built using shared data.

While some of this evidence already exists — for example, my research shows significant increases in brand trust when users receive login notifications — more is needed to inspire a paradigm shift in our collective attitude toward cybersecurity data sharing. Perhaps then 2021 will be a year without a repeat of the level of cybercrime seen in 2020.

About the Author(s)

Elissa M. Redmiles

Researcher, Max Planck Institute for Software Systems

Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She is also the CEO of Human Computing Associates, a research consulting firm, and has served as a consultant and researcher at multiple institutions, including Microsoft Research, Facebook, the World Bank, and the Center for Democracy and Technology. Dr. Redmiles uses computational, economic, and social science methods to understand users' cybersecurity and digital safety decision processes. She has published more than 40 academic articles with over 800 citations. Dr. Redmiles' work has been featured in popular press publications such as the New York Times, Scientific American, Wired, Business Insider, Schneier on Security, and CNET and has been recognized with multiple awards including from USENIX Security and Facebook.
