Cognitive bias can compromise any profession. But when cognitive bias goes unrecognized in cyber security, far-reaching and serious consequences follow.

Levi Gundert, Vice President of Intelligence and Risk, Recorded Future

May 15, 2014


I distinctly remember the instructor’s mix of exasperation and incredulity. “He was acting suspicious?!” I was sitting in a Georgia classroom at the Federal Law Enforcement Training Center (FLETC), listening to a critique of my latest probable cause affidavit. I instantly realized my mistake. In the training scenario, the suspect was wearing warm clothing on a hot day, pacing, and avoiding eye contact during questioning. Instead of limiting the affidavit to my direct observations and detailing the suspect’s behavior, I inserted what I thought was a pithy conclusion.

Cognitive bias affects everyone, and behavioral economists are continuously documenting its societal effects. Every profession is likewise influenced. The effects in criminal investigations are well documented, and cyber security is a similar domain, one where the tendency to misinterpret data often leads to fallacious conclusions.

A casual look at the list of cognitive biases should give you pause: anchoring, belief bias, confirmation bias, distinction bias, focusing effect, irrational escalation, and the list continues. We are all predisposed to these biases, and we tend to be overcritical of their effects in others while minimizing their impact on our own analytic faculties. For example, a few months ago I found myself examining a malware campaign that used multiple domains, all of which were registered with a Seychelles (an archipelago off Africa’s eastern coast) address.

As I contemplated the Seychelles’ population size, I realized that I had recently observed additional malicious activity tied to Seychelles WHOIS registrant data. Similarly, I decided that those registering domains with Panamanian addresses also fit my evil-perpetrating model, based on prior knowledge and experience.

Thus, with the help of my colleagues Jaeson Schultz and Andrew Tsonchev, I collected all new domains registered with Seychelles or Panama addresses in the prior seven months and identified the incidence of customer Web blocks (Cloud Web Security). While I was confident we would find a block rate over 50%, the results did not support my assertion. Out of 19,557 Seychelles registrant domains, we blocked 337, which means fewer than 2% (roughly 1.7%) were actually participating in malicious Web activity. The results were similar for Panama registrant domains. To be sure, we queried the same list of domains three months later to account for potential latency between domain registration and malicious use, and the results were consistent with our first query.
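
For readers who want to check the arithmetic, the calculation is a simple ratio. Here is a minimal sketch in Python using the counts cited above (the underlying domain lists and Cloud Web Security telemetry are not public, so the variable names here are stand-ins):

    # Block-rate arithmetic from the Seychelles data cited above.
    observed = 19_557  # new domains with Seychelles registrant addresses
    blocked = 337      # domains that triggered customer Web blocks

    block_rate = blocked / observed
    print(f"Block rate: {block_rate:.1%}")  # prints "Block rate: 1.7%"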

Now, data sources certainly matter. In this case the original domain lists may have been incomplete, and the domains may have been used for malicious campaigns in additional channels such as email. Regardless, I expected a high incidence of Web maliciousness based on a cognitive bias, specifically a confirmation bias.

In the realms of threat intelligence, incident response, and general network security monitoring, our profession suffers from cognitive biases just like any other profession. Yet the consequences of unrecognized cognitive biases in cyber security (and the resulting incorrect conclusions) may be more powerful and further reaching at this point in history.

How do companies compete with governments that steal intellectual property for economic advantage? It’s a tough question, and before strategies are formed, corporate officers and board members first need to answer with confidence: “How do we know who is behind this attack?” Sovereign nations have been asking the same question for millennia, but the Internet now provides constant connectivity and a higher degree of anonymity to talented and clever threat actors. Thus, a centerpiece of foreign policy hinges on accurate conclusions driven by unbiased data analysis.

This is particularly true regarding attribution. Threat actors and cyber defenders operate in the context of a global Internet composed of billions, and soon trillions, of connected nodes. Identifying the person or party responsible for a specific cyber security event at a specific point in time is incredibly challenging, even for the most talented teams blessed with significant resources. The challenge holds for every organization seeking a deeper level of attribution, whether geographic location or the individual or group responsible for a specific attack.

Last year Mandiant published the APT1 report -- a public watershed for cyber attack attribution -- which articulated the specific data and timeline that led to many of the report’s conclusions. Given the theme of the report, the supporting data was crucial to its credibility, and that data was not amassed overnight. If history is any indicator, successful attribution will continue to require prolonged time investments, sometimes even years.

Last year Sergio Caltagirone, Andrew Pendergast, and Christopher Betz released a paper entitled The Diamond Model of Intrusion Analysis. This remarkably succinct framework provides a consistent filter for malicious cyber event metadata. It is this type of framework that analysts must continually refer to while collecting and interpreting cyber attack data, in order to avoid unchecked cognitive bias.
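
To make that concrete: the Diamond Model describes every intrusion event in terms of four core features (adversary, capability, infrastructure, and victim). Below is a minimal sketch of such an event record in Python; the four field names follow the paper, while the sample values are purely hypothetical:

    from dataclasses import dataclass

    # A minimal sketch of a Diamond Model event record. The four core
    # features come from the paper; the sample values are hypothetical.
    @dataclass
    class DiamondEvent:
        adversary: str       # who is believed responsible (often initially unknown)
        capability: str      # the malware, exploit, or tool observed
        infrastructure: str  # the domains, IP addresses, or hosts used
        victim: str          # the targeted organization or asset

    event = DiamondEvent(
        adversary="unknown",
        capability="credential-stealing trojan",
        infrastructure="domains registered to a Seychelles address",
        victim="hypothetical-corp.example",
    )

Recording each event against the same four features, even when a field must be left "unknown," is precisely what keeps an analyst from quietly substituting a biased assumption for missing data.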

Decision makers desperately need finished intelligence and logical assertions to plot the future course of military action, corporate policy, and foreign policy. Operationally this equates to domains, IP addresses, infrastructure owners, malicious code, etc., and the facts should drive a report’s conclusions. As analysts, we should not insert conjecture masquerading as fact into reports, because doing so damages our industry and impedes our ability to work toward a more secure Internet. If we fail to articulate the facts around a malicious cyber event properly, avoidable conflicts may ensue, and ultimately our entire industry loses trust and credibility.

Cognitive bias is rarely intentional, but hopefully we can continue to look for and confront our own analytical mistakes -- assisted by a reliable framework -- in order to produce a better security product (in any form). Industry and government decision makers and the general public will benefit, which should lead to improved education and effort around the cyberthreat landscape we confront daily.

About the Author

Levi Gundert

Vice President of Intelligence and Risk, Recorded Future

Levi Gundert is the vice president of intelligence and risk at Recorded Future, where he leads the continuous effort to measurably decrease operational risk for customers.

Levi has spent the past 20 years in both government and the private sector, defending networks, arresting international criminals, and uncovering nation-state adversaries. He's held senior information security leadership positions across technology and financial services start-ups and enterprises. He is a trusted risk advisor to Fortune 100 companies, and a prolific speaker, blogger, and columnist.

Previous industry roles include vice president of Cyber Threat Intelligence at Fidelity Investments, technical leader at Cisco Talos, and U.S. Secret Service Agent within the Los Angeles Electronic Crimes Task Force (ECTF).
