Commentary | Javvad Malik | 1/16/2018 10:30 AM

Mental Models & Security: Thinking Like a Hacker

These seven approaches can change the way you tackle problems.

In the world of information security, people are often told to "think like a hacker." The problem is, if you think of a hacker within a very narrow definition (e.g., someone who only breaks Web applications), it leads to a counterproductive way of thinking and conducting business.

A little knowledge is a dangerous thing, not least because isolated facts don't stand on their own very well. As legendary investor Charlie Munger once said:

Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form.

You've got to have models in your head. And you've got to array your experience both vicarious and direct on this latticework of models. ...

[You've] got to have multiple models because if you just have one or two that you're using, the nature of human psychology is such that you'll torture reality so that it fits your models, or at least you'll think it does. …

This is worth bearing in mind for security pros.

When we look at the thought process of a (competent) security professional, it encompasses many mental models. These don't relate exclusively to hacking or wider technology, but instead cover principles that have broader applications.

Let's look at some general mental models and their security applications.

1. Inversion
Difficult problems are best solved when they are worked backward. Researchers are great at inverting systems and technologies to illustrate what the system architect should have avoided. In other words, it's not enough to think about all the things that can be done to secure a system; you should think about all the things that would leave a system insecure.

From a defensive point of view, it means not just thinking about how to achieve success, but also how failure would be managed.

2. Confirmation Bias
What people wish, they also believe. We see confirmation bias deeply rooted in applications, systems, and even entire businesses. It's why two auditors can assess the same system and arrive at vastly different conclusions regarding its adequacy.

Confirmation bias is extremely dangerous from a defender's perspective because it clouds judgment, and hackers take advantage of it all the time. People often fall for phishing emails precisely because they believe they are too clever to fall for one. Reality sets in only after it's too late.

3. Circle of Competence
Most people have a thing that they're really good at. But if you test them in something outside of this area, you may find that they're not well-rounded. Worse, they may even be ignorant of their own ignorance.

When we examine security as a discipline, we realize it's not a monolithic thing. It consists of countless areas of competence. A social engineer, for example, has a specific skill set that differs from a researcher with expertise in remotely gaining access to SCADA systems.

The number of tools in a tool belt isn't important. What's far more important is knowing the boundaries of one's circle of competence.

Managers building security teams should evaluate the individuals on the team and map out the department's collective circle of competence. This also helps identify gaps that must be filled.

4. Occam's Razor
Occam's razor can be summarized like this: "Among competing hypotheses, the one with the fewest assumptions should be selected."

It's a principle of simplicity that's relevant to security on many levels. Often hackers will use simple, tried-and-tested methods to compromise a company's systems: the infected USB drive in the parking lot or the perfectly crafted spearphishing email that purports to be from the finance department.

While complex and advanced attack avenues also exist, they are unlikely to be used against most companies. By applying Occam's razor, attackers can often compromise targets faster and at lower cost. The same principle can and should be applied when securing organizations.

5. Second-Order Thinking
Second-order thinking means considering that effects themselves have effects. It forces you to think long term about what action to take. The question to ask is, "If I do X, what will happen after that?"

It's easy in the security world to give first-order advice. For example, keeping up to date with security patches is good advice. But without second-order thinking, this can lead to poor decisions with unforeseen consequences. It's vital that security professionals consider all implications before executing. For example, "What impact will there be on downstream systems if we upgrade the OS on machine X?"

6. Thought Experiments
A technique popularized by Albert Einstein, the thought experiment is a way to logically carry out a test in one's own head that would be difficult or impossible to perform in real life. In security, this is usually used during "tabletop" exercises or when risk modeling. It can be extremely effective when used in conjunction with other mental models.

The purpose isn't necessarily to reach a definitive conclusion but to encourage challenging thoughts and to push people outside of their comfort zones.

7. Probabilistic Thinking (Bayesian Updating)
The world is dominated by probabilistic outcomes, as distinguished from deterministic ones. Although we cannot predict the future with great certainty, we often subconsciously make decisions based on probabilities. For example, when crossing the road, we believe there's a low risk of being hit by a car. The risk exists, but if you've looked for traffic, you are confident that you can cross.

The Bayesian method says that one should consider all prior relevant probabilities and then incrementally update them as newer information arrives. This method is especially productive given the fundamentally nondeterministic world we experience: we must use both prior odds and new information to arrive at our best decisions.
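To make this concrete, below is a minimal sketch of Bayesian updating applied to a security question: judging how likely it is that an alert represents a real intrusion. All of the numbers (the 1% prior and the likelihoods attached to each observation) are hypothetical, and the second update assumes the two observations are conditionally independent.

# A minimal sketch of Bayesian updating with made-up numbers:
# estimating the probability that an alert is a real intrusion,
# then revising that estimate as new evidence arrives.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Hypothetical starting point: 1% of alerts of this type turn out to be real intrusions.
p_intrusion = 0.01

# New information: the flagged host also contacted a known-bad IP address.
# Assume this happens for 90% of real intrusions but only 5% of benign alerts.
p_intrusion = bayes_update(p_intrusion, 0.90, 0.05)
print(f"After first observation: {p_intrusion:.2%}")   # roughly 15%

# Further information: credentials on that host were used outside business hours.
# Assume 70% of real intrusions vs. 20% of benign alerts show this behavior.
p_intrusion = bayes_update(p_intrusion, 0.70, 0.20)
print(f"After second observation: {p_intrusion:.2%}")  # roughly 39%

The point isn't the particular numbers; it's the habit of starting from a prior and revising it incrementally as evidence accumulates, rather than jumping to a verdict from a single observation.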

While there may not be a simple answer to what it means to "think like a hacker," the use of mental models to build frameworks of thought can help avoid the pitfalls associated with approaching every problem from the same angle.

I've listed seven mental models here, some of which you may already be familiar with and others you may want to try. Please share any of your favorite security and hacker mental models and problem-solving techniques in the comments.

Javvad Malik is a London-based IT security professional, active blogger, event speaker, and industry commentator, perhaps best known as one of the industry's most prolific video bloggers, with his signature fresh and light-hearted perspective on ...