How Our Behavioral Bad Habits Are a Community Trait and Security Problem
Learn to think three moves ahead of hackers so you're playing chess, not checkers. Instead of reacting to opponents' moves, be strategic, and disrupt expected patterns of vulnerability.
December 14, 2022
Many an article chronicles hacked passwords available in bulk on the evil "Dark Web." It's presented as evidence that the bad behavior of users is the root of all hacking. But as a former red teamer, I can tell you the end user isn't the only one who is a prisoner to discernible behavioral patterns.
There is a "pattern of vulnerability" in human behavior extending far beyond end users into more complex IT functions. Finding evidence of these patterns can give hackers an upper hand and speed the timeline of compromise.
It's a reality I recognized earlier in my career in operational roles. I've physically helped rebuild and relocate data centers and rewire buildings from top to bottom. It gave me a great perspective of what it takes to build in security from scratch, and how unconscious behaviors and preferences can put it all at risk. In fact, understanding how to identify these patterns gave me a very reliable "superpower" when I moved into red teaming, which ultimately resulted in a patent grant. But more on that later.
Let's start by examining how our addiction to patterns betrays us — from credentials, to software operation, to asset naming.
While technology has afforded us so many benefits, the complexity of managing it — and the cumbersome controls intended to protect it — drives people to repeatable patterns and the comfort of familiarity. The more regular the task or function becomes, the more complacent we get with the pattern and what it telegraphs. For a red teamer, the ability to watch routines, from the physical to the logical, can offer a wealth of intelligence. Repeatability offers opportunity and time to discern patterns, and then to find the vulnerability in those patterns that can be exploited.
Internal naming schemes in particular — be they asset names, system names, or credential groupings — lend themselves to picking common words for descriptive categorization. I saw one organization that used the names of mountains. And while you may not know which system is K2 and which is Denali, the scheme acts as a filter for an attacker exploring an environment. It's also an excellent social engineering tool, allowing an attacker to "speak the internal IT lingo." You may ask, OK, but "what's in a name?"
I'm sure you've heard of brute-force attacks where attackers throw guesses in volume at a target to find the right combination that leads to access. It's a numbers game and a blunt instrument. However, if you can discern the use of naming conventions, it sharpens the ability to focus on a range of accounts or systems, and then understand their potential attributes. It speeds up the clock for an attacker.
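To make that "speeds up the clock" point concrete, here is a minimal sketch comparing search spaces. The mountain names and the host-numbering convention are hypothetical, invented for illustration — the point is only that a leaked convention shrinks the guess space by many orders of magnitude compared with blind brute force.

```python
import itertools
import string

# Hypothetical: recon suggests internal hosts follow a
# "mountain name + two-digit number" convention.
KNOWN_THEME = ["k2", "denali", "rainier", "everest", "fuji"]

def convention_guesses():
    """Candidate hostnames focused by the leaked naming convention."""
    for name, num in itertools.product(KNOWN_THEME, range(100)):
        yield f"{name}{num:02d}"

def blind_search_space(length=8):
    """Size of a blind brute-force space over lowercase alphanumerics."""
    alphabet = string.ascii_lowercase + string.digits
    return len(alphabet) ** length

focused = sum(1 for _ in convention_guesses())
print(f"focused candidates: {focused}")              # 500
print(f"blind 8-char space: {blind_search_space()}")  # ~2.8 trillion
```

Five hundred focused guesses versus trillions of blind ones: that is the difference a telegraphed pattern makes to an attacker's timeline.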
But, you ask, "If these are internal conventions, how does an external attacker even find this type of information?"
Enter my aforementioned superpower. As any experienced red teamer knows, information leaks out of organizations in many ways; you just need to know where to look, and how to find the signals in the noise.
Internal naming groups and conventions become exposed to the outside world in a variety of ways. They're buried in website code, detailed in technical documentation or as part of APIs, or just simply published in public system information.
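As a sketch of what "buried in website code" can look like, the snippet below scans a saved fragment of page source for hostname-like strings. The page content, hostnames, and `example.com` domain are all invented for illustration; real recon would run patterns like this over crawled HTML, JavaScript bundles, or published API docs.

```python
import re

# Hypothetical fragment of saved page source (HTML comments, script
# tags, and inline JavaScript are common places names leak).
page_source = """
<!-- deploy target: denali02.corp.example.com -->
<script src="https://k2-staging.example.com/app.js"></script>
var api = "https://rainier-api.internal.example.com/v1";
"""

# Pull out hostname-like strings under the target's domain.
hostnames = re.findall(r"[a-z0-9][a-z0-9.-]+\.example\.com", page_source)

# The first labels alone reveal the internal naming theme.
labels = sorted({h.split(".")[0] for h in hostnames})
print(labels)  # ['denali02', 'k2-staging', 'rainier-api']
```

Three stray strings in public code, and the mountain theme — plus a staging host and an API tier — is exposed to anyone who looks.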
Actions Speak Louder
So, we've established how our deeply rooted behaviors can betray our security literally with "writing on the wall." How do we change, or at least be more aware of, our very nature?
There's the old joke summed up in the punch line that you don't need to outrun a tiger — you just need to outrun your companions. With that in mind, first adopt the basic "sneaker" technologies. Password managers, multifactor authentication (MFA), and the like at least allow you to outrun your peers so attackers focus on the laggards in the herd.
Second, opt for regular change. Change is uncomfortable, but that discomfort triggers better situational awareness. Knowing yourself and your environment better, and forcing change, limits an attacker's ability to get to know you too well.
Next, trust your gut. If something doesn't seem right, it probably isn't. If you focus on failure and not the familiarity of the behavior around failure, you're better equipped to see the bad guys coming and make sure a small anomaly doesn't become a big problem.
Finally, play chess, not checkers. Too many organizations think they're playing chess, and may be employing more complex pieces and roles, but if, ultimately, they're playing in reaction to their opponents' moves, it's checkers in disguise.
It's a lesson I am teaching my own son while he's interested in learning chess. He's learning the strategy behind the game. He understands using the pieces and their characteristics to manipulate the game, and is quickly catching on to the fact that he also needs to focus on manipulating me. I'm teaching him to think three moves ahead, think about what is possible, and lure his opponent into doing what he wants them to do, not what they want to do — and, most importantly, to trust the gambit.