We’re all familiar with the famous quote: Insanity is doing the same thing over and over again and expecting different results.
News headlines from around the globe recently told the story of how hackers were able to steal hundreds of millions of dollars from banks using malware. The story was based on a report released by Kaspersky Lab.
The single most important element of this breach is that it was carried out in a “low and slow” fashion over several months. The threat actors used prolonged, quiet, and insidious attack and exfiltration methods that went virtually undetected by all of the defense-in-depth security solutions in place.
Think about that for a second. Banks are the most invested, mature, and aware institutions when it comes to stopping the bad guys. They put millions of dollars toward security infrastructure to guard against this type of loss (cash!). But still, the hack and loss cycle continues.
Hackers are getting very good at exploiting the most boring, repeatable, and accepted activities of normal employees behind today’s firewalls and advanced perimeter defenses. A simple phishing exploit is all it takes to break through the castle door or jump the moat. The holy grail of any breach is full access to an identity and its credentials, which can be used to traverse the network, plant malware, and call home.
Why not just stop phishing attacks by training end users not to click on email that seems out of band or odd? Don’t click on attachments, don’t follow links to subscribe or log in, and don’t answer questions on web-based forms. While training is a noble and necessary pursuit, a phishing attack only has to be right one time. When threat actors launch an attack planned to take months or years to carry out, all they have to do is spam and wait. One new employee, one new contractor, one new business associate. That’s all it takes to p0wn a target. Keystroke loggers and botnet malware will do the rest. So what is the alternative to training and awareness?
The first, most critical aspect of recognizing these destructive attack patterns is to build a baseline around users and the applications, data and machines (networks) they access on a regular basis. But that alone isn't enough. For example, if Betty was phished on her second day on the job, looking at all her activity over a year may be meaningless from an anomaly perspective. She's been hacked all along and nothing would look out of the ordinary.
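As a rough illustration of what such a baseline might look like, here is a minimal sketch in Python. The log format, resource names, and the frequency threshold are all illustrative assumptions, not any real product’s API; the idea is simply to record which resources make up a user’s normal activity and flag accesses outside that set.

```python
from collections import Counter

# Hypothetical access-log entries as (user, resource) pairs.
# Names and the min_share threshold are illustrative assumptions.
access_log = [
    ("betty", "crm"), ("betty", "email"), ("betty", "crm"),
    ("betty", "email"), ("betty", "payroll_db"),
]

def build_baseline(log, user, min_share=0.25):
    """Resources that make up a meaningful share of the user's activity."""
    counts = Counter(res for u, res in log if u == user)
    total = sum(counts.values())
    return {res for res, n in counts.items() if n / total >= min_share}

def flag_anomalies(log, user, baseline):
    """Accesses that fall outside the established baseline."""
    return [res for u, res in log if u == user and res not in baseline]

baseline = build_baseline(access_log, "betty")
print(flag_anomalies(access_log, "betty", baseline))  # → ['payroll_db']
```

Note that this single-user view is exactly what fails in the Betty scenario: if the payroll database access was part of her log from day one, it would land inside her baseline and never be flagged.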
Instead, we can put Betty in a peer group to contrast her activity against others with a similar job function and role in the company. This way, peer group analytics can zero in on suspicious or outlier patterns that don’t rely solely on huge shifts in behavior (like massive downloads or unusual geo-location login activity). The low and slow activities of moving money around and using Betty’s ID to access systems her job does not normally require suddenly become threat indicators rated ‘High’ rather than ‘Low’ or ‘Normal.’
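The peer-group idea can be sketched as a simple statistical comparison: score one employee’s activity against the distribution of her peers. The transfer counts, peer names, and the 3-sigma threshold below are made-up assumptions for illustration; real user-behavior analytics products use far richer models.

```python
import statistics

# Hypothetical weekly wire-transfer counts for employees in Betty's role.
# Peer data and the 3-sigma threshold are illustrative assumptions.
peer_transfers = {"alice": 4, "bob": 5, "carol": 3, "dave": 4, "erin": 5}
betty_transfers = 14

def peer_zscore(value, peers):
    """How many standard deviations 'value' sits from the peer mean."""
    mean = statistics.mean(peers.values())
    stdev = statistics.stdev(peers.values())
    return (value - mean) / stdev

z = peer_zscore(betty_transfers, peer_transfers)
risk = "High" if abs(z) > 3 else "Normal"
print(f"z-score: {z:.1f}, risk: {risk}")  # → z-score: 11.7, risk: High
```

The point of the peer comparison is that 14 transfers a week may look perfectly ordinary in Betty’s own history (if she was compromised early), yet stands far outside what anyone else in her role does.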
The problem with the type of attack exposed in the Kaspersky report is that it blurs the line between cyber and insider threats. Hackers are focusing on and exploiting human factors as they know full well that most security tools aren't smart enough to put patterns of human behavior in context. That's why low and slow is so effective.
One way to detect these attacks is a concept called “self-audit.” Imagine if Betty routinely received a credit card-like statement of her activity. It could highlight anomalous behaviors and enable her to alert the security team immediately if she noticed any transactions not made by her. This and other techniques can help transform humans from weak links into weapons in the fight against threats.
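A self-audit statement could be as simple as a periodic, human-readable rendering of the user’s own activity log with flagged items called out. The record fields and wording below are hypothetical, a sketch of the concept rather than any vendor’s format.

```python
from datetime import date

# Hypothetical activity records; field names are illustrative assumptions.
activity = [
    {"date": date(2015, 3, 2), "action": "login",
     "source": "office VPN", "flagged": False},
    {"date": date(2015, 3, 3), "action": "wire approval",
     "source": "unknown host", "flagged": True},
]

def render_statement(user, records):
    """Render a credit card-style activity statement for self-audit."""
    lines = [f"Activity statement for {user}"]
    for r in records:
        marker = "  <-- review: not you? report it" if r["flagged"] else ""
        lines.append(f'{r["date"]}  {r["action"]:<14} {r["source"]}{marker}')
    return "\n".join(lines)

print(render_statement("betty", activity))
```

Just as with a credit card statement, the user herself is often the only party who can say with certainty which line items are not hers.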
We can’t keep doing the same thing over and over again and expecting different results. We know attackers are exploiting human factors to cover their tracks. Once an identity is compromised, you can be dead certain that odd or deviant behavior patterns will show up: the kind of out-of-band activity a typical employee would simply never attempt. So let’s gather security intelligence that monitors and measures users’ behavior to identify risky events very early in the kill chain, ideally in the reconnaissance phase.