Over the past year, the sheer number of ransomware attacks has increased dramatically, with organizations of all stripes being affected: government entities, educational institutions, healthcare facilities, retailers, and even agricultural groups.
While the bulk of the media attention has been on critical infrastructure and large organizations, attackers are not limiting themselves to just those types of victims.
“That’s really just the tip of the iceberg,” says Max Heinemeyer, director of threat hunting at Darktrace. “We see not just big names being hit. It's basically any company where adversaries think they can pay the ransom. Anybody who's got money and running some kind of digital business is basically in the crosshairs.”
What’s even more concerning than the fact that pretty much any organization can be targeted is that ransomware attacks are evolving rapidly to add new capabilities. Where past attacks involved one compromised machine, or a handful, attacks now take down whole networks. Where the malware once focused on encrypting files and making them inaccessible, it now also exfiltrates data outside the network. Gangs threaten secondary attacks on top of the initial infection, such as launching denial-of-service attacks or dumping the stolen files in public – the latter exposing the organization to a whole other set of problems associated with a data breach.
There is a tendency to assume that ransomware gangs always follow a set script when designing their attacks. However, the “professionalization” of the ransomware landscape means these attackers have their own supply chain to work with.
“They have specialized penetration operators to hack into systems, they buy access to networks, and they have negotiators to discuss ransoms,” Heinemeyer says.
Ransomware gangs don’t always use phishing, exploit zero-days, or abuse supply chains, either, he adds.
“They go with whatever their hackers bring on board,” Heinemeyer says. “If [hackers] want to use Cobalt Strike, they use Cobalt Strike. Or they can use their own malware. If they prefer domain fluxing, they use domain fluxing. If they are very adept at social engineering, they’re going to use that. If they buy access on the Dark Web, such as access cookies or pre-compromised systems, they can use that.”
While random and opportunistic attacks still exist, these gangs are increasingly researching their targets beforehand to find a suitable attack method.
“You think, ‘Oh, my God, that's 1995-style,’ but it still works because there's so many companies out there that are vulnerable. They have open infrastructure, or they run on edge systems,” Heinemeyer says. But the gangs don’t have to stick with just one attack method. They are taking the time to understand the networks they are targeting and can swap out tools as needed.
The industry tends to predefine the threat – “Mimikatz is the latest rage, or this version of Cobalt Strike” – and focus the solutions on those elements, Heinemeyer says.
“You don't want to have your domain controller have an open RDP port without any brute-force protection now. And you don't want to have an Exchange server that didn't get patched,” he explains. “But for most organizations, there is the problem of what to do next: Should I create more security awareness campaigns because phishing is the latest thing? Should I increase my patch cycles or get more threat intelligence?”
Heinemeyer cautions against relying too much on defining what the attack would look like. Defenders focusing only on tactics, techniques, and procedures (TTPs) and indicators of compromise (IoCs) are likely to see only legacy ransomware and attacks that are utilizing already-known methods.
“There’s no common modus operandi anymore,” he says. “We [the industry] try to extrapolate tomorrow’s attack from yesterday’s attacks: Let’s look at yesterday’s threat intelligence. Let’s look at yesterday’s rules. There are attacks leveraging HTTPS – let’s focus on monitoring HTTPS. But now, even more in today’s dynamic threat landscape, that just doesn’t hold up anymore. Tomorrow’s attackers can use techniques that were never applied before. And that is where security teams struggle, because they invest in the latest trends based on what they listen to.”
Is AI the Answer?
“How can you defend against something that is unpredictable?” Heinemeyer asks. The answer, as he sees it, is harnessing artificial intelligence (AI) to grasp all the possibilities and find relationships that human analysts and traditional security tools like firewalls would miss.
“It is super important to understand what the AI does,” Heinemeyer says. “AI is not pixie dust. We don't just use it because it's a buzzword.”
Heinemeyer differentiates between AI and supervised machine learning, which relies on a large set of labeled data to train a model to find and recognize patterns. So if the model sees enough emails in its training data, then when presented with a new piece of mail, it can tell whether it may be malicious. Supervised machine learning looks for things that are similar to previous things, but that doesn’t address the question of finding new things. That’s where unsupervised machine learning comes in – “and it is still very hard to get it right,” Heinemeyer says.
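To make that distinction concrete, here is a deliberately tiny sketch of the supervised idea: score new mail by its resemblance to previously labeled mail. The word-counting “model” and the sample emails are invented for illustration; real supervised classifiers are far more sophisticated. Note how a phishing email built from words the model has never seen produces no signal at all – exactly the “finding new things” gap Heinemeyer describes.

```python
from collections import Counter

# Toy supervised "model": word counts from a labeled corpus of
# malicious and benign mail (the labels are the supervision).
def train(labeled_emails):
    bad, good = Counter(), Counter()
    for text, malicious in labeled_emails:
        (bad if malicious else good).update(text.lower().split())
    return bad, good

# Higher score = more similar to previously seen malicious mail.
def score(text, bad, good):
    return sum(bad[word] - good[word] for word in text.lower().split())

bad, good = train([
    ("urgent invoice attached open now", True),
    ("reset your password immediately", True),
    ("minutes from the quarterly meeting", False),
    ("lunch plans for friday", False),
])

phishy = score("urgent password reset invoice", bad, good)       # resembles known phish
benign = score("quarterly meeting minutes", bad, good)           # resembles known-good mail
novel = score("wire transfer confirmation enclosed", bad, good)  # unseen words: no signal
```

The `novel` email is arguably the most dangerous of the three, yet the model scores it as neutral because nothing like it appears in the training data.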
With unsupervised machine learning, or self-learning, there is no training data.
“You take the AI, you put it into an environment, and instead of saying these are examples of web-app exploits, and these are examples of phishing emails, and these are examples of malicious domains, we let the AI see the data, software, service data, email, communication, network data, endpoint data, and learn on the fly,” Heinemeyer says. “The AI understands what normal means for everything it sees and can then spot various deviations from that."
In other words, the AI is contextualized.
"It's specific to your environment," Heinemeyer says. "The AI learns that you normally use Teams, upload things to your CRM, go on Twitter, work in a certain time zone, and use Office 365. With self-learning, the AI learns from life and not based on previous attack data or based on what happened in other organizations.
“If all of a sudden, you receive an email that looks very out of place next to your previous communication, you click a link in that email and go to a website you have never visited before in a manner that is unusual for you and your peer group, then you download something that is super weird, and you start scanning the whole infrastructure and use SMB to encrypt data, which you never do, on servers you never touch – all of these things are not predefined, but put together, they look like an attack, they smell like an attack, and they walk like an attack,” Heinemeyer says.
AI can identify the attack even if it has never been seen before, even if there’s no signature, or if there’s a zero-day vulnerability being exploited.
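A rough sketch of the self-learning idea Heinemeyer describes: learn what “normal” looks like per device, then score new activity by how far it deviates, letting several mild anomalies add up to one strong signal. The device name, features, and arithmetic below are illustrative assumptions for a toy example, not Darktrace’s actual model.

```python
import math
import random
from collections import defaultdict

class BaselineMonitor:
    """Learns a running mean/variance per device and feature (Welford's
    online algorithm), then scores new activity by its deviation."""

    def __init__(self):
        # device -> feature -> [count, mean, sum of squared deviations]
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0.0, 0.0]))

    def observe(self, device, features):
        for name, value in features.items():
            s = self.stats[device][name]
            s[0] += 1
            delta = value - s[1]
            s[1] += delta / s[0]
            s[2] += delta * (value - s[1])

    def score(self, device, features):
        # Sum of per-feature z-scores: several mild deviations combine
        # into one strong signal, so no single rule must be predefined.
        total = 0.0
        for name, value in features.items():
            n, mean, m2 = self.stats[device][name]
            if n < 2:
                continue
            std = math.sqrt(m2 / (n - 1)) or 1e-9
            total += abs(value - mean) / std
        return total

random.seed(0)
monitor = BaselineMonitor()
# "Normal" for this device: light SMB activity, the odd new domain.
for _ in range(500):
    monitor.observe("laptop-7", {"smb_writes": random.gauss(5, 2),
                                 "new_domains": random.gauss(1, 0.5)})

quiet = monitor.score("laptop-7", {"smb_writes": 6, "new_domains": 1})
loud = monitor.score("laptop-7", {"smb_writes": 500, "new_domains": 40})
```

Because the baseline is learned from the device’s own history rather than from attack signatures, the mass-SMB-write burst scores as anomalous even though no rule for it was ever written.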
Can AI Stop Ransomware?
It’s one thing to detect an attack that hasn’t been seen before. But can AI stop ransomware? Heinemeyer says it can.
“Many people think, ‘When I want to stop the ransomware, I have to stop the encryption process,’ but most people forget that a ransomware attack is, first and foremost, a network intrusion,” Heinemeyer says. “There's many steps coming before – somebody has to get in [to the network] somehow. They have to find a way to your domain controller to deploy the ransomware, and they have to get to the right network segment.” There are more steps if they are multistage attacks, such as exfiltrating the data to outside servers or publicly shaming the organization.
Many of these attacks happen over a handful of days – over weekends, bank holidays, or after hours – to reduce the response time of human teams. An attack may start on Friday night, with the data encrypted by Sunday, Heinemeyer says.
“There are a lot of chances to disrupt the ransomware attack before encryption actually happens,” he adds.
If the AI can detect these early signs before encryption starts, the attack can be stopped by evicting the attackers, Heinemeyer says.
“You can maybe prevent the phishing email from being clicked, or you can stop the lateral movement from happening,” he says. “Maybe you can kill the command-and-control process. You contain the attackers by killing network connections.”
Perhaps there was no time for early indicators because all the attack pieces were already in place, or the attack was launched by an insider. It may be difficult to differentiate somebody clicking a button to start the attack from a legitimate backup process, Heinemeyer says. Self-learning AI has more context to tell when that encryption is not a normal process. Even if the AI couldn’t detect the attack earlier, it can stop the encryption by killing the system process and blocking network connections. Perhaps the local files get encrypted, but blocking network connections means the network shares do not. That minimizes the damage the organization has to deal with.
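One hedged illustration of what last-moment detection might key on: ciphertext is statistically distinguishable from ordinary file content, because its byte entropy approaches 8 bits per byte, while documents sit far lower. A burst of high-entropy writes far above a host’s normal rate is therefore a plausible trigger for containment. The thresholds and policy function below are invented for this sketch; they are not any vendor’s detection logic.

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ciphertext approaches 8.0, plain text sits far lower."""
    if not data:
        return 0.0
    total = len(data)
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def should_contain(writes_per_minute: int, sample: bytes,
                   baseline_writes: int = 20) -> bool:
    # Hypothetical policy: a write burst far above this host's learned
    # baseline, whose content looks encrypted, triggers containment
    # (kill the process, drop its network connections).
    burst = writes_per_minute > 10 * baseline_writes
    return burst and shannon_entropy(sample) > 7.5

ciphertext_like = os.urandom(4096)               # stands in for encrypted output
document_like = b"quarterly revenue report " * 200

contain_attack = should_contain(400, ciphertext_like)  # fast + high entropy
contain_doc = should_contain(400, document_like)       # fast, but readable content
contain_backup = should_contain(15, ciphertext_like)   # encrypted, but at a normal pace
```

Combining rate and entropy is what separates the attack from the look-alikes: a busy but ordinary process and a slow encrypted backup each trip only one of the two conditions.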
Self-learning AI detects attacks in areas humans may miss because there are just so many things to keep track of, and it can respond faster than humans.
“These attacks happen at machine speed, faster than any human team can react,” Heinemeyer says. “So you need to contain it and stop it from doing damage. Get the human team time to then come in with incident response to uncover the root cause.”
AI Can Scale, Humans Can’t
“Security never was a human scale problem. It is too complex,” Heinemeyer says, noting that even when most enterprise workloads were on-premises, it was very difficult to know the ins and outs of the environment and understand the attack surface. The enterprise environment is now even more complicated, with on-premises systems coexisting with cloud platforms, bring-your-own-device challenges, supply chain attacks, insider threats, and risks associated with outsourcing to third-party providers.
“There's so many things that complicate this further,” he says. “Getting everything right with security was always hard in an on-premises network. Getting everything right now, where you can’t even put your finger on where you end and your suppliers start, is impossible for humans.”
People understand what network attacks look like – when somebody clicks on a phishing email, malware gets installed. That malware moves around, exfiltrates data, and encrypts it. Try to extrapolate that to cloud environments, and it becomes harder to visualize what an attack against cloud systems looks like. Most security teams have never seen what a compromise of an Amazon Web Services instance looks like, let alone had to deal with one, Heinemeyer says.
“It’s not just a technology problem. It’s a scale problem. And it’s not a human-scale problem to understand this, stay up-to-date, and keep current,” Heinemeyer says. “The complexity has exploded. Complexity killed the cat.”