5/28/2013 04:18 PM
Wendy Nather
Commentary

The Network And Malware, Part Deux

Two analysts, one topic

This is a follow-on to my friend Mike Rothman's post. We like joining forces on occasion, even though we're nominally competitors – and it's not just because in the infosec pop charts, analysts rank right down there with Big 4 auditors and the SEC. Think of it either as collusion to take over the industry, or a Tuesday night garage band jam session.

Mike talked about the first part of what most people see as a timeline for dealing with malware: ideally, you should be detecting it early so that you can stop it before it reaches a vulnerable target. The follow-on to that, of course, is not to have any vulnerable targets to hit. That's the Sisyphean task that makes being a CISO such an unpopular career choice: "Wanted: one martyr with 10 years of experience, to create and maintain perfect 24/7 defenses, despite the best efforts of all other stakeholders, without pissing them off. If you fail big enough, we'll send you packing."

So it's not too far of a stretch to say, as Richard Bejtlich has for years, that Prevention Eventually Fails. And it may be FUD to say that everyone is probably already compromised to some extent, but sometimes the only difference between FUD and reality is in how you present it. Either way, you need detection just as much as prevention, and believe it or not, this is the hard part.

As Mike pointed out, detecting evidence of malware that has already landed means looking for automated activity on the network. This activity can take the form of phoning home to a command and control unit, performing further reconnaissance through scanning and discovery, or exfiltrating data. It can also involve looking for configuration changes and the presence of artifacts on disks and in memory, so you should not assume that you're covered if all you do is network-based monitoring.
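Beaconing in particular tends to betray itself through regularity: an infected host phones home on a timer, while humans don't. Here's a minimal sketch of that idea (the function name and the toy timestamps are my own, not from any particular product), scoring a host's connection times by how evenly spaced they are:

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Score how machine-like a series of connection times looks.

    Returns the coefficient of variation of the gaps between
    successive connections: values near 0 suggest a beacon firing
    on a timer, while human-driven traffic is far more irregular.
    """
    if len(timestamps) < 3:
        return None  # too few samples to judge
    ts = sorted(timestamps)
    gaps = [later - earlier for earlier, later in zip(ts, ts[1:])]
    avg = mean(gaps)
    if avg == 0:
        return None  # duplicate timestamps only
    return pstdev(gaps) / avg

# A host connecting exactly every 300 seconds scores 0.0;
# ordinary browsing scores much higher.
```

Real beacons add jitter and sleep for long stretches, which is exactly why the historical data discussed below matters so much.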

This may seem so basic as to be obvious, but when you're doing network-based malware detection, you're looking for current activity. This is not as easy as you'd think, because attackers have gotten very good at hiding it: disguising their traffic, obfuscating it, hitching a ride on legitimate traffic, using routes and protocols that you'd never suspect, and spreading it out over time so that you're less likely to piece it together. Malware has gotten so clever, for example, that it can find out where to phone home by executing a Google search on a specific term that will bring up a sponsored listing or ad on the sidebar of the results page that links to the instructions. Got all that? Yes, the attackers are fiendishly clever.

Because of this, after-the-fact malware detection on the network has to cover a wide range of data points. It needs to have historical information on what happened previously so that it can catch those dormant infections when they finally send out a beacon ("Give me a ping, Vasily – one ping only, please"). It also needs to understand baseline network traffic to spot anomalies, and crack open encrypted SSL traffic where possible to read the contents. Just figuring out what counts as anomalous traffic is a PhD dissertation in and of itself.
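To make the baseline idea concrete, here's a toy illustration with invented host names and numbers: keep a per-host history of daily traffic volumes, and flag anything that strays several standard deviations from its own norm. Real anomaly detection is, as noted, dissertation-grade; this is only the skeleton:

```python
from statistics import mean, pstdev

def anomalous_hosts(baseline, today, threshold=3.0):
    """Flag hosts whose traffic volume today deviates sharply from
    their own historical baseline.

    baseline maps host -> list of past daily byte counts;
    today maps host -> today's observed byte count.
    """
    flagged = []
    for host, history in baseline.items():
        mu = mean(history)
        sigma = pstdev(history)
        observed = today.get(host, 0)
        if sigma == 0:
            if observed != mu:  # any change from a flat baseline
                flagged.append(host)
        elif abs(observed - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

# A workstation that normally moves ~100 MB a day and suddenly
# moves 5 GB gets flagged; one within its usual range does not.
```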

Of course, it also has to be able to look for what we already know is probably bad: signs and symptoms that have been seen before. And if you've been following along up to this point, you may think – as I always start thinking – "signature." But wait! Signatures are bad, right? They're useless for malware detection! Poor signatures have gotten beaten on so much that nobody wants to say the S-word any more. People would rather say "rules," "blacklists," "heuristics," "algorithms," or even "indicators of compromise" – and yes, all of these have differences from the definition of "signature," but when you break it down, you're still looking for something based on characteristics that you already know about.
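Whatever you call them, the mechanics reduce to comparing what you observed with what you already know is bad. A sketch of that lookup, with made-up indicator values and event fields:

```python
def match_indicators(events, indicators):
    """Match observed events against known-bad indicators. Call them
    signatures, blacklists, or IOCs: the lookup is the same."""
    bad_domains = indicators.get("domains", set())
    bad_hashes = indicators.get("sha256", set())
    hits = []
    for event in events:
        if event.get("domain") in bad_domains:
            hits.append((event["host"], "known-bad domain", event["domain"]))
        if event.get("sha256") in bad_hashes:
            hits.append((event["host"], "known-bad file hash", event["sha256"]))
    return hits
```

Richer indicator languages add context (registry keys, mutexes, behavioral sequences), but the core operation is still a match against prior knowledge.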

Threat intelligence has blossomed as a market, and it's built into just about everything. It's the result of the hard work from carbon-based life forms who disassemble malware samples, profile the attackers who write and use the malware, and listen to Internet chatter. Turning intelligence into something that can tell you what to look for is what post-attack detection is all about. Sharing this data is vital, and collaborative malware platforms and schemas are out there for just this purpose. You can submit a malware sample, or a packet capture file, and get back information on whether someone has seen this before.
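The common currency for those lookups is the file hash: you fingerprint the sample and ask whether anyone has seen it before. Platform names and submission APIs vary, so this sketch stops at the hashing step:

```python
import hashlib

def fingerprint(sample_bytes):
    """Compute the hashes typically used to look a malware sample
    up in a shared intelligence platform."""
    return {
        "md5": hashlib.md5(sample_bytes).hexdigest(),
        "sha1": hashlib.sha1(sample_bytes).hexdigest(),
        "sha256": hashlib.sha256(sample_bytes).hexdigest(),
    }
```

Exact-hash matching breaks the moment the attacker flips one byte, which is why sharing platforms also exchange fuzzier indicators alongside the hashes.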

You'd guess that this intelligence involves mountains of data, and you're right. It goes beyond "big data." It's moby data. To do all this analysis of live network traffic, compare it with historical traffic, and analyze it using a huge store of intelligence is more than a mortal server can handle. Doing it at line speed requires specialized hardware. Refreshing that data so that it's as current as possible also requires more firepower than you can buy off the shelf. Doing all this while ignoring a denial-of-service attack that's trying to distract you from it – that's the holy grail.

Blazingly fast plus moby data equals "not in my datacenter." For the most part, nobody is going to build their own infrastructure to do this; that's why cloud-based monitoring and malware detection are on the rise. If you have a minimal presence to capture the network traffic, you can save all the frantic data crunching for the cloud back-end.

Speaking of horses already having left the barn, there's also another way to skin the malware detection cat.* Remember that a lot of your network traffic goes out on the Internet, and it can be logged and monitored. If some of your infrastructure is talking to known command-and-control systems, or other known compromised systems, that's a pretty good sign that you've got malware. There are security vendors out there that do this listening to the whispers and echoes on the Internet at large, and they can tell you whether you've been compromised – no software or hardware installation required. You can't get lighter-touch than that.

That's the upside. The downside is that there are people out there who already know you've been 0wn3d. If you do nothing else, it might be a good idea to go find them and ask.

*Yes, this is Cliche Menagerie as a Service.

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. With over 30 years of IT experience, she has worked in both financial services and the public sector, in the US and in Europe. You can find her on Twitter as @451wendy.

Comments

Snorty (5/30/2013): What you need to look for are IOCs, or Indicators of Compromise. Having solid AMP software on the endpoint is another must-have tool in the aftermath.