Commentary by Wendy Nather | 5/28/2013 04:18 PM

The Network And Malware, Part Deux

Two analysts, one topic

This is a follow-on to my friend Mike Rothman's post (available here). We like joining forces on occasion, even though we're nominally competitors – and it's not just because in the infosec pop charts, analysts rank right down there with Big 4 auditors and the SEC. Think of it either as collusion to take over the industry, or a Tuesday night garage band jam session.

Mike talked about the first part of what most people see as a timeline for dealing with malware: ideally, you should be detecting it early so that you can stop it before it reaches a vulnerable target. The follow-on to that, of course, is not to have any vulnerable targets to hit. That's the Sisyphean task that makes being a CISO such an unpopular career choice: "Wanted: one martyr with 10 years of experience, to create and maintain perfect 24/7 defenses, despite the best efforts of all other stakeholders, without pissing them off. If you fail big enough, we'll send you packing."

So it's not too far of a stretch to say, as Richard Bejtlich has for years, that Prevention Eventually Fails. And it may be FUD to say that everyone is probably already compromised to some extent, but sometimes the only difference between FUD and reality is in how you present it. Either way, you need detection just as much as prevention, and believe it or not, this is the hard part.

As Mike pointed out, detecting evidence of malware that has already landed means looking for automated activity on the network. This activity can take the form of phoning home to a command and control unit, performing further reconnaissance through scanning and discovery, or exfiltrating data. It can also involve looking for configuration changes and the presence of artifacts on disks and in memory, so you should not assume that you're covered if all you do is network-based monitoring.
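
To make the "automated activity" point concrete, here's a minimal sketch (in Python, with made-up log records and field names) of one common heuristic: flagging hosts that contact the same destination at suspiciously regular intervals, the way a bot beaconing to its command-and-control server would.

```python
# A minimal sketch of one "automated activity" heuristic: flag hosts whose
# outbound connections to a single destination recur at nearly constant
# intervals (beaconing). Log format and field names are assumptions.
from collections import defaultdict
from statistics import mean, pstdev

# Each record: (timestamp_seconds, source_host, destination)
conn_log = [
    (0, "10.0.0.5", "203.0.113.7"),
    (600, "10.0.0.5", "203.0.113.7"),
    (1200, "10.0.0.5", "203.0.113.7"),
    (1795, "10.0.0.5", "203.0.113.7"),
    (50, "10.0.0.9", "198.51.100.2"),
    (4000, "10.0.0.9", "198.51.100.2"),
]

def find_beacons(records, min_connections=4, max_jitter_ratio=0.1):
    """Return (source, destination) pairs whose inter-connection gaps are
    nearly constant -- a crude beaconing signal, not proof of compromise."""
    by_pair = defaultdict(list)
    for ts, src, dst in records:
        by_pair[(src, dst)].append(ts)

    suspects = []
    for pair, times in by_pair.items():
        times.sort()
        if len(times) < min_connections:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        # Low variance relative to the average gap suggests a timer, not a human.
        if avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio:
            suspects.append((pair, avg))
    return suspects

print(find_beacons(conn_log))  # flags 10.0.0.5 -> 203.0.113.7, roughly every 600 seconds
```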

This may seem so basic as to be obvious, but when you're doing network-based malware detection, you're looking for current activity. This is not as easy as you'd think, because attackers have gotten very good at hiding it: disguising their traffic, obfuscating it, hitching a ride on legitimate traffic, using routes and protocols that you'd never suspect, and spreading it out over time so that you're less likely to piece it together. Malware has gotten so clever, for example, that it can find out where to phone home by executing a Google search on a specific term that will bring up a sponsored listing or ad on the sidebar of the results page that links to the instructions. Got all that? Yes, the attackers are fiendishly clever.

Because of this, after-the-fact malware detection on the network has to cover a wide range of data points. It needs to have historical information on what happened previously so that it can catch those dormant infections when they finally send out a beacon ("Give me a ping, Vasily – one ping only, please"). It also needs to understand baseline network traffic to spot anomalies, and crack open encrypted SSL traffic where possible to read the contents. Just figuring out what counts as anomalous traffic is a PhD dissertation in and of itself.
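
As a rough illustration of the baseline idea (not the dissertation), here's a sketch that learns each host's typical hourly outbound volume from history and flags the current hour when it lands far outside that norm. The numbers and the threshold are invented for the example.

```python
# A minimal sketch of the "baseline vs. anomaly" idea: learn each host's typical
# hourly outbound byte count from history, then flag hours that deviate wildly.
# Data shapes and thresholds here are illustrative assumptions, not a product design.
from statistics import mean, pstdev

# Historical hourly outbound bytes per host (the "what happened previously" part).
baseline = {
    "10.0.0.5": [120_000, 95_000, 130_000, 110_000, 105_000, 125_000],
    "10.0.0.9": [2_000, 1_500, 1_800, 2_200, 1_900, 2_100],
}

# Current hour's observations.
current = {"10.0.0.5": 118_000, "10.0.0.9": 450_000}

def anomalies(history, observed, z_threshold=3.0):
    """Flag hosts whose current traffic sits more than z_threshold standard
    deviations above their own historical mean -- a possible exfiltration signal."""
    flagged = []
    for host, value in observed.items():
        past = history.get(host)
        if not past:
            continue  # no baseline yet; a real system would treat new hosts separately
        mu, sigma = mean(past), pstdev(past)
        if sigma == 0:
            continue
        z = (value - mu) / sigma
        if z > z_threshold:
            flagged.append((host, value, round(z, 1)))
    return flagged

print(anomalies(baseline, current))  # 10.0.0.9 is pushing out far more than it ever has
```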

Of course, it also has to be able to look for what we already know is probably bad: signs and symptoms that have been seen before. And if you've been following along up to this point, you may think – as I always start thinking – "signature." But wait! Signatures are bad, right? They're useless for malware detection! Poor signatures have gotten beaten on so much that nobody wants to say the S-word any more. People would rather say "rules," "blacklists," "heuristics," "algorithms," or even "indicators of compromise" – and yes, all of these have differences from the definition of "signature," but when you break it down, you're still looking for something based on characteristics that you already know about.
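
Whatever you call it, the mechanics at the bottom look something like the sketch below: compare what you're seeing against characteristics you already know are bad. The indicator values and log entries are placeholders, not real intelligence.

```python
# A minimal sketch of known-bad matching: whether you call them signatures, rules,
# blacklists, or indicators of compromise, the lookup is the same idea.
KNOWN_BAD = {
    "domains": {"update-check.example-badsite.com"},
    "ips": {"203.0.113.7"},
}

proxy_log = [
    {"host": "10.0.0.5", "dest_domain": "update-check.example-badsite.com", "dest_ip": "203.0.113.7"},
    {"host": "10.0.0.9", "dest_domain": "www.example.com", "dest_ip": "198.51.100.2"},
]

def match_indicators(entries, indicators):
    """Return log entries that touch any known-bad domain or IP."""
    hits = []
    for entry in entries:
        if (entry.get("dest_domain") in indicators["domains"]
                or entry.get("dest_ip") in indicators["ips"]):
            hits.append(entry)
    return hits

for hit in match_indicators(proxy_log, KNOWN_BAD):
    print(f"indicator match: {hit['host']} -> {hit['dest_domain']}")
```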

Threat intelligence has blossomed as a market, and it's built into just about everything. It's the result of the hard work from carbon-based life forms who disassemble malware samples, profile the attackers who write and use the malware, and listen to Internet chatter. Turning intelligence into something that can tell you what to look for is what post-attack detection is all about. Sharing this data is vital, and collaborative malware platforms and schemas are out there for just this purpose. You can submit a malware sample, or a packet capture file, and get back information on whether someone has seen this before.
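
The submit-and-ask workflow, stripped to its bones, looks something like this sketch; note that the lookup URL and the response fields are hypothetical stand-ins, not any particular sharing platform's API.

```python
# A minimal sketch of "submit it and ask if anyone has seen it": hash a suspicious
# file locally, then query a sharing service for prior sightings. The endpoint URL
# and response format below are hypothetical placeholders, not a real service's API.
import hashlib
import json
import urllib.request

LOOKUP_URL = "https://intel.example.org/api/v1/samples/"  # hypothetical service

def sha256_of(path: str) -> str:
    """Hash the sample locally so only the digest leaves your environment."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lookup(sample_path: str) -> dict:
    """Ask the (hypothetical) sharing service whether this sample is known."""
    url = LOOKUP_URL + sha256_of(sample_path)
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

# Example usage (assumes the placeholder service exists):
# report = lookup("suspicious_attachment.bin")
# print(report.get("first_seen"), report.get("malware_family"))
```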

You'd guess that this intelligence involves mountains of data, and you're right. It goes beyond "big data." It's moby data. Analyzing live network traffic, comparing it against historical traffic, and running it through a huge store of intelligence is more than a mortal server can handle. Doing it at line speed requires specialized hardware. Refreshing that data so that it's as current as possible also requires more firepower than you can buy off the shelf. Doing all this while ignoring a denial-of-service attack that's trying to distract you from it – that's the holy grail.

Blazingly fast plus moby data equals "not in my datacenter." For the most part, nobody is going to build their own infrastructure to do this; that's why cloud-based monitoring and malware detection are on the rise. If you have a minimal presence to capture the network traffic, you can save all the frantic data crunching for the cloud back-end.
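
In practice the split looks roughly like this sketch: a small local sensor keeps almost nothing and ships flow metadata to the cloud back-end, which does the heavy correlation. The ingest endpoint, batch size, and record format are assumptions for illustration.

```python
# A minimal sketch of the "light local footprint, heavy lifting in the cloud" split:
# a sensor buffers flow metadata and ships batches to a cloud back-end for the
# expensive analysis. The endpoint and record format are illustrative assumptions.
import json
import urllib.request

CLOUD_INGEST_URL = "https://monitoring.example.org/ingest"  # hypothetical back-end
BATCH_SIZE = 500

_buffer = []

def record_flow(src: str, dst: str, dst_port: int, byte_count: int) -> None:
    """Buffer a single flow record; flush once the batch is full."""
    _buffer.append({"src": src, "dst": dst, "dport": dst_port, "bytes": byte_count})
    if len(_buffer) >= BATCH_SIZE:
        flush()

def flush() -> None:
    """Send the buffered batch to the cloud back-end and clear the local buffer."""
    if not _buffer:
        return
    payload = json.dumps(_buffer).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_INGEST_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)  # fire-and-forget for this sketch
    _buffer.clear()
```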

Speaking of horses already having left the barn, there's also another way to skin the malware detection cat.* Remember that a lot of your network traffic goes out on the Internet, and it can be logged and monitored. If some of your infrastructure is talking to known command-and-control systems, or other known compromised systems, that's a pretty good sign that you've got malware. There are security vendors out there that do this, listening to the whispers and echoes on the Internet at large, and they can tell you whether you've been compromised – no software or hardware installation required. You can't get lighter-touch than that.
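
A trivial version of that outside-in check, with invented feed data, might look like this: intersect the addresses you own with the addresses a third party has seen talking to known-bad infrastructure.

```python
# A minimal sketch of the outside-in check: take the address ranges you own and see
# whether any of them show up in an externally observed list of hosts contacting
# known command-and-control infrastructure. The feed contents are invented.
import ipaddress

MY_RANGES = [ipaddress.ip_network("198.51.100.0/24")]   # addresses you announce
OBSERVED_TALKING_TO_C2 = [                               # hypothetical vendor feed
    ipaddress.ip_address("198.51.100.23"),
    ipaddress.ip_address("192.0.2.77"),
]

def compromised_candidates(ranges, observed):
    """Return observed addresses that fall inside your own ranges."""
    return [addr for addr in observed if any(addr in net for net in ranges)]

for addr in compromised_candidates(MY_RANGES, OBSERVED_TALKING_TO_C2):
    print(f"{addr} was seen contacting known C2 -- time to go ask questions")
```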

That's the upside. The downside is that there are people out there who already know you've been 0wn3d. If you do nothing else, it might be a good idea to go find them and ask.

*Yes, this is Cliche Menagerie as a Service.

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. With over 30 years of IT experience, she has worked in both financial services and the public sector, in the US and in Europe. You can find her on Twitter as @451wendy.
