Perimeter
2/19/2013 11:19 AM
Wendy Nather
Commentary

Rashōmonitoring

When you don’t know who to believe

There’s something to be said for pure, unprocessed data: You know it doesn’t come with any assumptions.

Here’s a simple example: Logs show use of an application from an executive’s phone in Maryland. They also show some failed login attempts from an unknown device in Tokyo, within two hours of the other events. Now, some analytics would assume that the second set of logs was an attack from Asia, and the APT ALARM would go off, with "hacking-back" teams, energy drinks, and virtual chest bumps all around.

But suppose the executive really was in Tokyo and had left her phone at home, where her 6-year-old picked it up and started playing with it. And because she’d left the phone at home, she was borrowing someone else’s iPad -- and, it being late at night after a liquid dinner, the login process just wasn’t working as well as it usually does.
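
To see where that assumption gets baked in, here is a minimal sketch -- in Python, with invented field names, coordinates, and thresholds rather than any particular product’s rule language -- of the kind of "impossible travel" correlation that would fire on the Maryland-plus-Tokyo logs above. The rule quietly assumes one person, one device, one place at a time, which is exactly what the executive’s evening violated.

    # Hypothetical "impossible travel" correlation rule. Event fields, thresholds,
    # and the geolocation step are all invented for illustration.
    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class AuthEvent:
        user: str
        lat: float        # geolocated from the source IP
        lon: float
        timestamp: float  # seconds since epoch
        success: bool

    def distance_km(a: AuthEvent, b: AuthEvent) -> float:
        """Great-circle distance between two events (haversine formula)."""
        dlat = radians(b.lat - a.lat)
        dlon = radians(b.lon - a.lon)
        h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def looks_like_account_takeover(a: AuthEvent, b: AuthEvent,
                                    max_speed_kmh: float = 900.0) -> bool:
        """Flag two events for one account that imply faster-than-airliner travel.

        Baked-in assumption: one person, one device, one place at a time.
        """
        if a.user != b.user:
            return False
        hours = abs(b.timestamp - a.timestamp) / 3600
        if hours == 0:
            return True
        return distance_km(a, b) / hours > max_speed_kmh

    # Maryland (the 6-year-old playing with the phone) vs. Tokyo (the executive
    # fumbling logins on a borrowed iPad) -- same legitimate user, alarm anyway.
    maryland = AuthEvent("exec01", 39.0, -76.8, 1361300400, success=True)
    tokyo = AuthEvent("exec01", 35.7, 139.7, 1361306400, success=False)
    print(looks_like_account_takeover(maryland, tokyo))  # True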

Security products are featuring more analytics these days to help automate and speed the interpretation and response process -- and that’s good because humans are both (relatively speaking) slow and expensive. But any rules, algorithms, or interpretations of the data can also reflect the perspective and assumptions of whoever created them.

These perspectives can clash, as shown in Akira Kurosawa’s classic film "Rashōmon," in which the main characters all relate their versions of the same story. In the same way, analysts can put their own interpretations on security events, depending on their own states of knowledge and even the order in which they see the data. Here are some assumptions that you may want to take into account when using automated or manual analysis:

  • Anything that appears to originate from an IP address in Eastern Europe or China is Bad.
  • Traffic from a proxy means that someone is up to No Good.
  • Nobody ever shares an account.
  • Anything that overloads a system is a denial-of-service attack. Or it’s never a denial-of-service attack; it’s just a runaway process or memory leak.
  • All systems are using dependable time sources that have not been tampered with. (For some scary scenarios that contradict this assumption, see Joe Klein’s "Time Lord" presentation at ShmooCon last weekend.)
  • Deviations from a baseline are always Bad. (If that were the case, then online sales events would be something to avoid.)
  • A policy violation is always unauthorized. (See my post on the need for exceptions.)
  • An attack pattern or specific piece of malware that has been seen before is coming from the same threat actor.
  • The more sources of data you have that are saying the same thing, the more confidence you should have that it’s accurate.
In order to avoid falling victim to unconscious (or undocumented) assumptions, make sure you know the models behind your analytics. Are you using a product from a company that started in the defense sector? Is the statistical analysis intended to detect fraud in financial transactions, not overdue library books? Are you using statistical baselines that are out of date and don’t reflect your current application traffic? How are historical events weighted in analyzing new ones?
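
To make the stale-baseline problem concrete, here is a small sketch -- Python again, with made-up traffic numbers and an arbitrary threshold -- of a plain standard-deviation check. If the baseline was sampled during a quiet month, a legitimate spike from an online sales event is statistically indistinguishable from an attack; it’s the model, not the data, that is wrong.

    # Hypothetical baseline-deviation check; the numbers and threshold are invented.
    from statistics import mean, stdev

    def is_anomalous(requests_per_min: float, baseline: list[float],
                     threshold: float = 3.0) -> bool:
        """Flag anything more than `threshold` standard deviations above the baseline."""
        mu = mean(baseline)
        sigma = stdev(baseline)
        return (requests_per_min - mu) / sigma > threshold

    # Baseline sampled during a quiet month; today is a promotion day.
    quiet_month = [110.0, 95.0, 120.0, 105.0, 98.0, 112.0]
    print(is_anomalous(900.0, quiet_month))  # True -- but it's customers, not an attack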

I’m not saying that you should distrust your SIEM. But I am saying that you shouldn’t stop questioning it, or yourself. Once in a while, take a fresh look at your unfiltered data sources, shake up your reporting, and have a different person interpret the alerts in your SOC. Make sure that you haven’t become complacent in your everyday monitoring, because what you see tends to become what you expect to see.

(I would like to thank Sandy "Mouse" Clark at the University of Pennsylvania for her discussions on this topic; she’ll be coming out soon with new research around how assumptions affect security.)

Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.

