
Endpoint | Commentary
Simon Crosby | 7/21/2015 10:30 AM

Time's Running Out For The $76 Billion Detection Industry

The one strategy that can deliver the needle to the security team without the haystack is prevention.

Enterprises spend a mind-boggling $76 billion each year to “protect” themselves from cyber-attacks, but the bad guys keep winning because most protection solutions are based on detection instead of prevention. The 2015 Verizon Data Breach Investigations Report highlighted over 2,100 breaches, and the FBI claims that every major U.S. company has been compromised by the Chinese – whether it realized it or not.

What’s wrong? The answer is the same today as it was in ancient Troy when the Greek army suddenly disappeared, leaving behind an innocent-looking horse that the Trojans willingly brought inside the gates. The enemy had changed shape, avoiding detection. And so it is today: Verizon found that 70 to 90 percent of the malware used in successful breaches last year was unique to the attacked organization. Today’s detection-centric tools mistakenly assume that malware, or the techniques used in an attack, will be reused elsewhere. We read the results in the press, and they aren’t pretty.

Detection is a flawed protection strategy
Detection will fail – with certainty. The proof dates back to Turing’s 1936 work on the Halting Problem and Alonzo Church’s work on undecidable problems: it is impossible to determine with 100 percent certainty whether arbitrary code is malicious.
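
To make the argument concrete, here is a minimal sketch (in Python, with hypothetical function names) of the classic reduction: a perfect malware detector could be used to decide the Halting Problem, which Turing proved undecidable – so no such detector can exist.

```python
# Sketch of the undecidability argument: if a perfect "is this code malicious?"
# oracle existed, it would decide the Halting Problem. Both functions below are
# hypothetical -- the contradiction is the point.

def perfect_detector(program_source: str) -> bool:
    """Assumed oracle: returns True iff running `program_source` does harm."""
    raise NotImplementedError("no such oracle can exist")

def would_halt(harmless_program_source: str) -> bool:
    """Decides the Halting Problem for a harmless candidate program -- impossible."""
    # Wrap the candidate so that a clearly malicious action (a hypothetical
    # payload) runs only if the candidate program finishes.
    wrapper = (
        harmless_program_source
        + "\nerase_every_file_on_disk()  # hypothetical payload, reached only on halt\n"
    )
    # The oracle would flag the wrapper exactly when the candidate halts,
    # so it would solve the Halting Problem -- a contradiction.
    return perfect_detector(wrapper)
```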

Some security vendors claim to have developed “advanced threat detection” or “new math,” but this is deliberately misleading; they are secretly delighted with the status quo. Detection serves their commercial goal of advancing a narrative in which organizations are pitted against sophisticated foes whose subterfuge demands continued diligence and adaptation. They use this narrative to absolve themselves of responsibility when detection fails, and to bolster the marketing appeal of their “next gen” products. There is, we are told, “always a way in” and “no silver bullet.” Homilies don’t help.

Absurdly enough, these same vendors debase the language of security, promising to stop breaches and secure the enterprise – when they cannot. Others, focused on remediation and forensics, sell the equivalent of cyber indulgences, absolving victims of the sin of poor security practices.

Detection fails in two ways – with unexpected consequences:

  • We all understand the obvious (and inevitable) consequence of failing to detect an actual attack – a “false negative” – that lets the bad guy in. An example is an IPS that cannot see inside encrypted TLS web traffic; given that more than 70 percent of attacks use TLS, this is as close an analogy to the Trojan Horse as one could want.
  • Another, more prevalent failure mode is just as bad: state-of-the-art IPS systems bury “true positives” in a haystack of up to 1,000 times as many false alarms. A recent Ponemon study found that security teams investigate only 4 percent of alerts. Teams scurry about remediating systems that were never attacked, losing focus and wasting enormous time and money, and in the fuss may fail to notice signs of an actual attack. The 2013 Target breach is a good example: the alerts fired, but no one responded to them. (See the base-rate sketch after this list.)
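
The haystack problem is simple base-rate arithmetic. The sketch below uses the roughly 1,000-to-1 false-alarm ratio and the Ponemon 4 percent triage figure cited above; the number of real attacks per day is an assumption chosen only for illustration.

```python
# Illustrative base-rate arithmetic. Only the ~1,000:1 false-alarm ratio and the
# 4% triage figure come from the article; the daily attack count is assumed.

true_attacks_per_day = 5                    # assumed number of real intrusions flagged
false_alarms_per_true_positive = 1_000      # the "haystack" ratio cited above

total_alerts = true_attacks_per_day * (1 + false_alarms_per_true_positive)
precision = true_attacks_per_day / total_alerts
alerts_triaged = 0.04 * total_alerts        # Ponemon: ~4% of alerts investigated

print(f"alerts per day:      {total_alerts}")         # 5005
print(f"alert precision:     {precision:.2%}")        # ~0.10%
print(f"alerts triaged (4%): {alerts_triaged:.0f}")   # ~200
# Even with random triage, any given real attack has only a ~4% chance of being
# among the alerts investigated -- the needle stays in the haystack.
```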

Detection fails even as a detection strategy
Building a good detector requires careful tuning with real-world attacks. But in today’s threat landscape, polymorphic and encrypted malware changes shape hourly, and it is impossible to adapt a detector at that speed. Stated mathematically:

“[For malware of size n bytes] …The challenge … is to model a space on the order of 2^(8n) to catch attacks hidden by polymorphism. To cover 30 byte [malware] decoders requires 2^240 potential signatures. For comparison there exist an estimated 2^80 atoms in the universe.”
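
The figures in the quote are easy to reproduce: a 30-byte decoder is 240 bits, so covering every variant would take 2^240 candidate signatures.

```python
# Reproducing the signature-space arithmetic from the quote above.
n_bytes = 30
signature_space = 2 ** (8 * n_bytes)   # 2^240 possible 30-byte decoders
atoms_estimate = 2 ** 80               # the estimate quoted above

print(f"2^240 = {signature_space:.3e}")                     # ~1.8e72
print(f"2^80  = {atoms_estimate:.3e}")                      # ~1.2e24
print(f"ratio = {signature_space // atoms_estimate:.3e}")   # ~1.5e48
```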

Vendors that reposition detection as a tool for finding already-compromised systems and “reducing dwell time” discover that their detection tools are as poor at identifying successful attacks as they are at stopping them.

Detection is a failed strategy
The only viable alternative to detection is to make systems “secure by design.” Network micro-segmentation would have easily defeated the Target attack. Micro-virtualization enables endpoints to hardware-isolate each task that processes untrusted content, defeating each attack automatically. Both are architectures that rigorously enforce the principle of least privilege – a principle long recognized in human security, for example in intelligence work, and more widely in society.
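
To make “secure by design” concrete, here is a minimal default-deny segmentation sketch in Python; the zone names, ports, and policy are hypothetical illustrations of least privilege, not a product configuration or the actual Target network.

```python
# Minimal default-deny (least-privilege) segmentation policy sketch.
# Zone names and ports are hypothetical illustrations.

ALLOWED_FLOWS = {
    # (source zone, destination zone, destination port)
    ("pos_terminals", "payment_switch", 443),
    ("hvac_vendor",   "hvac_mgmt",      8443),   # vendor access never reaches payment systems
    ("workstations",  "web_proxy",      3128),
}

def is_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default deny: only explicitly allowed flows pass; everything else is dropped."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

# A foothold in the vendor-facing zone cannot pivot to the payment segment,
# because no rule grants that path.
print(is_permitted("hvac_vendor", "payment_switch", 443))   # False
```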

The only way to survive in an untrusted world is to enforce least privilege and never trust the untrustworthy. Hardware isolation transforms security: server hypervisors, clouds, and micro-virtualized endpoints can both secure themselves and ensure that there is never any need to trust a detector.

As it turns out, in the context of resilient, self-remediating endpoints, it is possible to eliminate false positives, identifying actual attacks with uncanny precision – in other words, to deliver the needle to the security team, without the haystack.

[Read an opposing view favoring detection over prevention by Josh Goldfarb in Detection: A Balanced Approach For Mitigating Risk.]

Simon Crosby is co-founder and CTO at Bromium. He was founder and CTO of XenSource prior to the acquisition of XenSource by Citrix, and then served as CTO of the Virtualization & Management Division at Citrix. Previously, Simon was a principal engineer at Intel where he led ...
 
