Endpoint

7/21/2015 10:30 AM
Simon Crosby
Commentary

Time’s Running Out For The $76 Billion Detection Industry

The one strategy that can deliver the needle to the security team without the haystack is prevention.

Enterprises spend a mind-boggling $76 billion each year to “protect” themselves from cyber-attacks, but the bad guys keep winning because most protection solutions are based on detection instead of prevention. The 2015 Verizon Data Breach Investigations Report highlighted over 2,100 breaches, and the FBI claims that every major U.S. company has been compromised by the Chinese – whether they realize it or not.

What’s wrong? The answer is the same today as it was in ancient Troy when the Greek army suddenly disappeared, leaving behind an innocent-looking horse that the Trojans willingly brought inside the gates. The enemy had changed shape, avoiding detection. And so it is today: Verizon found that 70 to 90 percent of the malware used in successful breaches last year was unique to the attacked organization. Today’s detection-centric tools mistakenly assume that the malware, or the techniques, used in one attack will be reused elsewhere. We read the results in the press, and it isn’t pretty.

Detection is a flawed protection strategy
Detection will fail – with certainty. The proof dates back to Turing’s 1936 work on the Halting Problem and Alonzo Church’s work on undecidable problems, which together imply that it is impossible to determine with 100 percent certainty whether arbitrary code is malicious.
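For readers who want the intuition behind that claim, here is a minimal Python sketch of the standard contradiction argument. Everything in it is invented for illustration (the detector `is_malicious`, the payload, the names); the point is only that a program can always be constructed to invert a supposedly perfect detector's verdict about itself.

```python
# Minimal sketch (illustration only) of the classic contradiction argument:
# assume a hypothetical, always-correct classifier `is_malicious` exists, then
# describe a program that inverts the classifier's verdict about itself.
# All names here are invented; this is not a real API.

def is_malicious(program_source: str) -> bool:
    """Hypothetical perfect detector: True iff the given program is malicious."""
    raise NotImplementedError("No total, always-correct detector can exist.")

# A program that consults the detector about its own source and then does the
# opposite of whatever the verdict implies:
ADVERSARIAL_SOURCE = """
if is_malicious(MY_OWN_SOURCE):
    pass                      # verdict "malicious" -> behave harmlessly
else:
    run_malicious_payload()   # verdict "benign"    -> misbehave
"""

# Whichever answer is_malicious gives for ADVERSARIAL_SOURCE, the program's
# behavior contradicts it, so no detector can be right about every input.
```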

Some security vendors claim to have developed “advanced threat detection” or “new math” but this is deliberately misleading; they are secretly delighted with the status quo. That’s because detection serves their commercial goals to advance a narrative that organizations are pitted against sophisticated foes whose subterfuge demands continued diligence and adaptation. They use this to absolve themselves of responsibility when detection fails, and to bolster the marketing appeal of their “next gen” products. There is “always a way in” and “no silver bullet.” Homilies don’t help.

Absurdly enough, these same vendors debase the language of security, promising to stop breaches and secure the enterprise – when they cannot. Others, focused on remediation and forensics, sell the equivalent of cyber indulgences to absolve victims of the sin of poor security practices.

Detection fails in two ways – with unexpected consequences:

  • We all understand the obvious (and inevitable) consequence of failing to detect an actual attack – a “false negative” – that lets the bad guy in. An example is an IPS that cannot see inside encrypted TLS web traffic; given that more than 70 percent of attacks use TLS, that is as close an analogy to the Trojan Horse as one could want.
  • Another, more prevalent failure mode is just as bad: state-of-the-art IPS systems bury “true positives” in a haystack of false alarms – up to 1,000 times as many. A recent Ponemon study found that security teams investigate only 4 percent of alerts. Teams scurry about remediating systems that were never attacked, losing focus and wasting enormous time and money, and in the fuss may fail to notice signs of an actual attack; the back-of-the-envelope sketch after this list shows how quickly the arithmetic gets out of hand. Last year’s breach of Target is a good example: the alerts fired, but no one responded to them.
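To see how quickly those ratios swamp a security team, here is a back-of-the-envelope sketch in Python. The daily alert volume is an invented assumption; the 1,000-to-1 false-alarm ratio and the 4 percent investigation rate come from the figures quoted above.

```python
# Back-of-the-envelope sketch using the ratios quoted above; the figures are
# illustrative assumptions, not measurements from any particular SOC.

true_positives_per_day = 10            # assumed number of real attack alerts
false_alarms_per_true_positive = 1000  # "up to 1,000 times as many" false alarms
investigated_fraction = 0.04           # Ponemon: only 4% of alerts investigated

total_alerts = true_positives_per_day * (1 + false_alarms_per_true_positive)
investigated = total_alerts * investigated_fraction

# If analysts pick alerts more or less at random, the expected number of real
# attacks they actually look at is tiny:
expected_real_attacks_reviewed = true_positives_per_day * investigated_fraction

print(f"{total_alerts:,.0f} alerts/day, {investigated:,.0f} investigated, "
      f"~{expected_real_attacks_reviewed:.1f} real attacks reviewed")
# -> 10,010 alerts/day, 400 investigated, ~0.4 real attacks reviewed
```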

Detection is a failed detection strategy (sic)
Building a good detector requires careful tuning with real-world attacks. But in today’s cyber-scape, polymorphic and encrypted malware changes shape hourly, and it is impossible to adapt a detector at the same speed. Stated mathematically:

“[For malware of size n bytes] … The challenge … is to model a space on the order of 2^(8n) to catch attacks hidden by polymorphism. To cover 30-byte [malware] decoders requires 2^240 potential signatures. For comparison there exist an estimated 2^80 atoms in the universe.”
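The arithmetic in the quote is easy to check; this quick Python snippet simply evaluates the 30-byte case, assuming one candidate signature per possible byte sequence.

```python
# Quick check of the scale claimed in the quote: a detector that tries to
# enumerate signatures for every possible n-byte decoder faces 2^(8n) candidates.

decoder_bytes = 30
signature_space = 2 ** (8 * decoder_bytes)   # 2^240 possible 30-byte sequences

print(f"2^(8*{decoder_bytes}) = {signature_space:.3e}")
# -> roughly 1.767e+72 candidate byte sequences; no signature database
#    (and no amount of tuning) can enumerate a space that large.
```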

Vendors that position detection as a tool for finding compromised systems and “reducing dwell time” discover that their detection tools are as poor at identifying successful attacks as they are at stopping them.

Detection is a failed strategy
The only viable alternative to detection is to make systems “secure by design.” Network micro-segmentation, for example, would have easily defeated the Target attack. Micro-virtualization enables endpoints to hardware-isolate each task that processes untrusted content, defeating each attack automatically. The value of an architecture that rigorously enforces the principle of least privilege is widely recognized in the domain of human security – in intelligence work, for example, and more broadly in society.
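As a concrete (and entirely hypothetical) illustration of least privilege at the network layer, the Python sketch below treats micro-segmentation as a default-deny allow-list: the segment names and permitted flows are invented, and a real deployment would express the same policy in firewall, SDN, or hypervisor rules rather than application code.

```python
# Minimal illustration (hypothetical policy, invented segment names) of
# micro-segmentation as default-deny least privilege: a flow is allowed only
# if it is explicitly on the allow-list for its source segment.

ALLOWED_FLOWS = {
    ("pos-terminals", "payment-gateway", 443),   # POS may reach the payment gateway only
    ("hvac-vendors",  "hvac-management", 8443),  # third-party vendors reach HVAC systems only
}

def flow_permitted(src_segment: str, dst_segment: str, dst_port: int) -> bool:
    """Default deny: anything not explicitly allowed is blocked."""
    return (src_segment, dst_segment, dst_port) in ALLOWED_FLOWS

# Lateral movement from a compromised vendor segment toward the POS network
# is simply not a permitted flow, so it never needs to be "detected":
print(flow_permitted("hvac-vendors", "pos-terminals", 445))     # False
print(flow_permitted("pos-terminals", "payment-gateway", 443))  # True
```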

The only way to survive in an untrusted world is to enforce least privilege and never trust the untrustworthy. Hardware isolation transforms security: server hypervisors, clouds, and micro-virtualized endpoints can both secure themselves and ensure that there is never any need to trust a detector.

As it turns out, in the context of resilient, self-remediating endpoints, it is possible to eliminate false positives, identifying actual attacks with uncanny precision – in other words, to deliver the needle to the security team, without the haystack.

[Read an opposing view favoring detection over prevention by Josh Goldfarb in Detection: A Balanced Approach For Mitigating Risk.]

Simon Crosby is co-founder and CTO at Bromium. He was founder and CTO of XenSource prior to the acquisition of XenSource by Citrix, and then served as CTO of the Virtualization & Management Division at Citrix. Previously, Simon was a principal engineer at Intel where he led ...
Comments

KevinF351 (Apprentice) | 8/6/2015, 8:35:57 AM
Good apart from the advertising at the end
Some great ideas and commentary; shame it ended with such a blatant plug for his company’s solution!

RyanSepe (Ninja) | 7/31/2015, 10:53:06 AM
Re: Superscript?
Yes, agreed. Slight oversight. More people than atoms doesn’t make sense.

B_SeeMore (Apprentice) | 7/23/2015, 9:59:54 AM
Superscript?
Your intimidating statistic in the “atoms in the universe” quote is slightly less intimidating without the superscript to mark the exponents (280 atoms in the universe vs. 2^80). ;P

suhasuseless (Apprentice) | 7/22/2015, 11:22:43 AM
good post
cool article... really cool