Dark Reading is part of the Informa Tech Division of Informa PLC


Cloud | Commentary
Doug Gourlay
6/7/2016 11:00 AM

Microsegmentation & The Need For An Intelligent Attack Surface

There is a fundamental difference in the security posture and technology for protecting the White House versus a Social Security office in California. So, too, for the critical apps and systems that are likely targets in your enterprise.

There has been a lot of discussion about the term "attack surface" over the last few years as microsegmentation has become a more commonly understood technology and capability. The problem with most microsegmentation solutions is that they optimize "on reach" at the expense of data and actionable intelligence-gathering.

Optimizing "On Reach"

There is a set of microsegmentation technologies in use and under consideration today that optimize for the lowest common denominator of security. These technologies offer a relatively simple security model applied to as many form factors and variations of applications as possible: containers, VMs, on-premises, cloud, bare metal, and network devices. But optimizing for the broadest possible platform coverage trades away the depth of policy necessary for the top tier of applications: those that provide control-point services supporting the entire enterprise.

Microsegmentation systems optimize on reach and attempt to provide a baseline level of security across as many disparate systems as possible. This includes workloads that currently reside on everything from bare metal servers and mainframes to virtual machines, containers, cloud providers, and firewalls. The more device types a vendor can support, the more broadly its policies can be applied across a given enterprise.

The tradeoff for this breadth comes in the depth of security policy applied to a given workload. Most policies in today's microsegmentation systems reside primarily at Layer 2 for admission control, Layer 3 for controlling flow establishment, and Layer 4 for protocol selection.
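To make the limitation concrete, here is a minimal sketch of the kind of Layer 3/4 rule these systems typically enforce: source network, destination network, protocol, and port, with no visibility into the application payload. The networks and ports are hypothetical examples.

```python
# Sketch of a typical L3/L4 microsegmentation policy check.
# Rules match on network addresses, protocol, and port only --
# they say nothing about what happens inside the connection.
from ipaddress import ip_address, ip_network

# Each rule: (source network, destination network, protocol, port).
# These values are illustrative, not from any real deployment.
RULES = [
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.0/24"), "tcp", 443),
    (ip_network("10.1.0.0/24"), ip_network("10.3.0.0/24"), "tcp", 1433),
]

def flow_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """Return True if any rule admits this flow (default deny)."""
    return any(
        ip_address(src) in s_net
        and ip_address(dst) in d_net
        and proto == r_proto
        and port == r_port
        for s_net, d_net, r_proto, r_port in RULES
    )
```

Note that an exploit delivered over an authorized port between two authorized hosts sails straight through a policy like this, which is exactly the gap described above.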

For the last few major breaches, there has been no clear demonstration of how these technologies would have made a difference. The reason? The majority of the unpatched vulnerabilities being exploited sit at the application layer, within well-known ports and on protocols that are authorized between two hosts that are required to communicate with each other.

Optimizing "On Depth"

Designing a system to optimize on deep, rich policies applied to individual workloads creates a different set of challenges. You have to understand the specific security requirements and the attack surface of a given application (which can, of course, encompass more than one VM/server/storage system).

The main difference is that -- as opposed to "shrinking" the attack surface -- this path focuses on replacing the attack surface altogether. This model creates an intelligent wrapper in which you encapsulate the workload you are protecting.

There are several key capabilities required to encapsulate a workload:

1) Control Administrative Access:  The worst enemies we have in security are users and admins. Commonplace mistakes, such as opening the wrong email, have given attackers access behind the network-based policy enforcement points. Admin access controls manifest in full SSH, KVM, and RDP proxies that control management access to the workload.
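The article doesn't specify how such a proxy decides whom to admit, but a plausible admission check might look like the following sketch. The group names, workstation list, and decision strings are all hypothetical.

```python
# Hypothetical admission check an SSH/KVM/RDP proxy might run
# before brokering a management session to a protected workload.
ADMIN_GROUPS = {"infra-admins"}                # assumed directory groups
APPROVED_WORKSTATIONS = {"saw-01", "saw-02"}   # secure admin workstations

def admit_session(user_groups: list, source_host: str,
                  two_factor_passed: bool) -> str:
    """Grant management access only to an admin, from an approved
    workstation, with two-factor authentication completed."""
    if not ADMIN_GROUPS & set(user_groups):
        return "deny: not an admin"
    if source_host not in APPROVED_WORKSTATIONS:
        return "deny: unapproved workstation"
    if not two_factor_passed:
        return "deny: 2FA required"
    return "allow"
```

The point of putting this logic in a proxy, rather than on the workload itself, is that a compromised workload cannot weaken its own admission policy.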

2) Control Transport Protocols: HTTP and HTTPS are the primary transport protocols for most applications today, and TLS 1.2 is the current ratified standard. The intelligent attack surface will ensure that only the current version of TLS, with only strong cipher suites, is supported on external connections, even if the workload itself supports only SSLv2/v3 or TLS 1.0, as is quite common in older .NET and Java frameworks.
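As one way to picture this, a modern Python TLS termination point in front of a legacy workload could pin the external side to TLS 1.2 or later with forward-secret AEAD ciphers (this uses the standard-library `ssl` module on Python 3.7+; it is an illustration of the principle, not the article's implementation):

```python
# Terminate external connections with TLS 1.2+ and strong ciphers
# only, regardless of what the legacy workload behind the proxy
# supports on its internal leg.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSLv3 / TLS 1.0 / 1.1
ctx.set_ciphers("ECDHE+AESGCM")                # forward secrecy, AEAD only
```

A client offering only SSLv3 or TLS 1.0 fails the handshake at the proxy and never reaches the workload's outdated stack.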

3) Control Authentication:  One of the most critical functions of an application is to verify the identity of who or what is talking to it, which generally happens through systems such as RADIUS, TACACS, LDAP, or Microsoft Active Directory. To protect against an advanced threat, it is paramount to ensure that communication with the authentication source occurs over a high-integrity connection with strong encryption, and can't be forced to fall back to weaker methods (NTLMv1/v2, for instance).

4) Control Storage Access:  Data is generally stolen from the point of processing, as we saw in the Target breach, or from a core repository, as we saw with the Sony emails. Providing controls from the point of application processing to the storage target is key to reinforcing these policies. Given the importance of the data, it is critical to provide a defense-in-depth approach here: policies for access should reside both on the system accessing the storage and on the storage itself, so a failure or error in policy definition on one side does not precipitate an insecure state.
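The fail-closed property described here can be sketched in a few lines: access is granted only when the host-side policy and the storage-side policy independently agree, so a misconfiguration on either side denies rather than permits. The host and volume names are made up for illustration.

```python
# Defense-in-depth sketch: two independently administered policies
# must BOTH allow a (host, volume) pair. Dropping the pair from
# either set fails closed rather than open.
HOST_POLICY = {("app-01", "db-vol-1")}      # enforced on the accessing host
STORAGE_POLICY = {("app-01", "db-vol-1")}   # enforced on the storage target

def storage_access_allowed(host: str, volume: str) -> bool:
    """Allow only if both sides' policies permit the access."""
    pair = (host, volume)
    return pair in HOST_POLICY and pair in STORAGE_POLICY
```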

5) Control Operations: Think about the scope of administration that is enabled by gaining access to a virtualization controller like VMware vCenter, an SDN Controller, software delivery tools like Jenkins, or backup/recovery stores. There is an incredible amount of attack surface and an administrative universe that is often not given effective consideration when planning a cyber-defense and zoning strategy.

Building Intelligent Sensing into the Abstracted Attack Surface

Being able to gather and sense data about a potential attack is a capability that must be delivered by replacing an ACL-based perimeter with one that is aware of the applications and gathers useful information about each transaction. For instance, in a remote management use case:

  • Source machine name
  • Username for the credential being passed
  • Validity of the credential -- is this a stolen credential or a user violating policy?
  • Group membership for the username -- was it an admin credential?
  • Two-factor authentication status
  • Last username to log in to the host the session was sourced from
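One way to think about this telemetry is as a structured record emitted per session by the application-layer proxy. The field names below are illustrative, mirroring the list above rather than any real product schema.

```python
# Hypothetical per-session telemetry record an application-aware
# proxy could emit for a remote management connection.
from dataclasses import dataclass

@dataclass
class AdminSessionRecord:
    source_machine: str        # machine the session originated from
    username: str              # credential being presented
    credential_valid: bool     # False may indicate a stolen credential
    groups: list               # directory group membership of the user
    two_factor_passed: bool    # two-factor authentication status
    last_login_user: str       # last user to log in on the source host
```

Records like this are what make the triage questions in the next section answerable, as opposed to the allow/deny bit an ACL produces.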

With this set of information you can now make an informed decision about a few critical questions:

  • Was this a valid request from an admin who was violating policy by not using a secure admin workstation?
  • Was this a case of a stolen credential being used and the admin account needs to be quarantined and re-credentialed?
  • If the session is properly established, what actions were taken during it -- what files were transferred?
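The questions above amount to a small piece of triage logic. A sketch, with hypothetical outcome strings, might look like this:

```python
# Illustrative triage over the sensed session data: distinguish a
# stolen credential from a policy-violating admin from a clean session.
def triage(credential_valid: bool, is_admin: bool,
           source_is_secure_workstation: bool) -> str:
    if not credential_valid:
        # Likely a stolen credential in use.
        return "quarantine account and re-credential"
    if is_admin and not source_is_secure_workstation:
        # Valid admin, but bypassing the secure admin workstation.
        return "valid admin violating secure-workstation policy"
    # Session is legitimate; audit what it actually did.
    return "allow; audit files transferred during session"
```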

Your main goal is to use the application-layer proxies as a method of sensing data about the real-time operations of your cyber-defense strategy. You can then make sense of this data to take action to preserve the integrity of the organization. Here’s how:

For Tier 0 applications throughout your enterprise, take a look at your zoning and supporting policies. Ensure specific protections are applied for Tier 0 that do not exist broadly across your enterprise, and deploy different technologies and processes. You’ll also need the capability to sense critical data types and, with analytics tools, make sense of that data to drive decisions and changes in your operating policies.

There has been a lot of talk about reducing the attack surface, and for some Tier 1 and 2 applications and user-to-server access, that may be appropriate. But for Tier 0, such as your command and control infrastructure or your systems of record, consider removing the attack surface altogether and placing an abstraction layer around the application that provides the actionable intelligence your Infosec team needs to protect your operation.


Doug Gourlay is responsible for all customer-facing business at Skyport System and is immersed in the intricacies of securing today's infrastructure against present and future threats. He is an industry veteran with a track record of success spanning 12 years at Cisco Systems ... View Full Bio
 
