
6/7/2016
11:00 AM
Doug Gourlay
Commentary

Microsegmentation & The Need For An Intelligent Attack Surface

There is a fundamental difference in the security posture and technology for protecting the White House versus a Social Security office in California. So, too, for the critical apps and systems that are likely targets in your enterprise.

There has been a lot of discussion about the term "attack surface" over the last few years as microsegmentation has become a more commonly understood technology and capability. The problem with most microsegmentation solutions is that they optimize "on reach" at the expense of gathering data and actionable intelligence.

Optimizing "On Reach"

There is a set of microsegmentation technologies, available and under consideration for use today, that optimizes on the lowest common denominator of security. These technologies offer a relatively simple security model applied to as many form factors and variations of applications as possible: containers, VMs, on-premises, cloud, bare metal, and network devices. Optimizing for the broadest possible set of computing platforms trades away the depth of policy necessary for the top tier of applications: those that provide control-point services supporting the entire enterprise.

Microsegmentation systems optimize on reach and attempt to provide a baseline level of security across as many disparate systems as possible, including workloads that reside on everything from bare metal servers and mainframes to virtual machines, containers, cloud providers, and firewalls. The more device types a vendor can support, the more broadly its policies can be applied across a given enterprise.

The tradeoff for this breadth is the depth of the security policy applied to each workload. Most policies in today's microsegmentation systems reside primarily at Layer 2 for admission control, Layer 3 for controlling flow establishment, and Layer 4 for protocol selection.
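
To make that tradeoff concrete, here is a minimal sketch (in Python, with hypothetical addresses and rule data) of the allow-list evaluation these Layer 3/4 policies boil down to. The decision rests entirely on the 5-tuple; nothing inspects what travels over the permitted connection.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class FlowRule:
    """A typical L3/L4 microsegmentation rule: matches network identity only."""
    src_net: str    # e.g. "10.1.2.0/24"
    dst_net: str
    protocol: str   # "tcp" or "udp"
    dst_port: int

def is_flow_permitted(rules, src_ip, dst_ip, protocol, dst_port):
    # Allow-list semantics: permit the flow only if some rule matches the
    # 5-tuple. Nothing here sees the payload, the TLS version, or the
    # credential -- an exploit over an authorized port between authorized
    # hosts passes untouched.
    for rule in rules:
        if (ip_address(src_ip) in ip_network(rule.src_net)
                and ip_address(dst_ip) in ip_network(rule.dst_net)
                and protocol == rule.protocol
                and dst_port == rule.dst_port):
            return True
    return False

rules = [FlowRule("10.1.2.0/24", "10.9.0.0/16", "tcp", 443)]
print(is_flow_permitted(rules, "10.1.2.7", "10.9.3.1", "tcp", 443))  # True
```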

For the last few major breaches, there has been no clear demonstration of how these technologies would have made a difference. The reason: the majority of the unpatched vulnerabilities being exploited sit at the application layer, within well-known ports and over protocols that are authorized between two hosts required to communicate with each other.

Optimizing "On Depth"

Designing a system to optimize on deep, rich policies applied to individual workloads creates a different set of challenges. You have to understand the specific security requirements and the attack surface of a given application (which can, of course, encompass more than one VM/server/storage system).

The main difference is that -- as opposed to "shrinking" the attack surface -- this path focuses on replacing the attack surface altogether. This model creates an intelligent wrapper in which you encapsulate the workload you are protecting.

There are several key capabilities required to encapsulate a workload (minimal sketches of a few of these follow the list):

1) Control Administrative Access:  The worst enemies we have in security are users and admins. Commonplace mistakes such as opening the wrong email have given attackers access behind the network-based policy enforcement points. Admin access controls manifest as full SSH, KVM, and RDP proxies that control management access to the workload.

2) Control Transport Protocols: HTTP and HTTPS are the primary transport protocols for most applications today, and TLS 1.2 is the current ratified standard. The intelligent attack surface ensures that only the current version of TLS, with only strong cipher suites, is supported on external connections, even if the workload itself supports only SSLv2/v3 or TLS 1.0, as is quite common in older .NET and Java frameworks.

3) Control Authentication:  One of the most critical functions of an application is to verify the identity of who or what is talking to it, which generally happens through systems such as RADIUS, TACACS, LDAP, or Microsoft Active Directory. To protect against an advanced threat, it's of paramount importance to ensure that communication with the authentication source takes place over a high-integrity connection using strong encryption, and can't be forced to fall back onto weaker methods (NTLMv1/v2, for instance).

4) Control Storage Access:  Data is generally stolen from the point of processing, as we saw in the Target breach, or from a core repository, as we saw with the Sony emails. Providing controls from the point of application processing to the storage target is key to reinforcing these policies. Given the importance of the data, it is critical to provide a defense-in-depth approach here: policies for access should reside both on the system accessing the storage and on the storage itself, so that a failure or error in policy definition on one side does not precipitate an insecure state.

5) Control Operations: Think about the scope of administration that is enabled by gaining access to a virtualization controller like VMware vCenter, an SDN Controller, software delivery tools like Jenkins, or backup/recovery stores. There is an incredible amount of attack surface and an administrative universe that is often not given effective consideration when planning a cyber-defense and zoning strategy.
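
To illustrate point 2: a minimal sketch, using Python's standard ssl module, of how the wrapper's external listener can be pinned to TLS 1.2 with strong cipher suites regardless of what the legacy workload behind it speaks. The certificate paths and cipher string are hypothetical.

```python
import socket
import ssl

def external_tls_context():
    # Pin the external side to TLS 1.2 with strong cipher suites, even if
    # the workload behind the proxy only speaks SSLv3 or TLS 1.0.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5:!RC4")
    ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # hypothetical paths
    return ctx

# The wrapper terminates hardened TLS externally, then relays the decrypted
# stream to the legacy workload over its private segment.
listener = external_tls_context().wrap_socket(
    socket.create_server(("0.0.0.0", 8443)), server_side=True)
```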
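
For point 3, the downgrade protection can be expressed as a simple gate in front of the authentication source; the mechanism names and allow-list below are hypothetical.

```python
# Hypothetical allow-list: only strong, encrypted mechanisms may reach
# the authentication source.
PERMITTED_AUTH_MECHANISMS = {"KERBEROS", "LDAPS"}

def auth_channel_allowed(mechanism: str, channel_encrypted: bool) -> bool:
    # Refuse fallback: a bind that negotiates a weak mechanism such as
    # NTLMv1/v2, or that arrives over an unencrypted channel, is dropped
    # rather than silently accepted.
    return mechanism.upper() in PERMITTED_AUTH_MECHANISMS and channel_encrypted

print(auth_channel_allowed("NTLMv2", True))    # False -- downgrade refused
print(auth_channel_allowed("kerberos", True))  # True
```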
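
And for point 4, a sketch of the fail-closed, defense-in-depth rule: the accessing system and the storage target each hold an independent policy, and an operation proceeds only when both agree. The policy tables are hypothetical.

```python
# Two independently administered policy layers (hypothetical tables):
# one enforced at the accessing system, one at the storage target.
CLIENT_SIDE_POLICY = {("app-server-01", "records-volume"): {"read"}}
STORAGE_SIDE_POLICY = {("app-server-01", "records-volume"): {"read"}}

def storage_access_allowed(host: str, volume: str, operation: str) -> bool:
    # Fail closed: a missing or mistaken entry in either table denies the
    # operation instead of opening the volume.
    key = (host, volume)
    return (operation in CLIENT_SIDE_POLICY.get(key, set())
            and operation in STORAGE_SIDE_POLICY.get(key, set()))

print(storage_access_allowed("app-server-01", "records-volume", "read"))   # True
print(storage_access_allowed("app-server-01", "records-volume", "write"))  # False
```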

Building Intelligent Sensing into the Abstracted Attack Surface

Gathering and sensing data about a potential attack requires replacing an ACL-based perimeter with one that is aware of the applications and gathers useful information about each transaction. For instance, in a remote management use case, the proxy can capture (a sketch of such a record follows the list):

  • Source Machine Name
  • Username for the credential being passed
  • Validity of the credential -- is this a stolen credential or a user violating policy?
  • Group membership for the Username -- was it an admin credential?
  • Two-Factor Authentication status
  • Last username to log in to the host the session was sourced from
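
A minimal sketch of the per-session record such an application-aware proxy could emit; every field name here is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AdminSessionRecord:
    # Telemetry captured per administrative session by the management proxy.
    source_machine: str
    username: str
    credential_valid: bool       # stolen credential vs. user violating policy
    groups: List[str]            # was it an admin credential?
    two_factor_passed: bool
    last_user_on_source_host: Optional[str]
```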

With this set of information, you can now make an informed decision about a few critical questions (a sketch of this triage logic follows the list):

  • Was this a valid request from an admin who was violating policy by not using a secure admin workstation?
  • Was this a case of a stolen credential being used and the admin account needs to be quarantined and re-credentialed?
  • If the session is properly established, what actions were taken during this administrative session -- what files were transferred?
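
A sketch of that triage, reusing the AdminSessionRecord above; the workstation allow-list and outcome labels are hypothetical.

```python
SECURE_ADMIN_WORKSTATIONS = {"saw-01", "saw-02"}  # hypothetical allow-list

def triage(session: AdminSessionRecord) -> str:
    # Map the gathered telemetry onto the three questions above.
    if not session.credential_valid or not session.two_factor_passed:
        # Likely a stolen or replayed credential: quarantine the account
        # and re-credential the admin.
        return "quarantine-and-recredential"
    if session.source_machine not in SECURE_ADMIN_WORKSTATIONS:
        # Valid admin, but bypassing the secure admin workstation policy.
        return "flag-policy-violation"
    # Properly established: permit, but record actions and file transfers.
    return "permit-and-audit"
```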

Your main goal is to use the application-layer proxies as a method of sensing data about the real-time operations of your cyber-defense strategy. You can then make sense of this data to take action to preserve the integrity of the organization. Here’s how:

For Tier 0 applications throughout your enterprise, take a look at your zoning and supporting policies. Ensure specific protections are applied for Tier 0 that do not exist broadly across your enterprise, and deploy different technologies and processes. You’ll also need the capability to sense critical data types and, with analytics tools, make sense of that data to drive decisions and changes in your operating policies.

There has been a lot of talk about reducing the attack surface, and for some Tier 1 and 2 applications and user-to-server access, that may be appropriate. But for Tier 0, such as your command and control infrastructure or your systems of record, consider removing the attack surface altogether and placing an abstraction layer around the application that provides the actionable intelligence your Infosec team needs to protect your operation.


Doug Gourlay is responsible for all customer-facing business at Skyport Systems and is immersed in the intricacies of securing today's infrastructure against present and future threats. He is an industry veteran with a track record of success spanning 12 years at Cisco Systems ...