Microsegmentation & The Need For An Intelligent Attack Surface
There is a fundamental difference in the security posture and technology for protecting the White House versus a Social Security office in California. So, too, for the critical apps and systems that are likely targets in your enterprise.
There has been a lot of discussion about the term "attack surface" over the last few years as microsegmentation has become a more commonly understood technology and capability. The problem with most microsegmentation solutions is that they have optimized "on reach" at the expense of gathering data and actionable intelligence.
Optimizing "On Reach"
There is a set of microsegmentation technologies available and being considered for use today that optimize on the lowest common denominator of security. These technologies offer a relatively simple security model applied to as many form factors and variations of applications as possible: containers, VMs, on-premises and cloud deployments, bare metal, and network devices. In optimizing for as many computing platforms as possible, a set of tradeoffs is made against the depth of policy necessary for the top tier of applications: those that provide control-point services supporting the entire enterprise.
Microsegmentation systems optimize on reach and attempt to provide a baseline level of security across as many disparate systems as possible. This includes workloads that currently reside on everything from bare metal servers and mainframes to virtual machines, containers, cloud providers, and firewalls. The more device types a vendor can support, the more broadly its policies can be applied across a given enterprise.
To achieve this breadth, depth is traded away: most policies in today's microsegmentation systems reside primarily at Layer 2 for admission control, Layer 3 for controlling flow establishment, and Layer 4 for port and protocol selection.
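As a minimal illustration of that depth (the struct and field names below are hypothetical, not any vendor's policy model), everything such a rule can express is an address pair, a port, and a protocol:

```go
package main

import "fmt"

// A hypothetical representation of a reach-optimized microsegmentation rule.
// Everything it can express lives at Layers 2-4: who may talk to whom, on
// which port, over which protocol. Nothing here describes what the
// application actually does once the flow is allowed.
type SegmentationRule struct {
	SrcSegment string // e.g., "web-tier"
	DstSegment string // e.g., "db-tier"
	Protocol   string // "tcp" or "udp"
	DstPort    int    // well-known port the flow is pinned to
	Action     string // "allow" or "deny"
}

func main() {
	rule := SegmentationRule{
		SrcSegment: "web-tier",
		DstSegment: "db-tier",
		Protocol:   "tcp",
		DstPort:    1433, // an authorized, well-known database port
		Action:     "allow",
	}
	// The rule admits the flow; a malicious payload riding on that same
	// authorized port is invisible at this layer.
	fmt.Printf("%+v\n", rule)
}
```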
For the last few major breaches, there has been no clear demonstration of how these technologies would have made a difference. The reason? The majority of the unpatched vulnerabilities being exploited are at the application layer, on well-known ports and over protocols that are authorized between two hosts that are required to communicate with each other.
Optimizing "On Depth"
Designing a system to optimize on deep, rich policies applied to individual workloads creates a different set of challenges. You have to understand the specific security requirements and the attack surface of a given application (which can, of course, encompass more than one VM/server/storage system).
The main difference is that -- as opposed to "shrinking" the attack surface -- this path focuses on replacing the attack surface altogether. This model creates an intelligent wrapper in which you encapsulate the workload you are protecting.
There are several key capabilities required to encapsulate a workload (illustrative code sketches of the first four follow the list):
1) Control Administrative Access: The worst enemies we have in security are users and admins. Commonplace mistakes such as opening the wrong email have given attackers access behind the network-based policy enforcement points. Admin access controls manifest as full SSH, KVM, and RDP proxies that control management access to the workload.
2) Control Transport Protocols: HTTP and HTTPS are the primary transport protocols for most applications today. TLS 1.2 is the current ratified standard. The intelligent attack surface ensures that only the current version of TLS, with only strong cipher suites, is offered on external connections, even if the workload itself supports only SSLv2/v3 or TLS 1.0, as is quite common in older .NET and Java frameworks.
3) Control Authentication: One of the most critical functions of an application is to verify the integrity of who or what is talking to it, and that generally happens with systems such as RADIUS, TACACS, LDAP, or MS Active Directory. To protect against an advanced threat, it is paramount to ensure that communication with the authentication source happens over a high-integrity connection with strong encryption and cannot be forced to fall back to weaker methods (NTLMv1/v2, for instance).
4) Control Storage Access: Data is generally stolen either from the point of processing, as we saw in the Target breach, or from a core repository, as we saw with the Sony emails. Providing controls from the point of application processing to the storage target is key to reinforcing the policies. Given the importance of the data, it is critical to provide a defense-in-depth approach here: policies for access should reside both on the system accessing the storage and on the storage itself. A failure of policy or an error in policy definition on one side should not precipitate an insecure state.
5) Control Operations: Think about the scope of administration that is enabled by gaining access to a virtualization controller like VMware vCenter, an SDN controller, software delivery tools like Jenkins, or backup/recovery stores. These systems represent an enormous attack surface and an administrative universe that is often not given effective consideration when planning a cyber-defense and zoning strategy.
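For capability 1, a full SSH, KVM, or RDP proxy terminates and re-establishes the management session itself so it can see usernames, commands, and file transfers. As a much simpler sketch of the brokering idea (the listen port, admin subnet, and workload hostname are assumptions, and Go is used purely for illustration), a TCP-level broker that admits and records management sessions might look like this:

```go
package main

import (
	"io"
	"log"
	"net"
)

// All names here are illustrative assumptions: the approved admin subnet,
// the broker's listen port, and the protected workload's SSH endpoint.
const (
	adminSubnet = "10.10.0.0/24"
	listenAddr  = ":2222"
	workloadSSH = "workload.internal:22"
)

func main() {
	_, allowed, err := net.ParseCIDR(adminSubnet)
	if err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", listenAddr) // admins connect here, never to the workload directly
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go broker(conn, allowed)
	}
}

// broker admits a management session only from an approved admin workstation
// and records every attempt -- the "sensing" half of the control.
func broker(client net.Conn, allowed *net.IPNet) {
	defer client.Close()
	src := client.RemoteAddr().(*net.TCPAddr).IP
	if !allowed.Contains(src) {
		log.Printf("DENIED management session from %s", src)
		return
	}
	upstream, err := net.Dial("tcp", workloadSSH)
	if err != nil {
		log.Printf("upstream unavailable: %v", err)
		return
	}
	defer upstream.Close()
	log.Printf("admin session %s -> %s", src, workloadSSH)
	go io.Copy(upstream, client)
	io.Copy(client, upstream)
}
```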
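For capability 2, one common pattern is a TLS-terminating reverse proxy placed in front of the legacy workload. The minimal Go sketch below uses assumed hostnames and certificate file names; it offers external clients nothing weaker than TLS 1.2 with strong cipher suites while allowing the internal hop to negotiate the older TLS the legacy framework supports (Go cannot speak SSLv2/v3 at all, so the sketch assumes the internal hop speaks at least TLS 1.0):

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical legacy workload that only speaks down-level TLS internally.
	backend, err := url.Parse("https://legacy-app.internal:8443")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)
	// The internal hop is allowed to negotiate older TLS for the legacy framework
	// (an internal CA trusted by this host is assumed for the backend certificate).
	proxy.Transport = &http.Transport{
		TLSClientConfig: &tls.Config{MinVersion: tls.VersionTLS10},
	}

	srv := &http.Server{
		Addr:    ":443",
		Handler: proxy,
		// External clients are offered nothing weaker than TLS 1.2 with strong suites.
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
			CipherSuites: []uint16{
				tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			},
		},
	}
	// wrapper.crt / wrapper.key are placeholder names for the external certificate pair.
	log.Fatal(srv.ListenAndServeTLS("wrapper.crt", "wrapper.key"))
}
```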
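For capability 3, the key property is that the channel to the authentication source is strongly encrypted and fails closed rather than falling back to something weaker. A minimal sketch, assuming a hypothetical LDAPS endpoint name, that establishes only such a channel (the directory bind itself is out of scope here):

```go
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// Hypothetical directory endpoint; 636 is the LDAPS port.
	const authSource = "dc01.corp.internal:636"

	// Reach the authentication source only over a strongly encrypted channel.
	conn, err := tls.Dial("tcp", authSource, &tls.Config{
		MinVersion: tls.VersionTLS12,
		ServerName: "dc01.corp.internal",
	})
	if err != nil {
		// Fail closed: no retry on the plaintext port, no downgrade to weaker methods.
		log.Fatalf("refusing to reach the authentication source over a weaker channel: %v", err)
	}
	defer conn.Close()
	log.Printf("authentication channel established (TLS 0x%x)", conn.ConnectionState().Version)
	// The actual directory bind would be carried over conn from here.
}
```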
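For capability 4, the defense-in-depth point is that access is granted only when both independent policy layers agree, and any error on either side fails closed. A small illustrative sketch with hypothetical policy functions and names:

```go
package main

import (
	"errors"
	"fmt"
)

// Two independent, hypothetical policy layers: one evaluated on the workload
// doing the processing, one evaluated on (or in front of) the storage target.
func workloadPolicy(target, op string) (bool, error) {
	return target == "orders-volume" && op == "read", nil
}

func storagePolicy(source, op string) (bool, error) {
	return source == "payment-app-01" && op == "read", nil
}

// authorize grants access only when both layers agree; a policy error or a
// denial on either side fails closed rather than opening the path.
func authorize(source, target, op string) error {
	ok, err := workloadPolicy(target, op)
	if err != nil || !ok {
		return errors.New("denied by workload-side policy")
	}
	ok, err = storagePolicy(source, op)
	if err != nil || !ok {
		return errors.New("denied by storage-side policy")
	}
	return nil
}

func main() {
	fmt.Println(authorize("payment-app-01", "orders-volume", "read")) // <nil>
	fmt.Println(authorize("compromised-host", "orders-volume", "read"))
}
```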
Building Intelligent Sensing into the Abstracted Attack Surface
Being able to gather and sense data about a potential attack is a capability that must come from replacing an ACL-based perimeter with one that is application-aware and gathers useful information about each transaction. For instance, in a remote-management use case, the wrapper can capture:
Source Machine Name
Username for the credential being passed
Validity of the credential -- is this a stolen credential or a user violating policy?
Group membership for the username -- was it an admin credential?
Two-Factor Authentication status
Last username to log in to the host the session was sourced from
With this set of information you can now make an informed decision about a few critical questions (a rough triage sketch in code follows the list below):
Was this a valid request from an admin who was violating policy by not using a secure admin workstation?
Was this a case of a stolen credential being used and the admin account needs to be quarantined and re-credentialed?
If the session was properly established, what actions were taken during this administrative session -- what files were transferred?
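As a rough sketch of how those signals might be recorded and triaged (the field names and decision rules below are purely illustrative assumptions, not a product's data model):

```go
package main

import "fmt"

// AdminSessionEvent is a hypothetical record of what an application-layer
// proxy can observe for a single remote-management attempt.
type AdminSessionEvent struct {
	SourceMachine   string
	Username        string
	CredentialValid bool   // did the credential check out against the directory?
	IsAdminGroup    bool   // group membership lookup on the username
	TwoFactorPassed bool   // status of the second factor
	LastLoginUser   string // last username to log in to the source host
	SourceIsSAW     bool   // was the session sourced from a secure admin workstation?
}

// triage is illustrative only: the same observations drive very different responses.
func triage(e AdminSessionEvent) string {
	switch {
	case !e.CredentialValid:
		return "deny: invalid credential"
	case e.LastLoginUser != e.Username:
		return "suspected stolen credential: quarantine and re-credential the account"
	case e.IsAdminGroup && !e.SourceIsSAW:
		return "valid admin violating policy: not using a secure admin workstation"
	case !e.TwoFactorPassed:
		return "deny: second factor missing"
	default:
		return "allow and record the session, including any file transfers"
	}
}

func main() {
	fmt.Println(triage(AdminSessionEvent{
		SourceMachine:   "laptop-042",
		Username:        "dbadmin",
		CredentialValid: true,
		IsAdminGroup:    true,
		TwoFactorPassed: true,
		LastLoginUser:   "dbadmin",
		SourceIsSAW:     false,
	}))
}
```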
Your main goal is to use the application-layer proxies as a method of sensing data about the real-time operations of your cyber-defense strategy. You can then make sense of this data to take action to preserve the integrity of the organization. Here’s how:
For Tier 0 applications throughout your enterprise, take a look at your zoning and supporting policies. Ensure specific protections are applied for Tier 0 that do not exist broadly across your enterprise, and deploy different technologies and processes. You’ll also need the capability to sense critical data types and, with analytics tools, make sense of that data to drive decisions and changes in your operating policies.
There has been a lot of talk about reducing the attack surface, and for some Tier 1 and 2 applications and user-to-server access, that may be appropriate. But for Tier 0, such as your command and control infrastructure or your systems of record, consider removing the attack surface altogether and placing an abstraction layer around the application that provides the actionable intelligence your Infosec team needs to protect your operation.