Today’s companies are experiencing an architectural revolution in the security of their increasingly complex data center environments. One of the newest developments for cloud architectures is the concept of micro-segmentation. First-generation users are leveraging micro-segmentation architectures to deploy security policies across data center networks — separating diverse workloads, collapsing legacy physical zones, and reducing attack surfaces.
In the early phases of this architectural transformation, it’s important to understand where security fits into the micro-segmentation equation and the various options available. Some approaches are legacy-based (e.g., agents and virtualized appliances), while others apply new methods, such as SDN-based overlays and distributed systems architectures.
While all of these approaches have their own merits, the critical issue for decision-makers is whether the approach they choose will address their needs both in the short term and over the long run. The two main factors in this decision are scalability and security capability.
Tipping the scales
The dynamic data center is all about efficiency and scale, but scalability is where virtual firewall solutions reveal their limitations. These solutions require organizations to direct traffic into a local appliance and restrict the number of devices within a single cluster. While they do allow for zone- and workgroup-based separation, they struggle to deploy anything finer than coarse-grained segmentation or to support unrestricted movement of workloads within a cluster.
Other approaches, such as agents, are designed to scale out beyond data center clusters, though the challenges presented by the sheer number of agents requiring synchronization, which can exceed the number of vSwitches or distributed sensors by an order of magnitude, should be carefully assessed.
Protection at its peak
Not all micro-segmentation architectures are created equal, and the variation in security capability has a considerable impact on the data center’s ability to enforce protection and truly understand threat context. Not every company requires full application control within its data center. It is, however, important to be aware of the differences in security capabilities across approaches in order to mitigate future threats and meet new and changing regulatory requirements.
Network overlay and agent solutions offer some degree of protection, but it is generally incomplete. Even basic capabilities, such as the rudimentary application processing needed to set up certain flow entries correctly, become cumbersome. An alternative is a solution based on iptables or conntrack, both of which also have limited security-processing capabilities.
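To make the limitation concrete, here is a minimal sketch (not any product’s implementation; the subnets, ports, and `Flow` type are hypothetical) of the kind of 5-tuple allow-list an iptables/conntrack-style control evaluates. Because the decision uses only network and transport headers, any traffic tunneled over an allowed port is admitted:

```python
# Sketch of an L3/L4 flow-entry match, as used by iptables/conntrack-style
# controls. Hypothetical addresses and policy; illustration only.
import ipaddress
from typing import NamedTuple, Optional

class Flow(NamedTuple):
    proto: str
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

# Allow-list of 5-tuples; None wildcards the ephemeral client port.
ALLOW: list[tuple[str, str, Optional[int], str, int]] = [
    ("tcp", "10.0.1.0/24", None, "10.0.2.10", 443),  # web tier -> app tier
]

def permitted(flow: Flow) -> bool:
    """Layer 3/4 decision only: no visibility into the payload, so any
    application carried over an allowed port is indistinguishable."""
    for proto, src_net, sport, dst_ip, dport in ALLOW:
        if (flow.proto == proto
                and ipaddress.ip_address(flow.src_ip) in ipaddress.ip_network(src_net)
                and (sport is None or flow.src_port == sport)
                and flow.dst_ip == dst_ip
                and flow.dst_port == dport):
            return True
    return False

# Both flows look identical at L3/L4, even if one carries a legitimate
# API call and the other an exfiltration tunnel over port 443.
legit = Flow("tcp", "10.0.1.5", 50123, "10.0.2.10", 443)
tunnel = Flow("tcp", "10.0.1.99", 50999, "10.0.2.10", 443)
print(permitted(legit), permitted(tunnel))  # True True
```

The sketch admits both flows because the 5-tuple is all it can see, which is precisely the gap that application-aware processing is meant to close.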
“Network security-rooted” products, such as firewall appliances and distributed security systems, provide a higher level of control, offering management capabilities and the option to understand application context through App-ID engines. Application-level processing is tremendously important for today’s data centers: many applications use HTTP as a common transport, and it’s essential to understand which application is actually executing over that simplified protocol. Security solutions offering this richer level of processing can also understand behavior at a more granular level, including visibility into file operations and DNS usage, both of which are important indicators of potential malicious behavior.
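A toy classifier illustrates the idea (this is a simplified sketch, not a real App-ID engine; the `api.internal` domain and the label names are invented for the example). Two flows to the same port are told apart by looking at the first bytes of payload, which a pure L3/L4 control cannot do:

```python
# Sketch of application-level classification over HTTP. Hypothetical
# labels and domain suffixes; real App-ID engines are far richer.

def classify_http(payload: bytes) -> str:
    """Return a coarse application label from the first bytes of a flow."""
    text = payload.decode("ascii", errors="replace")
    lines = text.split("\r\n")
    request = lines[0] if lines else ""
    headers = {}
    for line in lines[1:]:
        if ":" in line:
            key, value = line.split(":", 1)
            headers[key.strip().lower()] = value.strip()
    if not request.endswith("HTTP/1.1") and not request.endswith("HTTP/1.0"):
        return "unknown"  # not HTTP at all, despite arriving on an HTTP port
    host = headers.get("host", "")
    # Hypothetical mapping, purely for illustration.
    if host.endswith("api.internal"):
        return "internal-api"
    if headers.get("upgrade", "").lower() == "websocket":
        return "websocket"
    return "generic-web"

print(classify_http(b"GET /v1/orders HTTP/1.1\r\nHost: billing.api.internal\r\n\r\n"))
# internal-api
print(classify_http(b"\x16\x03\x01 not http"))
# unknown
```

The second call shows the other half of the value: payload inspection can flag traffic on an HTTP port that is not HTTP at all, a common sign of tunneling.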
Attacks from within
Security controls can be deployed either outside the infrastructure being protected (infrastructure that could itself be compromised and used to launch attacks), or within the same "trust boundary" (i.e., operating system) as that infrastructure. As we have seen with numerous compromises, once an attacker gains control of an asset, it is common practice to disable its security controls.
Agent-based technologies are exposed to this weakness, and can further affect security posture by adding complexity to workloads. In contrast, overlays, security appliances, and distributed systems are all designed to be deployed outside the trust boundary of data center assets, and are therefore not exposed to this weakness.
As we are just at the beginning of this industry overhaul, another consideration is whether the technology selected to address these data center challenges will still be applicable in two to three years. The good news is that a carefully chosen architecture should be able to accommodate likely scope changes, including the introduction of application containers as a common unit of processing, the need to deploy enhanced security controls (for example, dynamic synthetic attack surfaces or DPI-based controls) scaled out across the data center, changes to the network architecture, and migrations to hybrid and public cloud architectures.
As we enter this new era, with increasing numbers of options and combinations of architectures and security controls, it is important for decision-makers to keep in mind how these solutions work together, especially when delving into the relatively new area of micro-segmentation. Though there is no magic combination for airtight security, and every company has different needs, careful consideration of the options is necessary for companies to protect their most important asset: their data.

Marc has over 30 years of experience in mission-critical infrastructure and software-defined networks. Marc joined vArmour as CTO in February 2015. Prior to this role, he was a Technology Fellow and CTO for Networking and Telecommunications at Goldman Sachs.