Securing Containers with Zero Trust

A software identity-based approach should become a standard security measure for protecting workloads in all enterprise networks.

Peter Smith, Founder & Chief Executive Officer, Edgewise Networks

January 29, 2020

5 Min Read

Containers have many benefits: easy portability, fewer system requirements, and increased efficiency, just for starters. But these benefits come at a cost: containers rely on extremely complex networking, much of it opaque, with ephemeral and constantly changing network addresses. As a result, it's a huge challenge to secure containers with technologies that depend on trusted IP addresses, such as firewalls or traditional microsegmentation.

Let's take a look at how containers manage networking. When Docker classic was introduced, Docker needed a low-friction way to introduce containers, so it used network address translation (NAT), which rewrites the address information in the IP header during transit in order to remap the address space. NAT simplifies management for IT by hiding network complexity behind the host machine, but it also makes the nuts and bolts of container networking opaque: a container's IP address can differ from its host's and may not even reside in the same subnet.
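The remapping idea can be sketched in a few lines of Python. This is an illustrative model only, not Docker's actual implementation; the NAT table, addresses, and ports are all invented for the example.

```python
# Illustrative model of NAT port mapping: traffic leaving a container is
# rewritten so the outside world only ever sees the host's address.
# All addresses and ports below are made up for the example.

nat_table = {
    # (container_ip, container_port) -> (host_ip, host_port)
    ("172.17.0.2", 80): ("192.0.2.10", 8080),
    ("172.17.0.3", 443): ("192.0.2.10", 8443),
}

def translate_outbound(src_ip: str, src_port: int) -> tuple:
    """Rewrite a container's source address to the host's public address."""
    return nat_table[(src_ip, src_port)]

# A firewall on the far side sees only the host address, never 172.17.0.2.
print(translate_outbound("172.17.0.2", 80))  # -> ('192.0.2.10', 8080)
```

The point of the sketch is the asymmetry it creates: policy written outside the host can only reason about the host's address, while the container identities behind it are invisible.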

Another method, called bridging, is more transparent. In this method, everything acts as if it has an IP address in the same network — even though some things are hosts, others are containers, and containers may be moving between hosts — but the underlying network complexity is visible to IT.

In addition, many containers use overlay networks. This creates a distributed network that sits on top of host-specific networks, which enables containers to communicate easily, as if they were right next to one another, while the infrastructure moves them around to different hosts. It's similar to what VMware NSX did for virtualization infrastructure.

The key takeaway is that container networking is very pluggable and customizable, but its variability and complexity make applying firewall policy based on network addresses very hard to do.

Firewalls
IT is no longer static, as it was in the 1980s and 1990s. Containers are placed dynamically and automatically by the infrastructure, and if the load changes or a host crashes, the infrastructure will place that container somewhere else. IT won't know what address to use for a firewall rule until the infrastructure places it somewhere.

For network-address-based firewall policies to work, they need to be automatically computed in real time, and that's extremely complex. We're nowhere near being able to do this. Infrastructure changes occur in milliseconds, while policies can take hours to change, which means firewall policies will always fall behind. IT is forced to create overly permissive security policies to deal with the rapidly changing nature of network addresses within containers.
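A toy Python illustration of the staleness problem described above. The rule set, addresses, and rescheduling event are invented; the point is only that a rule keyed by IP address stops matching the moment the orchestrator moves the workload.

```python
# Toy illustration: a firewall rule keyed by IP address goes stale the
# moment the infrastructure re-places the container. All values invented.

allow_rules = {"10.0.1.5"}   # rule written when the container lived here

def firewall_allows(src_ip: str) -> bool:
    """Address-based check: allow only if the source IP is on the list."""
    return src_ip in allow_rules

container_ip = "10.0.1.5"
assert firewall_allows(container_ip)       # the policy works today

# Host crashes; the orchestrator reschedules the container elsewhere.
container_ip = "10.0.7.42"
assert not firewall_allows(container_ip)   # same workload, now blocked
```

The failure mode cuts both ways: the legitimate workload is blocked at its new address, and whatever workload later receives 10.0.1.5 inherits its permissions, which is exactly why teams fall back to overly permissive rules.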

Lateral Movement and the Complexity of Network Security Policies
Let's say a cybercriminal has exploited a host's secure shell daemon and wants to access a SQL database. From the perspective of a firewall, all it would see is a packet coming from that host, a machine it has been told to trust. It will allow that packet, which in turn lets attackers exfiltrate data, encrypt the data, or use SQL itself to move further across the network toward their target.

Now let's add a second container to the host. In a Docker classic environment, all the containers are network-address translated to look like the host, so it's impossible to determine where the traffic originated. In a bridging scenario, there are multiple ways to impersonate the Java microservice inside the container. And as with other network plug-ins, the Linux machine serving as the host has a large network attack surface: system calls, admin tools, special-purpose file systems, and special-purpose network protocols that communicate with the kernel itself. Any of these can be compromised to allow activity that the firewall policy never intended to allow.

If the purpose of a policy is to only allow this specific Java microservice to communicate with a SQL database, in a firewall model, this all has to be transformed into a long series of network addresses, which have to change on the fly as the network infrastructure itself changes. But what if, instead of translating these workloads into addresses, we create policies based on the identities of the workloads themselves?

In this approach, each workload is assigned an immutable, unique identity based on dozens of properties of the software, host, or device itself, such as a SHA-256 hash of a binary, the UUID of the BIOS, or a cryptographic hash of a script. In this way, we can not only separate our policies from the constantly changing network layer but also ensure secure end-to-end connectivity, because we'll know exactly what is communicating on both sides. Even better, because the identity is based on intrinsic attributes, this method prevents spoofed or altered software, devices, and hosts from communicating.
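A minimal sketch of deriving such an identity in Python, using the standard hashlib library. The specific attributes combined here (binary hash plus a host UUID) are illustrative assumptions; a real product would fold in many more properties.

```python
import hashlib

def binary_fingerprint(path: str) -> str:
    """SHA-256 of the executable on disk: changes if the binary is altered."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large binaries don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def workload_identity(binary_hash: str, host_uuid: str) -> str:
    """Combine intrinsic properties into one stable identity string.

    Illustrative: a real system would hash dozens of attributes of the
    software, host, and device, not just these two.
    """
    return hashlib.sha256(f"{binary_hash}:{host_uuid}".encode()).hexdigest()
```

Because the identity is a function of the workload's own bytes, a tampered binary produces a different fingerprint and therefore a different identity, with no dependence on where in the network the workload happens to be running.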

Through the use of identity, we can also go beyond firewalls, which were designed to protect the perimeter of a network, to enable a zero-trust environment. (Editor's note: The author's company uses a zero-trust approach to microsegmentation.) In this model, all network traffic is treated as hostile, and only authorized hosts, devices, or software are allowed to communicate with specific workloads. If software or a service inside a container is compromised, firewalls won't prevent it from moving laterally across the network to do further harm, because they depend on network addresses, which are ephemeral and rapidly changing in container environments. In a zero-trust environment based on identity, we can prevent compromised workloads from communicating because their identities will no longer be recognized.
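The default-deny model described above can be sketched as a simple allowlist of identity pairs. The identity strings and the approved pairing are invented for illustration; in practice they would be cryptographic fingerprints like those discussed earlier.

```python
# Default-deny sketch: traffic is allowed only when both endpoints present
# identities on the approved list. Identity strings are illustrative.

authorized_pairs = {
    ("id-java-microservice", "id-sql-database"),
}

def zero_trust_allows(src_identity: str, dst_identity: str) -> bool:
    """Deny by default; permit only explicitly approved identity pairs."""
    return (src_identity, dst_identity) in authorized_pairs

assert zero_trust_allows("id-java-microservice", "id-sql-database")
# A tampered binary hashes to a different identity and is blocked,
# no matter which host or IP address it is speaking from.
assert not zero_trust_allows("id-tampered-binary", "id-sql-database")
```

Note the contrast with the firewall model: the policy never mentions an address, so rescheduling a container changes nothing, while altering a workload's bytes changes everything.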

Through the use of identity-based policies, security teams can finally secure autoscaling environments such as containers and stop threats from moving laterally from one host to another. A software identity-based approach (such as zero trust) should become a standard security measure for protecting workloads in all enterprise networks, whether on-premises, in the cloud, or in containers.


About the Author

Peter Smith

Founder & Chief Executive Officer, Edgewise Networks

Peter Smith, Edgewise Founder and CEO, is a serial entrepreneur who built and deployed Harvard University's first NAC system before it became a security category. Peter brings a security practitioner's perspective to Edgewise with more than ten years of expertise as an infrastructure and security architect of data centers and customer-hosting environments for Harvard University, Endeca Technologies (Oracle), American Express, Fidelity UK, Bank of America and Nike. Most recently, Peter was on the founding team at Infinio Systems where he led product and technology strategy.

