Ever since they burst onto the scene, containers have been legitimately hailed as a very efficient means to deploy applications onto servers. Containers, such as those based on the Docker open-source standards, consume fewer resources than virtual machines, and containers are easier to design and faster to instantiate and provision.
Unlike VMs, however, containers aren't 100% isolated from the underlying host operating system, which is most commonly Linux or Windows Server, or from drivers or other applications on the server.
Consider VMs, which are an older technology.
A VM is a complete virtualized server that's assigned disk space, processor cycles, and I/O resources by software called a hypervisor. Within the VM is everything you'd find on a real server: an operating system, device drivers, applications, configuration files, and network connections.
In other words, you've got a stack that -- from the bottom up -- is the bare metal, the server's host operating system, the hypervisor, and then one or more virtual machines, each with its own operating system, drivers and applications.
By contrast, everything in a container shares the underlying host operating system, device drivers and some configuration files. Instead of a hypervisor, there's a Docker Daemon -- if you're using Docker, the most popular containerization system -- which provisions one or more containers. Each container holds only applications. Those applications rely upon the host operating system and drivers, which they share with the other containers running on the same server.
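You can see this kernel sharing directly from the Docker command line: a container reports the same kernel release as the host, because there is no guest kernel. A minimal sketch, assuming Docker is installed and the lightweight `alpine` image is available:

```shell
# On the host: print the running kernel release
uname -r

# Inside a container: the same kernel release, because the
# container shares the host's kernel rather than booting its own
docker run --rm alpine uname -r
```

Both commands print an identical version string -- tangible proof that a container is a process on the host, not a separate machine.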
The container benefit: lighter weight
If you have 20 Linux virtual machines on a Linux server, you're using memory and CPU resources to run 21 instances of Linux -- 20 for the VMs, and one for the host. It takes time to start up all those Linux instances, and you're wasting a lot of resources on overhead.
On the other hand, all those Linux VMs are isolated from each other -- in fact, they could even be different versions of Linux. In the VM model, that’s totally fine.
If you have 20 containers on a Linux server, by contrast, you only have one copy of Linux running. Starting up a container is fast and consumes far fewer resources. There's only one Linux kernel, and one set of shared libraries.
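That startup advantage is easy to measure yourself. A quick, hedged sketch (again assuming Docker and the `alpine` image): timing a container that runs a single trivial command and exits typically shows well under a second of wall-clock time, since no operating system has to boot.

```shell
# Time how long it takes to create, start, run, and tear down a
# container -- compare that with booting a full Linux VM
time docker run --rm alpine true
```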
However, it is possible for security problems in one container to leak out and affect other containers or their applications.
The VM benefit: stronger isolation
To get geeky for a minute: Technology in modern microprocessors, host operating systems (Linux and Windows), and hypervisors (VMware ESX, Citrix XenServer, and Microsoft Hyper-V) provides hardware-based isolation between virtual machines. That protection is arranged in concentric rings: each ring is protected from higher-numbered rings, with Ring 0 in the center, walled off from applications.
In a virtual machine system, the host operating system's kernel runs in Protection Ring 0 -- which means nothing can get to it. The hypervisor runs in Ring 1. Individual virtual machines run in Ring 2 -- and thus can't get to the hypervisor inside Ring 1, or to the host operating system either.
What's more, the hypervisor can use its Ring 1 privileges to enforce rules preventing one VM from accessing another VM's memory, applications or resources.
Things aren't equally secure in the container world, since the Docker Daemon isn't a Ring 1 hypervisor, but rather, is simply a Ring 2 application. There's nothing in the hardware, therefore, that can completely block one container from making changes to the underlying server, or from accessing other containers’ memory, storage, or settings. There are software protections, yes, but they’re not impenetrable.
How to secure containers
The security in a container-based server should be considered -- in my opinion -- as appropriate for "friends and family": You should know and trust all the applications running in containers on that server.
Yet you don't need to know or trust the applications running in other virtual machines on the server, which is why cloud hosting companies use VMs, not containers, to isolate customers' software and data.
There are ways to harden containers to make them less vulnerable, and they come down to a few common approaches. First, shrink the attack surface of the containerized software, so that if it is attacked, there's minimal danger of data leakage.
Second, strictly control access to containers, and if necessary, isolate particularly sensitive containers on their own servers.
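Docker itself exposes several of these hardening controls as `docker run` flags. A sketch of a locked-down launch, where `myapp` is a hypothetical image name -- the flags are real Docker options, but which ones your application tolerates is something you'll need to test:

```shell
# --read-only                        mounts the container filesystem read-only
# --cap-drop ALL                     drops all Linux capabilities, then...
# --cap-add NET_BIND_SERVICE         ...re-adds only what the app needs
# --security-opt no-new-privileges   blocks privilege escalation via setuid
# --memory / --pids-limit            cap the container's resource consumption
docker run -d \
  --read-only \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --memory 256m \
  --pids-limit 100 \
  myapp
```

None of this makes a container as isolated as a VM, but it sharply narrows what a compromised application can do to the host or to its neighbors.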
Be sure to research the container system that you're using, and the underlying host operating system. For example, those running containers on Red Hat Linux should look at the company's "Ten Layers of Container Security" document. Other must-reads are Docker's "Introduction to Container Security" and Microsoft's "Securing Docker Containers in Azure Container Service."
Containers are the fastest, most efficient way to deploy applications into the cloud -- and are much more resource-efficient than virtual machines. The trade-off is that containers aren't as secure as virtual machines. Use containers with that in mind, and you'll be fine.
— Alan Zeichick is principal analyst at Camden Associates, a technology consultancy in Phoenix, Arizona, specializing in enterprise networking, cybersecurity, and software development. Follow him @zeichick.