Why Kubernetes Clusters Are Intrinsically Insecure (& What to Do About Them)
By following best practices and prioritizing critical issues, you can reduce the chances of a security breach and constrain the blast radius of an attempted attack. Here's how.
Teams new to Kubernetes often deploy clusters in an insecure way by default because they don't know what they don't know. Unless you've got a team of battle-hardened Kubernetes experts, you're bound to run into trouble. For example, it's not always obvious when a Kubernetes deployment is overpermissioned, and often the easiest way to get something working is to give it root access or cluster-admin permissions.
But just because your application is up and running doesn't mean your job is done. If you haven't tightened your security posture by adhering to best practices, it's only a matter of time before you start learning lessons the hard way, whether that's through a denial-of-service (DoS) attack or something more severe. The good news is that Kubernetes comes with built-in security tooling and a growing ecosystem of both open source and commercial solutions. With them, you can build a security strategy that enables rapid development while maintaining a strong security posture.
Stay Up-to-Date
The Kubernetes ecosystem is constantly evolving. But that doesn't just mean new features. It means bugs are squashed and security holes are patched every day. It's critical to stay up-to-date at every level of the stack. Once a vulnerability is announced, it's only a matter of time before someone designs an exploit. There are three kinds of updates to focus on:
1. Kubernetes itself. There's a new minor version of Kubernetes every quarter, and patch releases come out on a semiregular basis. Make sure you're monitoring CVE feeds for any new issues. Even better, many managed Kubernetes providers, such as GKE, offer the ability to automatically upgrade your nodes, which helps ensure you're not running software with known vulnerabilities.
2. Add-ons. You've likely installed some third-party tools such as nginx-ingress or cert-manager, which come with their own vulnerabilities and release cadence. Routinely check these for updates, or use a tool like Nova to monitor them for you.
3. Container image vulnerabilities. Each of the individual container images running in your cluster can have vulnerable software installed. You can run a container scanning solution like Trivy to help catch issues, but the best way to stay ahead is to ensure you're always on the latest version of the image. For images published with semantic versioning, pin only the major and minor version and let the patch version float, so you're always pulling in the latest bug and security fixes (see the sketch after this list).
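To make that concrete, here is a minimal, hypothetical Deployment snippet showing the tag-pinning approach. The workload name is made up and the nginx image is simply a stand-in for any image published with semantic versioning; the detail that matters is the major.minor tag, which keeps pulling in new patch releases.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # Pinning only major.minor (not major.minor.patch) means each fresh
          # pull can pick up the latest patch release and its security fixes.
          image: nginx:1.25
          imagePullPolicy: Always   # re-resolve the tag every time a pod starts
```

The trade-off is reproducibility: with a floating tag, two pods can end up on different patch versions, so teams that need strict repeatability often pin image digests instead and rely on automation to bump them.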
Limiting the Blast Radius
Kubernetes can't secure application code; there's nothing to stop developers from introducing bugs that might grant an attacker access to the host machine or an internal API. However, Kubernetes can place strong limits on the blast radius of an attack. When those limits are in place, security teams can start to look at Kubernetes as an opportunity to improve security rather than as a threat.
Role-based access control (RBAC) is the first line of defense. It decides which Kubernetes resources a particular user or workload can access and what it is allowed to do with them. Some workloads might only need to view application logs, while others need broader permissions to create and delete other workloads. When creating RBAC roles, make sure to adhere to the principle of least privilege: grant only the permissions each workload actually needs.
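As a sketch of what least privilege looks like in practice, the Role and RoleBinding below grant a hypothetical log-collecting service account read access to pods and their logs in a single namespace, and nothing else. All of the names here are illustrative.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader            # hypothetical role name
  namespace: app              # hypothetical namespace
rules:
  # Grant only what the workload needs: read access to pods and their logs.
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: log-reader-binding
  namespace: app
subjects:
  - kind: ServiceAccount
    name: log-collector       # hypothetical service account used by the workload
    namespace: app
roleRef:
  kind: Role
  name: log-reader
  apiGroup: rbac.authorization.k8s.io
```

Because this is a namespaced Role rather than a ClusterRole bound with cluster-admin, compromising the log collector doesn't let an attacker create or delete workloads elsewhere in the cluster.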
Security teams must also pay close attention to the deployment configuration attached to each workload. If a container runs as root, has access to the host's filesystem, or has some other security flaw, an attack can quickly spread throughout the cluster, compromising every workload. Tools like Polaris can help automate these checks.
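The kind of hardening those checks look for can also be written directly into the workload spec. This is a minimal sketch with a made-up name and image; the securityContext fields are the standard Kubernetes settings for refusing root, privilege escalation, and writable filesystems.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                                    # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.0   # hypothetical image
          securityContext:
            runAsNonRoot: true                 # refuse to start if the image runs as root
            allowPrivilegeEscalation: false    # block setuid-style privilege escalation
            readOnlyRootFilesystem: true       # make the container filesystem read-only
            capabilities:
              drop: ["ALL"]                    # drop all Linux capabilities by default
```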
Finally, look into additional lockdown mechanisms like Network Policy (which limits traffic in and out of particular pods) or Workload Identity (which ties Kubernetes service accounts to your cloud provider's identity and access management, such as IAM on Google Cloud or AWS).
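For example, a Network Policy like the hypothetical one below allows a backend's pods to be reached only by frontend pods in the same namespace, and only on one port; on a cluster whose network plugin enforces policies, all other ingress traffic to those pods is dropped.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend     # hypothetical policy name
  namespace: app                   # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend                 # the pods this policy protects
  policyTypes: ["Ingress"]
  ingress:
    # Only pods labeled app=frontend in this namespace may connect, and only on 8080.
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```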
Limiting Network Traffic
A DoS attack is one of the easiest to carry out: an attacker simply floods your servers with traffic, preventing legitimate requests from getting through. One of the strengths of Kubernetes is its ability to autoscale to meet traffic spikes, but scaling up is costly, and it doesn't happen instantaneously. Without the right limits in place, a DoS attack is sure to create some pain for your Ops team. The right ingress policy (including per-IP limits on concurrent connections, on requests per second/minute/hour, and on the size of request bodies) gives you a layer of protection against this type of threat and the costs that come with it. Depending on your ingress provider, these limits can also be configured per application or per path, giving you the flexibility to keep letting certain users or endpoints scale up quickly.
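What this looks like depends on your ingress controller. As one example, the ingress-nginx controller exposes per-client-IP limits through annotations; the hostname, service, and thresholds below are purely illustrative, and other controllers offer similar knobs under different names.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                                               # hypothetical ingress name
  annotations:
    nginx.ingress.kubernetes.io/limit-connections: "20"   # concurrent connections per client IP
    nginx.ingress.kubernetes.io/limit-rps: "10"           # requests per second per client IP
    nginx.ingress.kubernetes.io/proxy-body-size: "1m"     # cap the size of request bodies
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com                               # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web                                 # hypothetical backend service
                port:
                  number: 80
```

Clients that exceed the limits get an error response instead of consuming backend capacity, which blunts both the outage and the autoscaling bill.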
Maintaining Cluster Security
Even though your cluster will inevitably contain a few hidden vulnerabilities, following the guidelines above greatly reduces both the likelihood of a successful attack and the magnitude of the fallout. As with any software environment, you should treat each workload as if it could be compromised and work to contain the damage if it is. But once you've built your cluster and tightened its security, how do you keep it that way? Developers are constantly shipping new code and configuration; how can you be sure it's in line with your policies? Furthermore, how do you enforce those policies across many clusters and lines of business?
Look to a partner or platform to continuously monitor your cluster and enforce security policies. Without the right help, securing Kubernetes clusters can be a manual, time-consuming, and error-prone activity. But with the right help, you can deploy rapidly and without fear.