Kubernetes is the de facto container management platform in the modern cloud-native world. It makes it possible to develop, deploy, and manage microservices flexibly and scalably. Kubernetes works with various cloud providers, container runtime interfaces, authentication providers, and extensible integration points.
However, Kubernetes still has one major drawback: security. Its integrator approach, running any containerized application on any infrastructure, makes it challenging to build holistic security around Kubernetes and the application stack living on it.
According to Red Hat's 2022 "State of Kubernetes" security report, the majority of Kubernetes users had their delivery halted due to unaddressed security concerns. In addition, over the course of the previous 12 months, almost every Kubernetes user in the study experienced at least one security incident. Therefore, it is fair to say that Kubernetes environments are not secure by default and are open to risks.
This article discusses the top 10 security risks with real-life examples and tips on how to avoid them.
1. Kubernetes Secrets
Secrets are one of the core building blocks of Kubernetes for storing sensitive data like passwords, certificates, or tokens and using them inside containers. There are three critical issues related to Kubernetes secrets:
- Secrets store sensitive data as Base64-encoded strings, which are not encrypted by default. Kubernetes does support encrypting Secret resources at rest, but you need to configure it explicitly. Furthermore, the biggest threat is that any pod in the same namespace, and therefore any application running inside it, can mount and read a Secret unless access is restricted.
- Role-based access control (RBAC) allows you to regulate who is granted access to Kubernetes resources. You need to properly configure RBAC rules so that only the relevant people and applications will have access to secrets.
- Secrets and ConfigMaps are the two primary methods of passing configuration data to running containers. Old and unused Secret or ConfigMap resources create confusion and can leak sensitive data. For instance, if you delete your back-end application deployment but forget to delete the Secret holding your database passwords, any malicious pod could use them in the future.
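To address the first point, encryption at rest for Secrets can be enabled by passing an EncryptionConfiguration file to the API server via its --encryption-provider-config flag. The sketch below is a minimal example; the key name and the placeholder for the key material are illustrative, and you would generate your own 32-byte key:

```yaml
# Minimal EncryptionConfiguration sketch: encrypt Secrets at rest with AES-CBC.
# Replace the placeholder with a randomly generated, Base64-encoded 32-byte key.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1                       # illustrative key name
              secret: <BASE64_32_BYTE_KEY>     # placeholder, do not commit real keys
      - identity: {}                           # fallback for reading legacy plaintext data
```

Note that existing Secrets are only re-encrypted when they are rewritten, so a bulk update (for example, `kubectl get secrets -A -o json | kubectl replace -f -`) is typically needed after enabling this.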
2. Container Images With Vulnerabilities
Kubernetes is a container orchestration platform that distributes and runs containers on the worker nodes. However, it does not inspect the contents of container images for security vulnerabilities or exposures.
Therefore, it is necessary to scan the images before deployment to ensure that only images from trusted registries with no critical vulnerabilities (like remote code execution) will run on the cluster. Container image scanning should also be integrated into CI/CD systems for automation and detecting flaws earlier.
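As a sketch of such CI integration, the following GitHub Actions step uses the open source Trivy scanner to fail the pipeline when critical or high-severity vulnerabilities are found. The image reference and registry are hypothetical, and the action version should be pinned to whatever release you verify:

```yaml
# Hypothetical CI step: block the pipeline on serious image vulnerabilities.
- name: Scan container image
  uses: aquasecurity/trivy-action@master     # pin to a verified release in practice
  with:
    image-ref: registry.example.com/my-app:${{ github.sha }}  # hypothetical image
    severity: CRITICAL,HIGH
    exit-code: "1"     # non-zero exit fails the build when findings exist
```

The same scan can run locally before pushing (`trivy image <image-ref>`), which catches flaws even earlier.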
3. Runtime Threats
Kubernetes workloads, namely containers, run on the worker nodes, where they are controlled by the host operating system at runtime. Permissive policies or vulnerable container images can open backdoors into your whole cluster. Therefore, OS-level runtime protection is required, and the most important defense against runtime threats and vulnerabilities is implementing the principle of least privilege throughout Kubernetes.
Widely accepted open source tools such as seccomp, SELinux, and AppArmor operate at the Linux kernel level to enforce policies and restrict access. These tools are not internal to Kubernetes, and enabling them requires external configuration and effort. To secure Kubernetes in an automated fashion, consider the Kubernetes Security Posture Management (KSPM) approach: KSPM leverages automation tools to detect, remediate, and alert on security, configuration, and compliance issues holistically.
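Some of this kernel-level hardening can be requested directly from the pod spec. The sketch below (pod name and image are hypothetical) applies the runtime's default seccomp profile and drops common privilege paths, which is a reasonable least-privilege baseline for most workloads:

```yaml
# Least-privilege baseline sketch for a pod; names and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true                # refuse to start containers running as root
    seccompProfile:
      type: RuntimeDefault            # apply the container runtime's default seccomp profile
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      allowPrivilegeEscalation: false # block setuid-style privilege escalation
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                 # drop all Linux capabilities
```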
4. Cluster Misconfiguration and Default Settings
The Kubernetes API and its components consist of a complex set of resource definitions and configuration options. Therefore, Kubernetes offers default values for most of its configuration parameters and tries to remove the burden of creating long YAML files.
However, you need to be careful about three critical issues related to cluster and resource configuration:
- Default Kubernetes configurations are helpful, as they try to increase flexibility and agility, but they are not always the most secure options.
- Online examples for Kubernetes resources are helpful to get started, but it is necessary to double-check what these example resource definitions will deploy to your cluster.
- It is customary to make changes to Kubernetes resources with "kubectl edit" commands while working on the clusters. However, if you forget to update the source manifests, those changes will be overwritten by the next deployment, and such untracked modifications can lead to unpredictable behavior.
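As one concrete example of an insecure default, every pod automatically mounts a service account token unless you opt out. If a workload never talks to the Kubernetes API, disabling the mount removes a credential an attacker could steal (the name and namespace below are hypothetical):

```yaml
# Opting out of a permissive default: no API token mounted into pods
# using this service account. Name and namespace are hypothetical.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: demo
automountServiceAccountToken: false   # the default is true
```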
5. Kubernetes RBAC Policies
RBAC is the Kubernetes-native method of managing and controlling authorization to Kubernetes resources. Therefore, configuring and maintaining RBAC policies is essential to secure the clusters from unwanted access.
There are two critical points to consider while working with RBAC policies. First, some RBAC policies are too permissive, such as the built-in cluster-admin role, which can do essentially everything in the cluster. Such roles are often assigned to regular developers for convenience. However, in the event of a security breach, attackers who compromise a cluster-admin account immediately gain full control of the cluster. To avoid this, you should configure RBAC policies for specific resources and assign them to particular user groups.
Second, in general, various environments, like development, testing, staging, and production, exist in the software development life cycle. In addition, there are multiple teams with different focuses, such as developers, testers, operators, and cloud admins. RBAC policies should be assigned correctly for each group and each environment to limit exposure.
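A sketch of this kind of scoping is below: a read-only Role limited to one environment's namespace, bound to a single team. The namespace, group name, and role names are hypothetical; the group would come from your authentication provider:

```yaml
# Hypothetical read-only access for a QA team, scoped to the staging namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: staging-viewer
  namespace: staging
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch"]        # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-staging-viewer
  namespace: staging
subjects:
- kind: Group
  name: qa-team                          # hypothetical group from the auth provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: staging-viewer
  apiGroup: rbac.authorization.k8s.io
```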
6. Network Access
By default in Kubernetes, a pod can connect to other pods and to external addresses outside the cluster, and any other pod inside the cluster can connect to it. Network policies are the Kubernetes-native resources for managing and restricting network access between pods, namespaces, and IP blocks.
Network policies select pods by their labels, so inconsistent or inaccurate labeling can lead to unwanted access. In addition, when the clusters live in cloud providers, the cluster network should also be isolated from the rest of the virtual private cloud (VPC).
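A common pattern is to deny all ingress traffic in a namespace by default and then allow only specific label-selected flows. The namespace, labels, and port below are hypothetical:

```yaml
# Default-deny ingress for every pod in a hypothetical "backend" namespace...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {}            # empty selector matches all pods in the namespace
  policyTypes: ["Ingress"]
---
# ...then explicitly allow only frontend pods to reach the API pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api               # hypothetical label on the API pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only pods labeled app=frontend may connect
    ports:
    - protocol: TCP
      port: 8080
```

Note that NetworkPolicy resources are only enforced when the cluster's network plugin supports them.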
7. Holistic Monitoring and Audit Logging
When you deploy an application to a Kubernetes cluster, it's not sufficient to only monitor the application metrics. You must also watch the Kubernetes cluster's status, cloud infrastructure, and cloud controllers to have a holistic view of the complete stack. It is also important to watch for breaches and detect anomalies, since intruders will be probing your clusters for access through every possible opening.
Kubernetes provides out-of-the-box audit logs for security-related incidents in the cluster. Still, you also need to collect the records from various applications and monitor their health in a central place.
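Audit logging is driven by a policy file passed to the API server via --audit-policy-file. The sketch below is a minimal example of prioritizing security-relevant events: it logs Secret access at the metadata level (so the secret values themselves are never written to the log) and records full request bodies for write operations:

```yaml
# Minimal audit policy sketch: rules are evaluated top to bottom,
# and the first matching rule decides the logging level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata                 # log who touched Secrets, but never their contents
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse          # full bodies for all mutating requests
  verbs: ["create", "update", "patch", "delete"]
- level: None                     # drop everything else (read-only, low-risk traffic)
```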
8. Kubernetes API
The Kubernetes API is the core of the entire system: all internal and external clients connect to and communicate with Kubernetes through it. If you deploy and manage Kubernetes components in-house, you need to be especially careful because the Kubernetes API server and its components are open source tools with potential, and occasionally actual, vulnerabilities. Therefore, you should run the latest stable version of Kubernetes and patch live clusters as early as possible.
If you use a managed Kubernetes service, the control plane is operated by the cloud provider, which applies infrastructure updates and patches automatically. However, in most cases users remain responsible for upgrading the worker nodes. Automation and resource provisioning tools can help you upgrade nodes easily or replace them with new ones.
9. Kubernetes Resource Requests and Limits
In addition to scheduling and running containers, Kubernetes can also limit the resource usage of containers in terms of CPU and memory. Although Kubernetes users mostly neglect them, resource requests and limits are critical for two reasons:
- Security: When pods and namespaces are not restricted, even a single container with a security vulnerability could exhaust your cluster's resources or serve as a foothold to reach sensitive data inside it.
- Cost: When requested resources exceed actual usage, the nodes run out of allocatable capacity. If autoscaling is enabled, this triggers the node pool to grow, and the new nodes inevitably increase your cloud bill.
When the resource requests are calculated and assigned correctly, the whole cluster works efficiently in terms of CPU and memory. In addition, when resource limits are set, both faulty applications and intruders will be limited in terms of resource usage. For instance, if there are no resource limitations, a malicious container could consume nearly all of the resources in the node and make your application unusable.
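Requests and limits are set per container, and a LimitRange can enforce namespace-wide defaults so that no unbounded pod slips through. The values and namespace below are hypothetical starting points to be tuned against observed usage:

```yaml
# Hypothetical namespace-wide defaults: containers deployed without
# explicit requests/limits inherit these values instead of running unbounded.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: backend
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container sets no requests
      cpu: "100m"
      memory: "128Mi"
    default:               # applied when a container sets no limits
      cpu: "500m"
      memory: "512Mi"
```

Containers can of course still declare their own `resources.requests` and `resources.limits` explicitly, which takes precedence over these defaults.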
10. Data and Storage
Although containers are designed to be ephemeral, Kubernetes makes it possible to run stateful containerized applications scalably and reliably. With the StatefulSet resource, you can deploy databases, data analytics tools, and machine-learning applications into Kubernetes quickly. The data will be accessible to the pods as volumes attached to the containers.
However, it is critical to restrict access with policies and labels to avoid unwanted access from other pods in the cluster. In addition, storage in Kubernetes is provided by external systems, so you should consider encrypting critical data in the cluster. If you manage your own storage plugins, you should also check their security parameters to ensure that they are enabled.
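Encryption at rest is often a single parameter on the StorageClass. As one sketch, assuming a cluster on AWS using the EBS CSI driver, the class below makes every volume it provisions encrypted on the provider side (the class name and volume type are hypothetical):

```yaml
# Hypothetical StorageClass: all PersistentVolumes provisioned from it
# are encrypted at rest by the cloud provider. Assumes the AWS EBS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"        # provider-side encryption for every volume created
reclaimPolicy: Delete
allowVolumeExpansion: true
```

StatefulSets then simply reference this class in their volumeClaimTemplates; other CSI drivers expose analogous encryption parameters.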
Kubernetes is the indisputable container management platform for running microservice applications. However, holistic security is still one of its drawbacks, as it is not the core focus of the project. Therefore, you need to take extra steps to make your clusters and applications more secure.