Over the past decade, the acceleration and automation capabilities provided by cloud adoption have brought significant business advantages. In part one of our series, "Security Policy Management in the Cloud," I addressed the requirements for centralizing security policy management across hybrid cloud environments. This article focuses on the requirements for security policy management in Kubernetes.
Today, the rate of technology improvement available to the enterprise is relentless. Adopting advancing technologies can prove vital to overall business health and profitability. However, onboarding these technologies in a nondisruptive manner can be expensive and challenging for IT and security teams.
One such technology advancement with seemingly endless promise is Kubernetes (sometimes referred to as K8s), a modern distributed architecture platform for running applications as containers. Kubernetes is designed to run key business applications and services dynamically and efficiently at scale, with built-in high availability, self-healing, and advanced automation, along with the operational efficiencies needed to meet demanding service-level agreements. The ability to run containers and clusters everywhere, not just in the cloud or on-premises, is accelerating adoption with new use cases every day.
All the positives gained by adopting this disruptive technology, however, are often overshadowed by new risk and security concerns. To successfully address these concerns around new technology like Kubernetes, lessons can be learned from early adopters and from best practices shared by the large and growing community of users on sites such as CNCF.io and Kubernetes.io.
Kubernetes is complex and follows architecture and design principles not commonly found in IT organizations. The lack of an existing security posture, along with the need for significant investment, introduces considerable risk when adopting new, complex technologies like K8s. A common best practice that lowers risk and speeds implementation is to start with K8s-as-a-service or a managed service from a provider charged with achieving acceptable enterprise-grade security and service levels.
It's important to have a defined security and network posture for how IT operations will organizationally align with this new technology. It is critical to define which teams, across security, network, operations, and application owners, own support for these new platforms from Day 2 onward.
Over the past year, changes and contributions in the Kubernetes community have shown signs of the technology maturing in ways that align better with IT organizations' existing skills and tools. One example is Network Policy V2, which begins to define a separation between clusterwide and application-level policies. That separation maps more naturally onto traditional security and developer teams, rather than forcing both tiers onto one team or the other. Additionally, the upstream community has announced the deprecation of Pod Security Policies, which were handled at the individual pod level by developers, in favor of a more scalable model where policy is applied and managed centrally.
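The application-level tier described above is the kind of policy a developer team would own. As a minimal sketch, the following Python snippet builds such a policy as a plain dict matching the `networking.k8s.io/v1` NetworkPolicy schema; the namespace and label names are illustrative assumptions, not taken from any real deployment:

```python
# Sketch: an application-level NetworkPolicy a developer team might own,
# expressed as a dict matching the networking.k8s.io/v1 schema.
# Namespace and label names are illustrative assumptions.

def app_network_policy(namespace, app_label, allowed_client_label):
    """Allow ingress to pods labeled `app=app_label` only from pods labeled
    `app=allowed_client_label`; all other ingress is implicitly denied."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {
            "name": f"allow-{allowed_client_label}",
            "namespace": namespace,
        },
        "spec": {
            "podSelector": {"matchLabels": {"app": app_label}},
            "policyTypes": ["Ingress"],
            "ingress": [
                {"from": [{"podSelector": {"matchLabels": {"app": allowed_client_label}}}]}
            ],
        },
    }

policy = app_network_policy("payments", "api", "frontend")
```

Because the policy is just data, it can live in version control alongside the application it protects, which is what makes the developer-owned tier practical.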
A critical component, especially for achieving compliance on Kubernetes platforms, is the clear establishment and visibility of network and security risks in and among clusters. North/south and east/west traffic flows require defined rules and enforcement to ensure anomalous actions are restricted. The best approach is to codify application policies using automation so that they can be managed and applied at the level, scale, and rate at which application deployments occur.
Policies should be managed and applied in a tiered manner, where clusterwide policies are managed and monitored by cluster administrators, and application-level policies such as network and Pod Security are assigned to the developers who are most familiar with the application requirements. These tiers must follow a zero-trust, least-privilege order of precedence so that they do not conflict with each other.
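The order of precedence described above can be sketched in a few lines. This is a simplified, hypothetical flow model (the rule shapes and flow labels are assumptions for illustration, not any product's evaluation logic): cluster-wide denies are checked first and cannot be overridden, application-level allows come second, and anything unmatched falls through to a zero-trust default deny.

```python
# Sketch of tiered policy precedence under a zero-trust default.
# Flows are modeled as simple dicts; rules are predicates over a flow.

def is_allowed(flow, cluster_denies, app_allows):
    """Tier 1: cluster-admin denies win unconditionally.
    Tier 2: developer-owned allows are consulted next.
    Default: deny (least privilege)."""
    for rule in cluster_denies:      # clusterwide tier
        if rule(flow):
            return False
    for rule in app_allows:          # application tier
        if rule(flow):
            return True
    return False                     # zero-trust default

# Illustrative rules: block external sources clusterwide;
# allow only frontend-to-api traffic at the application level.
deny_external = lambda f: f["src"] == "external"
allow_frontend_to_api = lambda f: f["src"] == "frontend" and f["dst"] == "api"
```

Keeping the tiers ordered this way means a developer allow can never widen access that a cluster administrator has closed off, which is the property that lets the two teams work independently.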
There is no need to slow the adoption of Kubernetes or cloud because security is too complex. Take advantage of the thought leadership and services available to help you implement quickly while maintaining your security posture. Perhaps most important is to establish or expand your centralized security policy management solution, which allows enterprises to do the following:
- Reduce risk by ensuring security and compliance with real-time visibility, analytics, reporting, intervention, scale, and agility.
- Eliminate complexity by utilizing best practices and implementing a policy-as-code and zero-trust mandate, relieving pressure on critical staff.
- Maintain agility by embedding automation into your cloud security solutions such as integration into DevOps CI/CD processes for early detection and fast remediation.
- Reduce costs by allowing developers to focus on application development and security teams to define and enforce policy, without compromising agility or security or causing expensive rework.
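The CI/CD integration mentioned above can be as simple as a pre-deploy gate that scans rendered manifests for overly permissive rules. As an illustrative sketch (the check and manifest shapes are assumptions, not a specific product's behavior), the following flags any NetworkPolicy whose ingress rule omits a `from` clause and therefore matches all sources:

```python
# Sketch of a CI/CD guardrail: flag NetworkPolicies whose ingress
# rules match all peers, a common "allow-all" misconfiguration.

def find_allow_all_ingress(manifests):
    """Return names of NetworkPolicy manifests containing an ingress rule
    with no 'from' clause (which Kubernetes treats as matching all sources)."""
    findings = []
    for m in manifests:
        if m.get("kind") != "NetworkPolicy":
            continue
        for rule in m.get("spec", {}).get("ingress", []):
            if not rule.get("from"):  # absent or empty 'from' matches everyone
                findings.append(m["metadata"]["name"])
    return findings

scoped = {
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend"},
    "spec": {"ingress": [{"from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}]}]},
}
open_policy = {
    "kind": "NetworkPolicy",
    "metadata": {"name": "open-ingress"},
    "spec": {"ingress": [{}]},
}
```

Running such a check in the pipeline gives the early detection and fast remediation described above, before a permissive policy ever reaches a cluster.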
Kubernetes agility, governed by centralized security policies, will accelerate the business value enterprises are seeking today.
Read the first part of this series: Centralized Security Policy Management Across Hybrid Cloud Environments Should be an Obvious Strategy.
About the Author
Larry Alston, General Manager of Cloud, Tufin
Prior to joining Tufin in 2019, Larry Alston held senior and executive management roles at Teradata, Altisource, FuseSource, IONA, and Excelon. As Tufin champions the adoption of security policy management in the cloud, Alston is responsible for all aspects of Tufin's cloud-native business.