Prevention Is Better Than the Cure When Securing Cloud-Native Deployments
The "OODA loop" shows us how to secure cloud-native deployments and prevent breaches before they occur.
Renowned military strategist John Boyd conceived the "OODA loop" to help commanders make clear-headed decisions, drawing on his experience as a fighter pilot in the Korean War. We'll look at how one might apply the OODA loop (observe, orient, decide, and act) specifically to securing cloud-native deployments and preventing breaches before they occur.
The OODA loop begins with observing how a battle is unfolding, orienting around the available options, making a decision, and acting on that decision. The chaotic nature of battle requires the leader to constantly reconcile and repeat that process.
We can see the same logic in a cloud-native environment, where it mirrors how Kubernetes reconciliation works. A Kubernetes controller:
Observes and orients: Monitors the current state and compares it to expectations (that is, the state that the user has defined for this resource, perhaps through a YAML file).
Decides: Determines whether any resources need to be added or taken away.
Acts: Takes steps to constantly bring the current state in line with expectations.
For example, if you have a deployment, the controller checks how many pods there are, and whether that number matches the replica count for that deployment. Each pod is a collection of containers that acts as a "deployable unit" of application code, and the replica count defines how many pods should be running at this point in time. If the current number of pods doesn't match this count, the controller creates or destroys pods to bring the numbers into line.
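To make that loop concrete, here's a minimal sketch in Python of the observe/decide/act pattern, assuming the official kubernetes client library, a Deployment called "demo" in the "default" namespace, and pods labeled app=demo (all names are made up for illustration). It only reports the gap between desired and running pods; a real controller creates or deletes pods and watches for changes rather than polling.

```python
# A minimal sketch of the observe/decide/act pattern behind Kubernetes
# reconciliation. Assumes the official `kubernetes` Python client and a
# hypothetical Deployment named "demo" in the "default" namespace whose
# pods carry the label app=demo.
import time

from kubernetes import client, config

config.load_kube_config()      # use local kubeconfig credentials
apps = client.AppsV1Api()
core = client.CoreV1Api()

while True:
    # Observe and orient: read the desired state and the current state.
    deployment = apps.read_namespaced_deployment("demo", "default")
    desired = deployment.spec.replicas or 1   # replicas defaults to 1 if unset
    pods = core.list_namespaced_pod("default", label_selector="app=demo")
    running = [p for p in pods.items if p.status.phase == "Running"]

    # Decide: does the observed state match the expectation?
    gap = desired - len(running)
    if gap != 0:
        # Act: a real controller would create or delete pods here;
        # this sketch only reports the discrepancy.
        print(f"want {desired} pods, have {len(running)} (gap of {gap})")

    time.sleep(5)   # real controllers watch for changes instead of polling
```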
You can apply the same OODA loop model to security behaviors. You can detect a behavior, compare it with what you expect to see, decide whether it's something you want to allow, and take remedial action if you see something unexpected. The question is, how can you detect whether an unexpected behavior has occurred?
Containers are really helpful for simplifying the problem of detecting anomalies, especially if you architect your applications using a microservices model. Each container typically performs only a small function, which means the range of normal, expected behaviors is small. For example, it's often true that you only expect to see one specific executable running inside a given container. If you can observe the executables running in each container, you can see whether they match your expectations.
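As a sketch of what that comparison can look like, the snippet below checks an observed executable against a per-container allowlist. The container names and expected binaries are hypothetical, and how you observe the running processes (an eBPF tracer, an agent, audit logs) is a separate question.

```python
# A sketch of comparing observed executables against expectations.
# The allowlist below is hypothetical; in practice it would be derived
# from what each microservice image is actually supposed to run.
EXPECTED_EXECUTABLES = {
    "frontend": {"/usr/local/bin/node"},
    "payments": {"/app/payments-server"},
}

def is_expected(container: str, executable: str) -> bool:
    """Return True if this executable is on the container's allowlist."""
    return executable in EXPECTED_EXECUTABLES.get(container, set())

# Example: a shell starting in the payments container is unexpected.
print(is_expected("payments", "/bin/bash"))   # False -> investigate or act
```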
During my presentation at the Cloud Native Computing Foundation's Kubernetes Forum Sydney 2019, I walked attendees through a live demonstration that illustrates this. A video of that demo, along with all the other presentations from the event, is available online.
As part of this demo, I showed a script using a tool called Tracee to alert me about new executables that start in containers. My script is a naive security tool that applies the OODA loop model: it monitors new executables, looks at their names, decides whether one is bad, and, if so, kills the pod, basically pulling an emergency rip cord. However, if enough time elapses between when the bad executable is discovered and when you take remedial action, the attacker may succeed in exfiltrating data or dropping some sort of malicious payload that acts later. Not so secure after all!
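For illustration, here's a rough sketch of that reactive pattern (not the exact demo script). It assumes a tracer such as Tracee has been configured to emit one JSON event per line on stdin, with hypothetical pod, namespace, and executable fields, and it pulls the rip cord by deleting the offending pod.

```python
#!/usr/bin/env python3
# A rough sketch of the reactive "rip cord" pattern: read exec events from
# stdin, decide whether the executable is allowed, and delete the offending
# pod if not. The event field names and the denylist are assumptions for
# illustration, not Tracee's actual output format.
import json
import subprocess
import sys

DENYLIST = {"/bin/bash", "/usr/bin/curl", "/usr/bin/nc"}  # hypothetical

for line in sys.stdin:
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip anything that isn't a JSON event

    executable = event.get("executable", "")
    pod = event.get("pod")
    namespace = event.get("namespace", "default")

    # Decide: is this something we expected to see running?
    if pod and executable in DENYLIST:
        # Act: pull the rip cord by deleting the pod.
        print(f"unexpected executable {executable} in {namespace}/{pod}; deleting pod")
        subprocess.run(["kubectl", "delete", "pod", pod, "-n", namespace], check=False)
```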
Here's another problem with relying on security tools that react to bad behavior after it happens: The Kubernetes reconciliation loop kicks in and recreates all those pods that my script destroyed. They do the bad thing again, so they're destroyed again, and on and on it goes. My naive security tool is at odds with the Kubernetes reconciliation loop.
What would be better is the ability to prevent those bad pods from being deployed in the first place. If you can determine that the intention is to run something bad, you don't have to try to stop it after it runs. So, better that the OODA loop looks at the intention, compares it with the expectation, and then decides whether to allow or prevent that behavior.
The key is to look earlier in the deployment pipeline for places where you can insert preventative measures. If you can prevent bad software from being deployed at all, it can't do any harm. Anything we can do before runtime is preventative, and prevention is more effective than reacting after the damage is done.
Image scanning lets us look inside images for known vulnerabilities. Depending on your scanner, you may also be able to detect malware and prevent those images from being deployed, perhaps by blocking them from being pushed into your registry. You can use role-based access control (RBAC) to stop unauthorized users from deploying software, another way to prevent malicious code from spreading. And you can use an admission controller such as Open Policy Agent (OPA) to check the YAML as it's being deployed and prevent it from running if it doesn't meet your criteria. Some security tools can even provide preventative measures within a running container by blocking unauthorized programs from executing (as opposed to killing them after they have already started). [Editor's note: The author's company is one of a number that offer such a tool.]
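OPA policies are normally written in Rego and evaluated in an admission webhook, but the decision they encode is easy to sketch. Here's a hypothetical check, in Python purely for illustration, that rejects pod specs whose images don't come from a trusted registry or that ask to run privileged:

```python
# A sketch of the decision an admission controller makes before a pod runs.
# Real OPA policies are written in Rego and evaluated in an admission
# webhook; this only illustrates the kind of criteria involved. The trusted
# registry name is hypothetical.
TRUSTED_REGISTRY = "registry.example.com/"

def admit(pod_spec: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a dict shaped like a pod's spec."""
    for container in pod_spec.get("containers", []):
        image = container.get("image", "")
        if not image.startswith(TRUSTED_REGISTRY):
            return False, f"image {image!r} is not from the trusted registry"
        security = container.get("securityContext") or {}
        if security.get("privileged"):
            return False, f"container {container.get('name')} requests privileged mode"
    return True, "ok"

# Example: this spec would be rejected before it ever runs.
allowed, reason = admit({"containers": [{"name": "web", "image": "docker.io/evil:latest"}]})
print(allowed, reason)
```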
If you're thinking about securing your home or your office, which would you do first: Put a lock on the door, or invest in video surveillance cameras and systems? Of course, you install the door lock first — it's the easiest and most effective thing to do. That's access control, and the perfect example of an effective preventative measure.
It's a good thing to have multiple layers of defense, so you might want to add video surveillance on all the doors. But you should always prioritize access control over observation tools. The same applies to securing your Kubernetes environments.