Recovering from Bad Decisions in the Cloud
The cloud makes it much easier to change security controls than traditional networks do.
During my IT service management career in the military, we used to say, "If you don't build an enterprise with security engineered into the solution, it costs four times as much to retrofit it later." Honestly, I have no idea where that metric came from; we all used to say it, so it must have been true, right? Although I can't definitively state the actual cost, I know it is painful and expensive to engineer security into an enterprise infrastructure after it has been launched.
Some of the most significant challenges include:
Applying proper internal segmentation
Implementing a perimeter security barrier with proper security monitoring and logging
Standardizing security policies and configurations with industry best practices
Orchestrating host-level security controls
In the public cloud, our security team runs into this scenario every day: we add a new customer who didn't know how to leverage one or more of the security controls mentioned above. Most of these customers come to us because they have been breached or have failed an audit. Applying these controls to prevent a compromise would be a difficult task if it were not for one advantage: all of our customers are in the cloud, where all of those controls are orchestrated by software.
Cloud Advantages
In a traditional network, the security control least likely to be implemented properly is sound internal segmentation among security zones (e.g., web, application, and database, or dev, test, and production), with the best environments using microsegmentation between servers. Fixing this problem in a traditional network is difficult because it could mean making configuration changes to hundreds of network devices and switches, and potentially verifying patch cords in the data center.
In the cloud, however, segmentation is orchestrated by software and enforced by a hypervisor firewall. Many cloud offerings expose these settings through an API, which makes them easy to change and lets you build visualizations to verify they are correct. Even if you didn't set this up properly when you built your environment, these settings are easy to adjust after the site launches.
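To make that concrete, here is a minimal sketch of verifying segmentation through a cloud API, assuming an AWS environment with the boto3 SDK; the tier tag scheme and the application-tier security group ID are hypothetical stand-ins for whatever your environment actually uses.

```python
# Minimal sketch: verify software-defined segmentation rules via the cloud API.
# Assumes an AWS environment with boto3 installed and credentials configured;
# the tier tags and the expected-source rule below are hypothetical examples.
import boto3

ec2 = boto3.client("ec2")

# Pull every security group tagged as the database tier (hypothetical tag scheme).
db_groups = ec2.describe_security_groups(
    Filters=[{"Name": "tag:tier", "Values": ["database"]}]
)["SecurityGroups"]

APP_TIER_SG = "sg-0123456789abcdef0"  # hypothetical ID of the application-tier group

for group in db_groups:
    for rule in group["IpPermissions"]:
        sources = [pair["GroupId"] for pair in rule.get("UserIdGroupPairs", [])]
        open_cidrs = [r["CidrIp"] for r in rule.get("IpRanges", [])]
        # Flag anything that is not "app tier only": open CIDR ranges or
        # ingress from unexpected security groups.
        if open_cidrs or any(sg != APP_TIER_SG for sg in sources):
            print(f"{group['GroupId']}: unexpected ingress rule -> {rule}")
```

The same query-and-compare pattern is what feeds the visualizations mentioned above: pull the live settings through the API, diff them against the intended design, and surface the exceptions.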
The second challenge is building an architecture with aggregation points that have the right sensors and logging to monitor the network for malicious activity. The biggest hurdle is inspecting both north-south traffic from the Internet and east-west traffic between servers. In a traditional enterprise, correcting this design flaw may require engineering hundreds of bare-metal devices into the network and making hundreds of configuration adjustments across the network plane and servers.
One of the most helpful features in some clouds is port mirroring. It allows you to place a virtual network intrusion detection system (vNIDS) in each of your hypervisor environments without having to engineer it inline. Every virtual network interface card can send a copy of the network traffic (north-south and east-west) hitting each virtual machine to the vNIDS for inspection. Again, these changes to your cloud are software controlled and relatively easy to orchestrate after the fact. However, although we anticipate cloud providers will offer port mirroring as a feature, it is not currently available in most offerings. Until it is, a good host intrusion detection system that monitors Layer 3 and Layer 7 traffic at the server level will also give you visibility into both north-south and east-west traffic.
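As a small illustration of that host-level alternative, the sketch below uses the scapy packet library to flag unexpected new connections seen by a server; it is not a full HIDS, and the internal address prefix and expected-peer list are hypothetical placeholders for real detection policy.

```python
# Minimal sketch of host-level traffic visibility (not a full HIDS): log any
# new TCP connection attempt whose source is not on an expected peer list.
# Assumes the scapy package and sufficient privileges to sniff on the host;
# the subnet prefix and allowed-peer addresses are hypothetical.
from scapy.all import sniff, IP, TCP

EXPECTED_PEERS = {"10.0.1.10", "10.0.1.11"}   # hypothetical app-tier servers
EAST_WEST_PREFIX = "10.0."                    # hypothetical internal address space

def inspect(pkt):
    if IP in pkt and TCP in pkt:
        flags = int(pkt[TCP].flags)
        # SYN set without ACK marks a brand-new connection attempt.
        if flags & 0x02 and not flags & 0x10:
            src, dst, dport = pkt[IP].src, pkt[IP].dst, pkt[TCP].dport
            # Crude heuristic: an internal source prefix means east-west traffic.
            direction = "east-west" if src.startswith(EAST_WEST_PREFIX) else "north-south"
            if src not in EXPECTED_PEERS:
                print(f"[{direction}] unexpected connection {src} -> {dst}:{dport}")

# store=False keeps memory flat while sniffing continuously.
sniff(filter="tcp", prn=inspect, store=False)
```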
The last two obstacles (standardized policies and host-level controls) can be grouped together because both are solved by DevOps automation. In the past, correcting either of these enterprise-wide mistakes required the Herculean manual effort of a system administrator touching every machine to fix a policy violation, add host-level security controls, or correct a flaw in code.
When I was managing the US Army's Global Cyber Security operations for Army Cyber Command, we were applying host-based security controls and security policies on more than 25,000 servers, one at a time. In the cloud, leveraging DevOps automation tools such as Chef or Puppet, organizations can make adjustments to their "recipes" or scripts, basically tearing down and regenerating or respinning the whole environment in minutes.
You can correct bad policies and add log collection agents and host security tools effortlessly. In fact, most large enterprises no longer bother to patch servers in place. As part of their agile development process, cloud developers spin up a test environment with the patches applied. If no issues are found during testing, the patches are applied to the recipes or scripts for production, and the environment is respun in minutes. This has the added benefit of causing threat actors to lose access to a compromised server, because the new environment is regenerated without their malicious code.
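Chef and Puppet use their own recipe languages, but the respin idea itself is easy to illustrate with a cloud SDK. The sketch below, assuming an AWS Auto Scaling group built from a launch template and the boto3 SDK, rolls every server onto a freshly patched image; the template ID, group name, and AMI ID are hypothetical.

```python
# Minimal sketch of "respinning" an environment instead of patching in place.
# Assumes an AWS Auto Scaling group built from a launch template, with boto3
# installed and credentials configured; the template ID, group name, and
# patched AMI ID below are hypothetical.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

TEMPLATE_ID = "lt-0123456789abcdef0"   # hypothetical launch template
GROUP_NAME = "web-tier-asg"            # hypothetical Auto Scaling group
PATCHED_AMI = "ami-0fedcba9876543210"  # hypothetical image already validated in test

# Base a new launch template version on the current one, swapping in the
# patched, hardened image, and make it the default.
latest = ec2.describe_launch_templates(
    LaunchTemplateIds=[TEMPLATE_ID]
)["LaunchTemplates"][0]["LatestVersionNumber"]
new_version = ec2.create_launch_template_version(
    LaunchTemplateId=TEMPLATE_ID,
    SourceVersion=str(latest),
    LaunchTemplateData={"ImageId": PATCHED_AMI},
)["LaunchTemplateVersion"]["VersionNumber"]
ec2.modify_launch_template(LaunchTemplateId=TEMPLATE_ID,
                           DefaultVersion=str(new_version))

# Roll the whole group: old servers (and any malicious code resident on them)
# are terminated and replaced by fresh instances built from the new image.
autoscaling.start_instance_refresh(
    AutoScalingGroupName=GROUP_NAME,
    Strategy="Rolling",
    Preferences={"MinHealthyPercentage": 90},
)
```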
More CISOs are seeing the advantage of leveraging DevOps to orchestrate security policy and server hardening. In a recent Gartner report, "DevSecOps: How to Seamlessly Integrate Security into DevOps," the analyst firm predicts that "by 2019, more than 70 percent of enterprise DevOps initiatives will have incorporated automated security vulnerability and configuration scanning for open source components and commercial packages, up from less than 10 percent in 2016."
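As one example of what that automated scanning can look like in practice, here is a minimal CI-gate sketch that assumes a Python project tracking its open source components in requirements.txt and the pip-audit tool installed; a real pipeline would use whichever scanner fits its stack.

```python
# Minimal sketch of automated dependency scanning as a CI gate.
# Assumes the pip-audit tool is installed and the project lists its open
# source components in requirements.txt; the file name is the common
# convention, not anything specific to this article.
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "-r", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# pip-audit exits non-zero when known vulnerabilities are found, so failing
# the build here blocks the deploy until the dependency is upgraded.
if result.returncode != 0:
    print("Vulnerable components found; failing the build.", file=sys.stderr)
    sys.exit(1)
```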
So if you're in the cloud and didn't engineer security into your plan, don't despair. There is a silver lining: orchestrating change in the cloud is much simpler because it is software-defined. That gives you the opportunity to go back and do it right the second time. Who says you never get a second chance to make a first impression?