If your team is manually building cloud instances and networks for every application, you're setting yourself up for a data breach.

Jason McKay, CTO, Logicworks

June 21, 2018

5 Min Read

Forty-eight percent of companies have experienced application downtime, and 20% of companies have experienced a security breach resulting from errors during a manual security-related process. In other words, someone fat-fingered a network or server configuration, or forgot to install a web application firewall on one of thousands of servers. As a result, systems were hacked.

In the cloud, your risk of manual misconfiguration is greater than ever. Your IT team is asked to launch new cloud instances more frequently, on tighter deadlines, and for a wider variety of applications. If they're building a new cloud network for every new application manually, each network will be a snowflake: unique, hard to reproduce, and easy to get wrong.

The good news is that you no longer need to manually configure cloud environments. If networks and load balancers can be created from the command line, then you can write a program that creates your "ideal" infrastructure stack, complete with networks, security groups (firewalls), server sizes, bandwidth, etc., all with code. This code – usually written in JSON with a templating tool such as AWS CloudFormation or Azure Resource Manager, or in its own configuration language with a third-party tool like Terraform – is what people mean when they talk about "infrastructure-as-code."

It doesn't matter whether you have legacy applications, custom configurations, or significant technical debt. You, too, can (and must) use cloud infrastructure templates.

Let's Write Some JSON
An infrastructure template is a very simple concept. You tell the cloud platform what you want the environment to look like (in JSON), and the platform takes care of provisioning those services for you. Hand-coding JSON is not a pleasant experience, but it's not complicated. And you can get a head start with prebuilt templates.
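To make that concrete, here is a minimal sketch of a CloudFormation template; the resource names and CIDR range are illustrative assumptions, not recommendations. It declares a VPC and a security group that allows only inbound HTTPS:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Illustrative stack: one VPC and one security group allowing inbound HTTPS only.",
      "Resources": {
        "AppVpc": {
          "Type": "AWS::EC2::VPC",
          "Properties": {
            "CidrBlock": "10.0.0.0/16",
            "EnableDnsSupport": true,
            "EnableDnsHostnames": true
          }
        },
        "WebSecurityGroup": {
          "Type": "AWS::EC2::SecurityGroup",
          "Properties": {
            "GroupDescription": "Allow inbound HTTPS only",
            "VpcId": { "Ref": "AppVpc" },
            "SecurityGroupIngress": [
              {
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "CidrIp": "0.0.0.0/0"
              }
            ]
          }
        }
      }
    }

Hand that file to CloudFormation and it creates (or tears down) both resources as a single stack. JSON has no comments, so the Description fields do the documenting.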

In an ideal world, your systems engineers create these templates and then version-control them, either in a GitHub repository or using a tool like AWS Service Catalog. You're not just using these templates to build out an environment once; rather than manually changing your environment, your engineers will change the template and relaunch the entire stack. That means your template is always a true reflection of the configuration of your live systems. Security professionals love this, and your auditors will, too.

Another of the many benefits of this process is that it's easier for security teams to get involved early in the system architecture process, rather than during a lengthy review period after build-out. Instead of reviewing test, development, and production environments in a live system, they can review the templates to ensure that all systems built from a given template have MFA enforced, encrypted volumes attached, and log shipping and hostnames properly configured. For true DevSecOps teams – where security is integrated into application development rather than bolted on as a last-minute review – this is critical.
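For example, encryption and audit logging can be read straight out of a template's Resources section. The fragment below is a sketch with hypothetical names (and the CloudTrail bucket would need to exist with an appropriate policy):

    "EncryptedDataVolume": {
      "Type": "AWS::EC2::Volume",
      "Properties": {
        "AvailabilityZone": "us-east-1a",
        "Size": 100,
        "Encrypted": true
      }
    },
    "AuditTrail": {
      "Type": "AWS::CloudTrail::Trail",
      "Properties": {
        "S3BucketName": "example-audit-log-bucket",
        "IsLogging": true
      }
    }

A reviewer who sees "Encrypted": true and an active trail in the template knows that every environment launched from it inherits those settings; there is no per-server checklist to forget.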

Lessons Learned from Real Teams
Unfortunately, it’s almost always faster to make a quick manual change in the console than to change your template. Wasn't the point of infrastructure-as-code to move faster? Will your developers get mad when it takes an hour to make a change versus two minutes?

In the long term, you're sacrificing a little bit of speed and flexibility for consistency. But there are some speed benefits, too. If you ever want to spin up a test or dev environment that's an exact replica of production, you can do it instantly. Want to test a new product? Your engineers can create an ideal, secure "test" environment, and you can spin it up and tear it down multiple times a day. For your IT team to get on board with templates, they need to understand that these benefits outweigh the administrative overhead.

Another common issue is how to organize multiple templates. The answer is to break the environment into multiple "sub-stacks" rather than maintaining a single, global template. Create separate templates by environment type (one for QA, one for Stage, etc.) or by service (one for networks, one for access controls, one for compute instances, etc.), as in the sketch below. Modular templates make it easier to make a small change without touching the entire stack.
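In CloudFormation, one common way to do this is with nested stacks: a thin parent template whose only job is to pull in per-service sub-templates. A sketch, with hypothetical S3 URLs:

    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Description": "Parent stack that composes per-service sub-stacks.",
      "Resources": {
        "NetworkStack": {
          "Type": "AWS::CloudFormation::Stack",
          "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/example-templates/network.json"
          }
        },
        "AccessControlStack": {
          "Type": "AWS::CloudFormation::Stack",
          "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/example-templates/access-control.json"
          }
        },
        "ComputeStack": {
          "Type": "AWS::CloudFormation::Stack",
          "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/example-templates/compute.json"
          }
        }
      }
    }

A change to the compute sub-template can then be rolled out without touching the network or access-control definitions.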

The final – and perhaps most painful – limitation of AWS CloudFormation and Azure Resource Manager is that JSON is not a real programming language; you can’t have dynamic variables. That's why these tools are rapidly being replaced by ones that generate the JSON dynamically using a proper programming language. Examples include Troposphere, cfndsl, and many others. A quick search on GitHub will bring up many projects that generate CloudFormation templates in a variety of languages, including Ruby, Python, JavaScript, Java, and even Scala. In a few years, we will look back and remember the time we had to hand-code a ton of JSON.

Of course, you can also skip AWS CloudFormation or Azure Resource Manager entirely and use an open source tool such as Terraform, which provisions cloud resources directly through each provider's APIs.
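Terraform's native syntax is HCL rather than JSON, though it also accepts JSON-formatted configuration files. As a rough sketch of the same idea (names are illustrative, and a real configuration would pin provider versions and declare far more):

    {
      "provider": {
        "aws": {
          "region": "us-east-1"
        }
      },
      "resource": {
        "aws_vpc": {
          "app_vpc": {
            "cidr_block": "10.0.0.0/16",
            "tags": {
              "Name": "app-vpc"
            }
          }
        }
      }
    }

Running terraform plan shows what Terraform would change, and terraform apply makes the change by calling the cloud provider's APIs directly, with no CloudFormation or Resource Manager in between.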

Next Step: Infrastructure-as-an-Application
As we add multiple layers of abstraction on top of networking, compute, and storage, we're beginning to treat our infrastructure more and more like an application. Just as the website you're reading now has gone from handwritten HTML to a dynamically generated, flexible web application, infrastructure is following the same evolutionary path.

Infrastructure-as-code is an exciting field where new tools are constantly being developed. Take the time to experiment. Your engineers – and your security team – will thank you.


About the Author

Jason McKay

CTO, Logicworks

Jason is responsible for leading Logicworks' technical strategy, including its software and DevOps product roadmap. In this capacity, he works directly with Logicworks' senior engineers and developers, technology vendors and partners, and R&D team to ensure that Logicworks service offerings meet and exceed the performance, compliance, automation, and security requirements of its clients. Prior to joining Logicworks in 2005, Jason worked in the Unix support trenches at Panix (Public Access Networks). He graduated from Bard College with a Bachelor of Arts and holds all five AWS associate- and professional-level certifications.
