You can’t secure what you can’t see. These five best practices will shed some light on how to protect your data from the ground up.

Amrit Williams, CTO, CloudPassage

September 18, 2015


Moving to the cloud can help organizations accelerate IT delivery and drive business agility. But it can also open up gaping security holes, leaving a company exposed to cyberattack. This means any organization operating in the cloud must now answer two questions: "Which cloud servers are being attacked, and how will I know?"

Unfortunately, the answers aren’t easy to get. Traditional security tools, like firewalls and intrusion detection systems, work great within an organization’s four walls but they don’t help much when it comes to the cloud. The elastic, dynamic nature of virtual infrastructures makes it extraordinarily difficult for security teams to see what’s happening in the cloud. And without that visibility, it’s impossible for them to enforce consistent policies, detect vulnerabilities, and react quickly to abnormal behavior.

Want help from your cloud provider? That only takes you part of the way. Cloud providers typically don’t protect anything above the hypervisor layer, so security is mainly your responsibility. Say you want to spin up a Windows 2000 server in the cloud, or Red Hat Linux. Security for those instances is your job, not the cloud vendor’s.

It’s called the “shared responsibility” model—and it’s advertised loud and clear by all cloud providers. Amazon Web Services puts it this way: “While AWS manages security of the cloud, security in the cloud is the responsibility of the customer. Customers retain control of what security they choose to implement to protect their own content, platform, applications, systems and networks, no differently than they would for applications in an on-site datacenter.”

Seems straightforward enough. But many customers are still confused. They think that because Amazon has all these great tools protecting them up to the hypervisor, they're completely secure. What they don't realize is that the security of the cloud instances they choose to spin up will always be their responsibility. Whether you're operating in the public cloud or in a traditional datacenter, there are still critical control objectives you need to maintain, including data protection and threat management.

The consequences of weak cloud security can be dramatic. I recall the story of a business called Code Spaces that was forced to shut down after a hacker gained full access to its network, which was hosted in the cloud. The hacker demanded a ransom, which Code Spaces refused to pay. The hacker then deleted all of Code Spaces' critical data, effectively destroying the company.

This is the quandary of protecting yourself in the cloud: you can’t secure what you can’t see. Thus gaining real-time visibility is paramount, especially for organizations looking to leverage the many different advantages of cloud infrastructure. And the situation becomes more complex as the organization uses more clouds—public, private, or hybrid—and combines them with its internal datacenters, which aren’t going away anytime soon.

So how do you get visibility in the cloud and ensure that you’re secure? You can start by understanding that security is your responsibility, then adhering to these five best practices.

1.  Continuous visibility. Know what’s going on with your infrastructure, applications, data, and users at all times. Given the automated, elastic, on-demand nature of modern virtual infrastructure, achieving this visibility can be a challenge. But by knowing what you’ve got and what it’s doing at all times, you can limit your attack surface and mitigate risk.

2.  Exposure management. This means adding context to your visibility. Once you gain visibility and transparency, you can successfully eliminate the obvious vulnerabilities that are known to exist within your networks, such as out-of-date workstations and mobile devices.

3.  Strong access control. Weak access control has been responsible for a number of recent high-profile breaches, including the notorious Ashley Madison hack. The Ashley Madison CEO himself has said that the perpetrator of the hack was an insider, probably a third-party contractor, who was granted far more access than necessary. So make sure you have the appropriate access management and privilege monitoring in place. And make sure you are continuously monitoring user activity to ensure there are no deviations from your corporate policies.

4.  Data protection. This is another essential. It means protecting data at rest and data in motion, and also implementing technologies like data loss prevention (DLP) to ensure that, if compromised, your data can’t be sent outside your network.

5.  Compromise management. You must accept the fact that even the most stringent security practices can’t prevent all breaches all the time. They will happen. So prepare to mitigate them when they do. Put processes and technologies in place that enable you to react quickly and subdue security breaches before they get out of control. Create an action plan before breaches happen, and then follow it as soon as a breach is detected.
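To make the first practice concrete, continuous visibility starts with knowing which servers you actually have. The sketch below is a deliberately simplified, hypothetical illustration (the server names and the idea of a static "approved baseline" are assumptions, not anything a real cloud inventory tool works from verbatim): it compares a live inventory against an approved baseline and flags anything unknown or missing.

```python
# Hypothetical sketch: flag cloud servers that appear in the live
# inventory but not in the approved baseline (unexpected attack surface),
# and baseline entries that have vanished (possible teardown or drift).

def inventory_drift(baseline, live):
    """Compare an approved server baseline against a live inventory.

    baseline, live: iterables of server identifiers (e.g. instance IDs).
    Returns (unknown, missing): servers to investigate or reconcile.
    """
    baseline, live = set(baseline), set(live)
    unknown = live - baseline      # running, but never approved
    missing = baseline - live      # approved, but no longer visible
    return unknown, missing

unknown, missing = inventory_drift(
    baseline={"web-01", "web-02", "db-01"},
    live={"web-01", "web-02", "db-01", "rogue-99"},
)
print(sorted(unknown))  # → ['rogue-99']
print(sorted(missing))  # → []
```

In a real deployment, `live` would be fed continuously from your cloud provider's inventory API rather than a hardcoded set; the point is that without this comparison running at all times, an unapproved server can sit in your environment unnoticed.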
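The third practice, strong access control, is fundamentally about least privilege: nobody should hold more access than their role requires. The Ashley Madison example above is exactly this failure. Here is a minimal, hypothetical sketch (the role names and permission strings are invented for illustration) of a privilege-creep check:

```python
# Hypothetical sketch: detect privilege creep by comparing each user's
# granted permissions against the set their role actually requires.

ROLE_PERMISSIONS = {                      # assumed role definitions
    "contractor": {"read:repo"},
    "developer": {"read:repo", "write:repo"},
}

def excess_privileges(role, granted):
    """Return any permissions granted beyond what the role requires."""
    return set(granted) - ROLE_PERMISSIONS.get(role, set())

# A contractor holding write and billing access is over-privileged:
print(sorted(excess_privileges(
    "contractor", {"read:repo", "write:repo", "admin:billing"})))
# → ['admin:billing', 'write:repo']
```

Run continuously against your identity system's data, a check like this surfaces exactly the "way more access than necessary" situation before an insider can exploit it.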
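The data-loss-prevention component of the fourth practice can be illustrated with a toy example. Real DLP products use far richer detection (checksum validation, document fingerprinting, context analysis); the pattern below is only a hypothetical sketch of the core idea of inspecting outbound content before it leaves the network:

```python
import re

# Hypothetical DLP-style check: scan outbound text for patterns that
# look like sensitive data (here, 16-digit payment card numbers,
# optionally separated by spaces or dashes) before allowing it out.

CARD_PATTERN = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def contains_sensitive_data(text):
    """Return True if the text appears to contain a card number."""
    return bool(CARD_PATTERN.search(text))

print(contains_sensitive_data("invoice total: $42.00"))      # → False
print(contains_sensitive_data("card: 4111-1111-1111-1111"))  # → True
```

Pairing a gateway check like this with encryption of data at rest and in motion means that even if attackers reach your data, exfiltrating it in usable form becomes much harder.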

If you can’t quickly and accurately see what’s going on across your entire infrastructure at all times, you run the risk of not knowing when you’re being attacked or compromised and reacting too late. It’s no use showing up with a hose after your network has been burned to the ground. You need continuous visibility, backed up with comprehensive security functions. These are critical steps toward improving your security posture, especially when you’re dealing with the dynamic, elastic nature of modern cloud computing environments.

About the Author(s)

Amrit Williams

CTO, CloudPassage

Amrit Williams has over 20 years of experience in information security and is currently the chief technology officer of CloudPassage. Amrit held a variety of engineering, management, and consulting positions prior to joining CloudPassage. Previously, Williams was the director of emerging security technologies and CTO for mobile computing at IBM, which acquired BigFix, an enterprise systems and security management company where Williams was CTO. Prior to BigFix, Williams was a research director in the Information Security and Risk Research Practice at Gartner, Inc., where he covered vulnerability and threat management, network security, security information and event management, risk management, and secure application development. Before IBM, Williams was a director of engineering for nCircle Network Security, and held leadership positions at Consilient Inc., Network Associates, and McAfee Associates, where he worked to develop market-leading security and systems management solutions.

