AWS Simple Storage Service has proven to be a security minefield. It doesn't have to be if you pay attention to people, process, and technology.

Eric Thomas, Director of Cloud, ExtraHop

July 11, 2018


Accenture. The United States Department of Defense. Walmart. Experian. FedEx. Verizon. Dow Jones. What do these organizations have in common? All of them have suffered a data breach as a result of a misconfigured, publicly exposed S3 bucket. Even cloud-native companies like Uber have suffered major data breaches from this common misconfiguration. This failure of process and technology has cost companies tens of millions of dollars and resulted in untold reputational harm, and they have only themselves to blame.

The default configuration for S3 — shorthand for Amazon Web Services' Simple Storage Service — is closed to the public Internet. In that configuration, it's reasonably secure. But there is a problem with relying on this configuration: it assumes that only people within an organization are using it. That is a bad assumption because it's actually very easy to misconfigure S3 in such a way that it's left world-readable (or even writable!).
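How easy is "easy to misconfigure"? A single overly broad statement in a bucket policy is enough to make every object world-readable. Here's a minimal sketch in Python of what such a misconfiguration looks like and how it could be detected programmatically (the policy and bucket name are illustrative, not from any real incident; in practice the policy JSON would come from an API call such as boto3's `get_bucket_policy`):

```python
import json

def grants_public_read(policy_json: str) -> bool:
    """Return True if any Allow statement grants s3:GetObject to everyone."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if is_public and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

# An illustrative world-readable policy of the kind behind many S3 breaches:
# Principal "*" means literally anyone on the Internet can read the objects.
leaky_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(grants_public_read(leaky_policy))  # True: this bucket is world-readable
```

Note how innocuous that policy looks: one well-intentioned "make this content available" statement, and the entire bucket is public.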

For example, one "benefit" of the public cloud is that people can easily provision and configure resources for themselves. Random IT Guy in a large corporation needs some storage to deliver content? Spin up an S3 bucket and everything is good to go! The problem is that when Random IT Guy provisions storage, he doesn't necessarily know how to secure it. Worse, there's nothing to prevent him from doing it insecurely, or to alert anyone else to the fact that it's insecure, or even that it exists in the first place.

Shared Responsibility or Blanket Immunity?
Public cloud relies on the "shared responsibility" model, which delineates what vendors and users are responsible for regarding security. The notion of "shared responsibility" is extremely cute. According to this model, cloud vendors — including AWS — are responsible for the security of the cloud itself. Users are responsible for the security of what's in it. This means that any public cloud provider, when faced with a breach of a customer's data, is going to claim the customer was ultimately responsible for the security of the applications and data. Claiming responsibility for only the security of the cloud itself is close to declaring blanket immunity.

Still, there are real, practical steps you can take that will make an actual difference in your security posture as it relates to S3 and the public cloud. They fall along three fronts: people, process, and technology.

People: The Magical Unicorn
Everyone knows we have a talent problem. There's a shortage of cloud talent, a shortage of security talent, and cloud security talent is basically a magical unicorn. If anything has become clear, it's that companies — even cloud-native ones — can't hire the talent they need to make the cloud secure and perform better. The bottom line: stop trying to hire magical unicorns and start creating them. Just like the exceedingly rare "full stack developer" who knows both back end and front end, cloud security professionals are hard to find in the wild but possible to train. Create centers of excellence for cloud security and spend the hours and training dollars to make them leaders. Having expertise in-house will pay dividends in the long run.

Process: Effectively Deploying the Magical Unicorn
When the world's largest shipping company, Maersk, was hit by a ransomware attack in June 2017, it pulled off an unprecedented feat of IT efficiency: Over the course of 10 days, Maersk's IT team reinstalled over 4,000 servers, 45,000 PCs, and 2,500 applications. This Herculean recovery effort demonstrated that Maersk is clearly an incredibly effective technology organization ... but not effective enough to not get breached.

The truth is that magical unicorns aren't going to secure your enterprise if you don't have the processes in place to understand your attack surface at scale. The Maersk breach occurred in its on-premises environment. In the cloud, understanding your attack surface is even harder because it's so easy and cheap to spin up instances. So, when it comes to the cloud, including cloud storage like S3, organizations need to implement processes that control who can spin up instances, create the documentation required to do so, and then put in place audit procedures to make sure those rules are followed. Those audit procedures come back to people: This stuff needs to be someone's job.
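Part of that job can and should be automated. As one hedged illustration of what an audit pass might look like, here is a Python sketch that flags buckets whose ACLs grant access to AWS's public group grantees. The bucket names and grant data are hypothetical; in a real audit the grants would come from a call like boto3's `get_bucket_acl` for each bucket:

```python
# The group grantee URIs that AWS uses in bucket ACLs to mean
# "everyone on the Internet" and "any authenticated AWS account".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_accessible(grants):
    """Return True if any ACL grant goes to a public group grantee."""
    return any(
        g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES for g in grants
    )

def audit(buckets):
    """Given a {bucket_name: grants} inventory, list the offenders."""
    return [name for name, grants in buckets.items()
            if publicly_accessible(grants)]

# Example inventory shaped like boto3 get_bucket_acl() responses.
inventory = {
    "internal-reports": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
         "Permission": "FULL_CONTROL"},
    ],
    "marketing-assets": [
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ],
}

print(audit(inventory))  # ['marketing-assets']
```

A scheduled job running a check like this against every bucket in every account turns "someone should look at this occasionally" into an enforceable process.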

Technology: Putting the Horn on the Horse  
Sometimes the only difference between a horse and a magical unicorn is the technology backing them up. In cloud security, the right technology can put the horn on the horse, so to speak, accelerating the transformation from IT pro to cloud security expert. Cloud vendor security tools often fall short of achieving this outcome. Providers like AWS and Azure give you security monitoring tools, but those mostly produce a sea of data points for human experts to make sense of.

While cloud providers offer the bare minimum, many third-party vendors are working to address this challenge. Common feature sets among these vendors include the ability to continuously monitor and update as cloud instances get spun up and down (so you know what your attack surface actually is), as well as tracking the traffic patterns of S3 data to surface potentially problematic activity. Depending on the data set being used — whether it's logs, agents, or network traffic — cloud security professionals can get different perspectives and insights, while the addition of machine learning to many of these offerings is improving the accuracy of alerts.
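To make "tracking traffic patterns to surface problematic activity" concrete, here is a deliberately simplified Python sketch of one common approach: baselining each principal's daily S3 download volume and flagging large deviations. The principal names and byte counts are invented for illustration; real tools build baselines from logs or network traffic and use far more sophisticated models:

```python
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """Flag principals whose S3 download volume today exceeds their
    historical mean by more than `threshold` standard deviations."""
    flagged = []
    for principal, daily_bytes in history.items():
        mu, sigma = mean(daily_bytes), stdev(daily_bytes)
        observed = today.get(principal, 0)
        if sigma > 0 and (observed - mu) / sigma > threshold:
            flagged.append(principal)
    return flagged

# Hypothetical per-principal download baselines (bytes per day).
history = {
    "app-server-role": [5e8, 6e8, 5.5e8, 5.2e8, 5.8e8],
    "analyst-jane":    [1e7, 2e7, 1.5e7, 1.2e7, 1.8e7],
}
# Today's totals: the app server is normal; "analyst-jane" pulled
# roughly 60x her usual volume -- the signature of a bulk exfiltration.
today = {"app-server-role": 5.4e8, "analyst-jane": 9e8}

print(flag_anomalies(history, today))  # ['analyst-jane']
```

The point of the sketch is the shape of the problem, not the statistics: without a per-principal baseline, a 900 MB download is just another data point in the sea.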

In summary, S3 has proven to be a security minefield, but it doesn't have to be. Cloud security is an emerging field, presenting an opportunity for smart organizations to lead the way.


About the Author(s)

Eric Thomas

Director of Cloud, ExtraHop

Eric Thomas serves as director of cloud for IT analytics company ExtraHop. Prior to taking this role, Eric led the ExtraHop professional services team, and draws on over 20 years of experience in IT operations. Before joining ExtraHop, Eric performed a variety of operational roles, most recently as director of advanced engineering for Thomson Reuters, where he led a team of performance and availability specialists supporting over 200 applications representing $2 billion in annual revenue. His prior experience includes enterprise IT management, SaaS production operations, and next-generation technology advocacy.

