What is it about Amazon S3? While thousands of customers securely keep data on the large storage cloud, a disconcerting number of companies seem to lose the ability to configure storage correctly when faced with Amazon Web Services' buckets. The latest misconfiguration victim? Groupize, a meeting management and hotel-booking company based in Boston.
It seems that Groupize defined one of its S3 buckets as "public," which means that anyone who knew the address of the bucket (or figured it out through one of many different hacking techniques) could get access to all of the information stored therein. And what sort of information might they have found?
Contracts and business agreements at one embarrassing level. Credit card authorization forms with all sorts of private information, including names, credit card numbers, expiration dates, and CVV numbers on another. While not on the scale of some recent breaches that exposed millions of customer records, this is a stark reminder that, with enough work, system administrators can make even relatively safe systems vulnerable.
"With enough work" is a critical qualifier here because all S3 storage buckets are, by default, set to "private." It requires a human to specifically override that setting to expose data to the world. And yet, whether in the case of Groupize customers or the entire Chicago voter role there are people who are able to flip the wrong switch, throw open the doors to the data treasure, and invite everyone in for a look-see.
Encrypting everything would make the information in public buckets far less valuable to thieves and vandals. On the other hand, encryption is not without cost and complication, so many organizations make the decision to leave some data -- data that is presumed to be stored securely -- unencrypted and ready for processing.
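One low-cost middle ground is server-side encryption applied by default at the bucket level, so objects are encrypted at rest without application changes. As a minimal sketch (the bucket name is hypothetical; the dict shape matches what S3's put_bucket_encryption API expects for SSE-S3):

```python
def sse_s3_config():
    """Build the server-side encryption configuration for SSE-S3 (AES-256)."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    }

# Applying it requires boto3 and AWS credentials (not run here):
#   import boto3
#   boto3.client("s3").put_bucket_encryption(
#       Bucket="example-bucket",  # hypothetical bucket name
#       ServerSideEncryptionConfiguration=sse_s3_config(),
#   )

print(sse_s3_config()["Rules"][0]
      ["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])  # AES256
```

Note that default server-side encryption protects against stolen disks, not against a bucket deliberately made public: S3 transparently decrypts for anyone the ACL admits, which is why the access controls above still matter.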
Others will raise the flag for DevOps, with its process automation that removes as many steps as possible from the hands of humans. That, too, will help, but there are cultural, technological and financial costs that must be borne when an organization decides to move through agile into DevOps. Those costs are worthwhile to some organizations, excessive to others.
Perhaps the only certain lesson to be drawn is one that has been in place for decades, if not centuries: When data and processes are critical, then redundancy is a powerful tool in catching errors before they become problems. Two sets of eyes, two (or more) checks on a configuration list, two stages of verification and testing; each of these can play a role in defeating configuration errors. Ask any customer who has had to monitor their credit for identity theft whether the investment is worthwhile.
- Rackspace Strengthens Its Managed Security Story
- DevSecOps: Security in the Process
- Finding Tools for DevSecOps