Amazon Says Cloud Beats Data Center Security

Security is a shared responsibility between the cloud provider and its customers, says Amazon Web Services security architect.
Is your data secure in the leading public cloud? Steve Riley, Amazon Web Services security architect, responds in no uncertain terms: it's more secure there than in your data center.
In a talk at the Cloud Computing Conference and Expo 2010, he cited several ways Amazon's Elastic Compute Cloud (EC2) operation protects users' data and applications. At the same time, he conceded that security in the cloud is "a shared responsibility." One of his favorite expressions is that Amazon provides a secure infrastructure from "the concrete (of the data center floor) to the hypervisor." But an EC2 customer has to write a secure application, transport it to the cloud, and operate it in a secure fashion there.
AWS is working on an Internet Protocol Security (IPsec) tunnel connection between EC2 and a customer's data center, giving customers a direct, managed network connection to their EC2 virtual machines. Such a connection would allow customers to operate their servers in EC2 as if they were servers on the corporate network. And that allows EC2 customers "to imagine the day when they will have no more internal IT infrastructure," Riley asserted, trying to elicit a little shock and awe from his stolid listeners.
Riley addressed the Santa Clara, Calif., event in two separate sessions Tuesday, then gave a third talk Wednesday at a satellite RightScale user group meeting. In each session, he emphasized the security that Amazon Web Services currently supplies, plus what it will do in the near future to beef up its Virtual Private Cloud service. With a direct, frank style, the lanky, shaggy security guru said: "It's my job to help people get more comfortable with what makes people squirm."
For long-term storage, your data may be more secure in the cloud than in your data center because AWS' S3 storage service uses a standard cloud data preservation tactic, also employed by big-data systems such as Hadoop: it creates "multiple copies" of a data set, anticipating that a server or disk containing the data may fail. If one does, a new copy is automatically generated from a replicated set, and a new primary copy is designated from among the surviving copies, of which there are always at least three.
Cloud design assumes disk and server hardware failures will occur and works around such failures, he said. It also stores the copies across two availability zones in the EC2 cloud, meaning a complete power outage or catastrophe could take out one of its data centers and the data set would be reconstituted from another. Although Riley didn't say so, it appears that data loss could occur only if the hardware holding all copies of a particular data set, in disparate locations, failed simultaneously.
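The replicate-and-repair cycle described above can be sketched in a few lines. This is a toy model, not Amazon's actual implementation; the zone names and the immediate-repair assumption are illustrative.

```python
import random

REPLICA_COUNT = 3  # "at least three copies," per the description above

class ReplicatedObject:
    """Toy model of one stored object and the locations holding its copies."""

    def __init__(self, zones):
        # Spread the initial copies across distinct locations.
        self.copies = set(random.sample(zones, REPLICA_COUNT))

    def fail_zone(self, zone, zones):
        """Simulate losing the copy in `zone`, then re-replicate immediately.

        Returns True if the object survived (at least one copy remained).
        """
        if zone not in self.copies:
            return True  # no copy stored there; nothing lost
        self.copies.discard(zone)
        if not self.copies:
            return False  # every copy gone before repair: data loss
        # Regenerate a copy in a healthy location from a surviving replica.
        healthy = [z for z in zones if z != zone and z not in self.copies]
        self.copies.add(random.choice(healthy))
        return True

# One object spread over four hypothetical zones; a single failure
# triggers repair, and the replica count recovers on its own.
zones = ["zone-a", "zone-b", "zone-c", "zone-d"]
obj = ReplicatedObject(zones)
obj.fail_zone("zone-a", zones)
assert len(obj.copies) == REPLICA_COUNT
```

Because repair happens as soon as a failure is detected, data is lost only when all copies fail inside the same narrow repair window, which is what makes simultaneous multi-site failure the only loss scenario.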
Through this approach, S3 has stored 180 billion objects for EC2 customers over four years and "hasn't lost one of them," said Riley. It's achieved 99.999999999% (or eleven nines of) data durability, Riley claimed.
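The eleven-nines figure falls out of simple independence arithmetic: if each copy must be lost in the same short window before repair, the per-object loss probability is the per-replica probability cubed. The sketch below uses a made-up per-replica figure (not an AWS number) to show how three copies multiply into many nines:

```python
import math

# Hypothetical probability that one replica is lost during the short
# window before the system re-replicates it (illustrative only).
p_replica = 2e-4

# With three independent copies, all must be lost in the same window.
p_object_loss = p_replica ** 3

nines = math.floor(-math.log10(p_object_loss))
print(f"per-object loss probability: {p_object_loss:.1e}")  # 8.0e-12
print(f"nines of durability: {nines}")                      # 11
```

The independence assumption is the whole game, which is why the copies are spread across separate availability zones rather than kept on one rack.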