First AWS, Now Microsoft Cloud; Who's Next?
Outages are inevitable, but how can we deal with them better?
So, there we have it. Within a window of a few weeks, the top two public cloud providers on the planet -- Amazon Web Services and Microsoft Cloud -- have had bodily seizures that sent the rest of us (mere cells in their ecosystem) into crazy orbits. Enough of the drama; let's get to the facts. In this age of information deluge, it would not be presumptuous to assume that the reader may have forgotten the specifics, so let's recap.
The Amazon Simple Storage Service (S3) had an outage on Tuesday, February 28. An authorized S3 team member who was using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. However, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. And the rest, as they say, is history!
Now let's turn to the Microsoft episode. On Tuesday, March 21, Outlook, Hotmail, OneDrive, Skype and Xbox Live were all significantly impacted, with trouble ranging from outright login failures to degraded service. True to form, Microsoft's response was to downplay the impact and provide little detail (by contrast, Amazon published a far more detailed post mortem). The official line: a subset of Azure customers may have experienced intermittent login failures while authenticating with their Microsoft accounts; engineers identified a recent deployment task as the potential root cause and rolled it back to mitigate the issue.
So, is this the death of the public cloud? Nah. Far from it. And anyone who says otherwise should have their head examined. But it should serve as a wake-up call to every IT, security and compliance professional in every industry. Why? Because this kind of "user error" or "deployment task snafu" can happen anywhere -- on-premises, on private cloud and on public cloud. And since every enterprise is deployed on one or more of the above, every enterprise is at risk. So enough of the fear-mongering. What can you actually do about it? Glad you asked.
There are really three vectors of control: scope, privileges and the governance model.
Scope is really the number of "objects" -- the blast radius of what each admin (or script) is authorized to work on at any given time. Using the Microsoft Cloud example (I realize I am extrapolating, since Microsoft has not provided details), this might be the number of containers a deployment task can operate on at any one time.
Privileges means controlling what an administrator or task is allowed to do to those objects. Continuing with the container example from above, the privilege restriction could be that a container can be launched but not destroyed. The sketch below shows what a gate enforcing both scope and privileges might look like.
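To make that concrete, here is a minimal sketch of such a gate in Python. Every name in it (DeploymentRequest, ALLOWED_ACTIONS, check_request, the limit of 100) is invented for illustration; this is not any cloud provider's actual API, just one plausible way to cap both the blast radius and the allowed actions of a single task.

```python
# Hypothetical gate in front of a deployment task: caps the number of
# objects touched (scope) and the actions permitted (privileges).

MAX_CONTAINERS = 100                     # scope: blast radius per command
ALLOWED_ACTIONS = {"launch", "restart"}  # privileges: "destroy" is not granted


class DeploymentRequest:
    def __init__(self, admin, action, container_ids):
        self.admin = admin
        self.action = action
        self.container_ids = container_ids


def check_request(request):
    """Return (allowed, reason) for a single deployment request."""
    # Scope check: one command may only touch a bounded set of objects.
    if len(request.container_ids) > MAX_CONTAINERS:
        return False, f"scope exceeded: {len(request.container_ids)} > {MAX_CONTAINERS}"
    # Privilege check: only explicitly granted actions are permitted.
    if request.action not in ALLOWED_ACTIONS:
        return False, f"action '{request.action}' not granted to {request.admin}"
    return True, "ok"


if __name__ == "__main__":
    req = DeploymentRequest("ops-admin", "destroy", [f"c{i}" for i in range(5)])
    print(check_request(req))  # (False, "action 'destroy' not granted to ops-admin")
```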
And finally, you need a governance model. This is really the implementation of best practices and a well-defined policy for enforcing the two functions above -- scope limits and privilege restrictions -- in a self-driven fashion. In this example, the policy could be that the number of containers an admin can operate on stays under 100 (scope) and that any increase in that number automatically triggers a pre-defined approval process (control). Further sophistication can be built in: the human approver could just as easily be a bot that checks the type of container and the load on the system and approves (or denies) the request, as sketched below. Bottom line: checks and balances.
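Here is one way that self-driven policy might look, again only a sketch under my own assumptions: the threshold of 100 comes from the example above, while the "low-risk" container types, the load ceiling and the approval bot itself are invented for illustration.

```python
# Hypothetical governance policy: scope increases above the policy limit
# must be approved, and the approver can be a bot that looks at the
# container type and the current system load.

SCOPE_LIMIT = 100          # policy: an admin operates on fewer than 100 containers
LOW_RISK_TYPES = {"stateless-web", "batch-worker"}
MAX_SYSTEM_LOAD = 0.75     # refuse automatic approval when the system is busy


def approval_bot(requested_count, container_type, system_load):
    """Automated approver: checks the container type and current load."""
    if container_type not in LOW_RISK_TYPES:
        return False, "high-risk container type, escalate to a human"
    if system_load > MAX_SYSTEM_LOAD:
        return False, f"system load {system_load:.2f} too high for an automatic bump"
    return True, f"bot approved an increase to {requested_count} containers"


def enforce_policy(requested_count, container_type, system_load):
    """Allow requests under the limit; route larger ones through approval."""
    if requested_count < SCOPE_LIMIT:
        return True, "within policy"
    return approval_bot(requested_count, container_type, system_load)


if __name__ == "__main__":
    print(enforce_policy(40, "stateless-web", 0.30))   # within policy
    print(enforce_policy(250, "stateless-web", 0.30))  # bot approves the increase
    print(enforce_policy(250, "database", 0.90))       # escalated to a human
```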
So there you have it. The two largest public clouds have suffered embarrassing outages in the space of a month. They will recover, get stronger and most likely have future outages as well. The question for the rest of us is what we learn from their experience and how we make our own environments -- in our data centers and on private and public clouds -- better. If we don't, we may not be lucky enough to live to fight another day.
— Ashwin Krishnan, SVP, Products & Strategy, HyTrust