Yesterday readwrite.com featured an article by Mike Pav titled “Storm Warning: Why 100% Cloud Uptime Is Impossible”, and I thought it was such a piece of misinformation that I decided to write this blog post to clarify a few things, as I’m really fed up with hearing about the unreliability of “the cloud” in such general terms.
Titles are usually provocative and I won’t judge this one’s accuracy. However, there is no such thing as “Cloud Uptime”: although the cloud is often treated as a whole, it is made of thousands of components, and not all of them go down at once. Rather, the outage of a cloud service tends to be bigger the more those components are interdependent. Let me explain this in more detail.
Cloud Outages
The article says “cloud outages” are eventually inevitable because doing better than 99.99% availability would cost too much, and that companies like Netflix (which suffered its cloud provider’s outage right on Christmas Eve) will keep using the cloud anyway because “it does a great job of providing ready-to-use features”. In other words, it claims that using the cloud requires a compromise that companies with multi-million-dollar businesses are ready to accept: losing money from time to time in exchange for the flexibility of the cloud. Well, I refuse to believe that.
First off, cloud providers do things differently and we can’t generalize, so let’s narrow the discussion down to AWS, the cloud provider the article mainly refers to. AWS is primarily an IaaS provider with some service components operating at the PaaS layer, such as ELB (Elastic Load Balancing). In this context there is no such thing as a “cloud outage”; there is the outage of a cloud component that your application relies on and that your application has not been designed to handle when it fails.
When working at the PaaS layer your freedom is limited. On the one hand, you don’t have to worry about how things work underneath, because the provider does everything for you; on the other hand, you also have to rely on the provider when it comes to availability and SLA. Netflix relied on ELB, and its application had no way to handle the failure other than waiting for AWS to fix the problem.
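To make this concrete, here is a minimal sketch of what “designing the application to handle a failure” can look like: a client that tries a second, independent entry point when the first one stops answering. The endpoint names are hypothetical and this is obviously not Netflix’s actual code, just an illustration of the idea.

```python
import urllib.error
import urllib.request

# Hypothetical entry points: the provider-managed load balancer first,
# then a self-managed fallback in another availability zone.
ENDPOINTS = [
    "https://managed-lb.example.com",
    "https://fallback-lb.example.com",
]

def fetch(path, timeout=2.0):
    """Try each entry point in order and return the first successful response."""
    last_error = None
    for base in ENDPOINTS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this entry point is down, try the next one
    raise RuntimeError("all entry points failed") from last_error

# The caller never notices that one load balancer is unreachable:
# body = fetch("/api/movies")
```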
So how could Netflix prevent such things from now on? As others have also said, they could build their own load-balancing service by operating at the IaaS layer. In that case, they would have the freedom, and the responsibility, to set up multiple load balancers in different availability zones or even different data centers, making their application more resilient to any infrastructure outage.
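As a rough illustration of the operational side, the sketch below monitors two hypothetical self-managed load balancers in different availability zones and repoints traffic when the active one stops responding. The zone names, URLs, and the DNS update are placeholders; a real setup would call the DNS or IaaS provider’s API and would need far more care around flapping and partial failures.

```python
import time
import urllib.error
import urllib.request

# Hypothetical self-managed load balancers, one per availability zone.
LOAD_BALANCERS = {
    "zone-a": "https://lb-a.example.com",
    "zone-b": "https://lb-b.example.com",
}

def is_healthy(url, timeout=2.0):
    """Consider an LB healthy if its health-check endpoint answers 200."""
    try:
        with urllib.request.urlopen(url + "/healthz", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def point_dns_at(zone):
    """Placeholder: repoint the public DNS record to the LB in this zone.
    A real setup would call the DNS provider's API here."""
    print("app.example.com now points at " + LOAD_BALANCERS[zone])

def failover_loop(active="zone-a", interval=10):
    """Keep traffic on the active zone; fail over when its LB stops answering."""
    while True:
        if not is_healthy(LOAD_BALANCERS[active]):
            for zone, url in LOAD_BALANCERS.items():
                if zone != active and is_healthy(url):
                    active = zone
                    point_dns_at(active)
                    break
        time.sleep(interval)
```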
The responsibility of a PaaS provider
Later, the article goes through a list of PaaS provider duties in case of an outage. On a second read I realized that the term PaaS was misused: the author was actually referring to a generic provider offering any kind of service through the cloud.
However, this gives me the chance to say that a real PaaS provider should never, ever suffer from an underlying infrastructure outage. The PaaS software should be the very best example of a highly available, resilient application, architected to exploit most of the isolation and redundancy mechanisms made available by the underlying IaaS. After all, a PaaS provider mostly employs DevOps engineers who master cloud automation tools and best practices and who know how to make an application resilient.
Moreover, PaaS is not about elasticity or scalability, as the article says: those two come from the underlying IaaS. It’s the infrastructure that scales, that grows and shrinks fast. PaaS is all about automation: automated deployment, auto-scaling, automated failover and recovery on infrastructure failures.
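The heart of that automation is usually a reconciliation loop: compare the desired state with what is actually healthy and repair the difference. The sketch below assumes a hypothetical IaaS client with only two calls; any real provider SDK exposes equivalent operations under different names.

```python
# Hypothetical IaaS client: only list_instances() and launch_instance()
# are assumed here; a real provider SDK would offer equivalent calls.
class IaaSClient:
    def list_instances(self, tag):
        """Return instances carrying the given tag, each with a health flag."""
        raise NotImplementedError

    def launch_instance(self, tag, zone):
        """Start a fresh instance in the given availability zone."""
        raise NotImplementedError

def reconcile(iaas, tag="web", desired=4, zones=("zone-a", "zone-b")):
    """Keep the healthy instance count at the desired level, spreading
    replacements across zones so a single zone outage never takes all of them."""
    healthy = [i for i in iaas.list_instances(tag) if i["healthy"]]
    missing = desired - len(healthy)
    for n in range(missing):
        iaas.launch_instance(tag, zones[n % len(zones)])

# A PaaS control plane would run something like this continuously:
# while True:
#     reconcile(my_iaas_client)
#     time.sleep(30)
```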
What cloud uptime is about
In conclusion, more than 99.99% uptime is actually possible, and there are examples of it: Joyent, for one, has delivered 99.9999% uptime over the last two years. So how do you build a more reliable cloud? By architecting the infrastructure with the smallest possible number of interdependent components. A cloud infrastructure made of distributed, replicated micro-components can deliver scalability and reliability while limiting the impact of any single outage, preserving the overall SLA.
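A bit of back-of-the-envelope math shows why replicating small, independent components pays off, assuming their failures really are independent (which real infrastructure only approximates):

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability):
    """Expected downtime per year at a given availability level."""
    return (1 - availability) * MINUTES_PER_YEAR

def replicated(availability, n):
    """Availability of n independent replicas where any single one is enough."""
    return 1 - (1 - availability) ** n

print(round(downtime_minutes(0.9999), 1))    # ~52.6 minutes per year at 99.99%
print(round(downtime_minutes(0.999999), 2))  # ~0.53 minutes per year at 99.9999%
print(round(replicated(0.999, 2), 6))        # two modest 99.9% replicas: 0.999999
```

The exact figures matter less than the direction: independent replicas buy orders of magnitude of availability, while one big interdependent component caps it.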
Two things to keep in mind for the best uptime of your application in the cloud:
- Choose an IaaS provider with an architecture designed to limit the impact of outages. If this sounds too theoretical, think of EBS (AWS Elastic Block Store), which is a centralized macro-component highly dependent on the network.
- Keep the freedom to build your own resilient application at the IaaS layer and, if you decide to go PaaS, pick a provider whose refund policy in case of an outage is significant enough for your business.
And in the end, Netflix will keep using the cloud, because they learnt from this experience and they know that mastering cloud best practices can save them from the next (indeed inevitable) infrastructure outage.
This is so nice to read after the article from Readwrite.com. Cloud computing is often discussed very generically, without going into detail, and so confusion sometimes builds up.
We at Egocentrix (http://www.egocentrix.com) feel the cloud is a vast and serious phenomenon; there are many components and technologies involved, so questioning cloud reliability so generically (based on a few controversial events) isn't right.
Thanks for the clarifications here, much appreciated.
Don't you work for Joyent? The very company that you tout as being so great at uptime? I think that should have been revealed in your blog post. No matter if you are right or wrong, hiding the fact that you have a financial interest in convincing the readers that you are right is problematic.