Why developers won’t go straight to the source

I’m so excited. Last Wednesday Flexiant announced the acquisition of the Tapp technology platform and business. I met the guys behind it quite a while ago and I have never refrained from remarking how great their technology is (see here). I recognized a trend in their way of addressing the cloud management problem, and I’m so glad to be part of it right now.

Disclaimer: I currently work for Flexiant as Vice President Products. I endorsed this acquisition and I am fully behind its rationale and convinced of its potential. This is my personal blog; nothing you read here has been agreed with my employer in advance, so it represents my personal opinion alone.

Right after the acquisition (read more about it here) there was tremendous noise on social networks and in the press. David Meyer (@superglaze) of GigaOm in particular wrote up a few interesting comments and captured the reasoning behind the deal well, but he ended his article with an open question:

“This [the Tapp technology platform] would help such players [Service Providers] appeal to certain developers that are currently just heading straight for EC2 or Google.
 
Of course, this is ultimately the challenge for the likes of Flexiant – can anything stop those developers going straight to the source? That question remains unanswered.”

Well, I’d like to answer that question and explain why I’m convinced multi-cloud managers still have a lot of value to add.

Much has been written these days about the business side of the acquisition and I don’t have anything meaningful to add there. Instead, I would like to raise a few interesting points from a technology point of view (that’s my job, after all) and highlight some value that is perhaps not so obvious at first sight.

Multi-cloud management

Multi-cloud management covers a very broad spectrum. Some multi-cloud managers focus on brokerage, that is, primarily on getting you the best deal out there. While this is one valid way to provide “multi-cloud” value, I still wonder how they manage to compare apples with oranges. Cloud infrastructure service offerings are so different and heterogeneous that being simply a cloud broker makes it extremely difficult to succeed, deliver real value and differentiate. So, point number one: Tapp isn’t a cloud brokerage technology platform.

Other multi-cloud managers deliver value by adding a management layer on top of existing cloud infrastructures. This management layer may be focused on specific verticals like scaling Internet applications (e.g. Rightscale) or providing enterprise governance (e.g. Enstratius, now Dell Multicloud Manager). By choosing a vertical, they can address specific requirements, cut away the unnecessary parts of the general-purpose cloud provider and enhance the user experience for very specific use cases. That’s a fair approach, but it’s still not what Tapp is all about.

So why, when using Tapp, won’t developers “go straight to the source”? Well, first of all, let’s be clear that developers are already at the source. To use any multi-cloud manager you need an AWS account or a Rackspace account (or an account with any other supported provider). You need to configure your API keys to enable communication with the cloud provider of your choice. So if someone is using your multi-cloud manager, it means they prefer it over the management layer provided by “the source”.

The cloud provider lock-in

One of the reasons behind Amazon’s success is the large portfolio of services they have rolled out. They are all services that end users can put together to build applications, letting developers focus just on their core business logic, without worrying too much about queuing, notifications, load balancing, scaling or monitoring. However, whenever you use one of these tools, like ELB, Route53, CloudWatch or DynamoDB, you are locking yourself into Amazon. The more you rely on multi-tenant proprietary services that exist only on one provider, the harder it becomes to migrate your application away.

You may claim to be “happy” to be locked into a vendor that is solving your problems so well, but there are a lot of good reasons (“Why Cloud Lock-in is a Bad Idea”) to avoid vendor lock-in on principle. In many cases, avoiding it is one of the first requirements of the very enterprises everyone is trying to attract to the cloud.

Deploying the complete application toolkit

Imagine there were a way to replicate those services on another cloud provider by building them from the ground up on top of plain virtual servers. Imagine this could be done by a management layer, on demand, on the cloud infrastructure of your choice. Imagine you could consume and control those services always through the same API. That would let your application be deployed consistently across multiple clouds, relying exclusively on the ability to spin up virtual servers, something every cloud infrastructure provider offers.

This is what Tapp is about. And the advantages of doing so are far from trivial. They include:

1. Independence, consistency and compatibility

This is the obvious one. For instance, a user can click a button to deploy an application on Rackspace and another button to deploy a DNS manager and a load balancer. These two expose an API that is directly integrated into the control panel and therefore consumable as-a-service. The exact same thing can be done on Amazon, Azure, Joyent or any other supported provider, obtaining the exact same result. Cloud providers suddenly become compatible.
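To make the compatibility idea concrete, here is a minimal sketch of provider-agnostic provisioning using Apache Libcloud. To be clear, this is not Tapp’s API; it is just an illustration of the underlying principle, and the credentials, image IDs and size IDs are placeholders you would substitute with your own.

```python
# Sketch only: create the same vanilla server on two different providers
# through one API. Credentials, image IDs and size IDs are placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

def spin_up(provider, name, image_id, size_id, **credentials):
    """Boot a node on whichever provider is passed in."""
    driver = get_driver(provider)(**credentials)
    image = next(i for i in driver.list_images() if i.id == image_id)
    size = next(s for s in driver.list_sizes() if s.id == size_id)
    return driver.create_node(name=name, image=image, size=size)

# The calling code is identical; only the driver and its credentials change.
node_a = spin_up(Provider.EC2, "web-1", "ami-xxxxxxxx", "t1.micro",
                 key="ACCESS_KEY", secret="SECRET_KEY", region="us-east-1")
node_b = spin_up(Provider.RACKSPACE, "web-1", "ubuntu-image-id", "2",
                 key="username", secret="api-key", region="lon")
```

A multi-cloud manager essentially industrialises this pattern: one control panel and one API in front of many such drivers.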

2. Extra geographical reach

Let’s say you like Joyent but you want to deploy your application closer to a part of your user base that lives where Joyent doesn’t have a data center. But look, Amazon has one there and, even though you don’t like its pricing, you’re ready to make an exception to gain a latency advantage for that part of your user base. If your application uses some of Joyent’s proprietary tools, it would be extremely difficult to replicate it on Amazon. But if you could deploy the whole toolkit using just a few EC2 instances, it all becomes possible.

3. Software-as-a-(single)-tenant

Multi-tenancy has long been considered a key tenet of cloud computing, but I’ve started to believe that as long as an end user can consume an application as-a-service, it hardly matters whether it’s multi-tenant or single-tenant.

If you can deploy a database in a few clicks and get your connection string as a result, does it really matter whether that database also hosts other customers? Actually, single-tenancy would become the preferred option1, as the user would no longer have to worry about isolation from other customers, noisy neighbors, and so on. Tony Lucas (@tonylucas) wrote about this before on the Flexiant blog and I think he’s spot on: there is a “third” way, and that’s what I think is going mainstream.

The Tapp way

The Tapp technology platform was built to provide exactly that: a large set of application-centric tools, features and functions2 that can be deployed across multiple clouds and consumed as-a-service.

Of course it’s not just about the tools. It’s also about the application core, whatever it is. The Tapp technology solves that consistency problem too, by pushing application deployment and configuration into Chef recipes rather than into cloud provider-specific OS images or templates3. Every time you run those recipes you get the same result, on any cloud provider. To deploy your application you just need the availability of vanilla OS images, like Ubuntu 14.04 or Windows 2012 R2, which honestly every cloud provider offers.
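As a rough sketch of that pattern (the general idea, not Tapp’s actual implementation), the script below takes any freshly booted vanilla Ubuntu node, installs Chef on it and converges it with a local cookbook repository over SSH. The host address, key path, installer URL and run list are assumptions to be replaced with your own.

```python
# Sketch only: the same Chef run produces the same stack on any provider's
# vanilla Ubuntu image. Host, key path and file names are placeholders.
import subprocess

HOST = "ubuntu@203.0.113.10"      # a vanilla Ubuntu node, on any cloud
KEY = "/home/me/.ssh/id_rsa"      # placeholder SSH key

def remote(command):
    """Run a command on the node and fail loudly if it breaks."""
    subprocess.run(["ssh", "-i", KEY, HOST, command], check=True)

# 1. Install Chef on the clean OS image (omnibus installer).
remote("curl -L https://omnitruck.chef.io/install.sh | sudo bash")

# 2. Ship cookbooks, solo.rb and the node's run list to the machine.
remote("mkdir -p /tmp/chef")
subprocess.run(["scp", "-i", KEY, "-r", "cookbooks", "solo.rb", "node.json",
                f"{HOST}:/tmp/chef"], check=True)

# 3. Converge: identical recipes, identical result, whatever the provider.
remote("sudo chef-solo -c /tmp/chef/solo.rb -j /tmp/chef/node.json")
```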

Until now, end users who wanted to deploy applications without feeling locked into a specific provider had only one way of doing it: DIY. They had to maintain and operate OS images, load balancers, DNS servers, monitoring, auto-scalers, and so on. That’s a burden that, most of the time, they’re not willing to take on. They don’t want to spend time deploying services that end up being the same everywhere, every time. Tapp takes that burden away from them. It deploys applications and service toolkits in an automated fashion and gives users just the API to control them. And this API stays consistent regardless of the chosen cloud provider. This is the key value that, I believe, will keep developers from going straight to the source.

1. Multi-tenancy would be the preferred option for the service provider, because it translates into economies of scale. However, economies of scale often lead to cost optimisation and end-user price reductions, so they can be considered an indirect advantage for end customers as well.

2. Tapp features include: application blueprinting with Chef, geo-DNS management and load balancing, network load balancing, auto-scaling based on application performance, application monitoring, object storage and FDN (file delivery network).

3. It is worth mentioning that pushing application deployment into configuration management tools like Chef or Puppet significantly affects deployment time. That’s why it’s strongly advisable to find the right balance between what is baked into the OS image and what is left to the configuration management tool.

My thoughts on the Inktank acquisition

Following today’s news of the acquisition of Inktank by Red Hat, lots of people, me included, started to wonder how this might conflict with the Gluster acquisition Red Hat itself completed nearly two years ago. Twitter has been buzzing about it all day and, as one would imagine, the two companies tried to explain how Ceph and GlusterFS are actually complementary. I’m not a GlusterFS expert, but I recently attended an excellent Ceph training course and I have some doubts about what has been said. Since I see plenty of overlap between the two, the truth could simply be that GlusterFS failed to gain traction in cloud infrastructure implementations.

First off, I’d like to congratulate all the folks at Inktank on this awesome result. And not only because they’re partnering with Flexiant, the company I work for, or because some of them are former colleagues of mine, but because they’ve successfully demonstrated that innovation can still be delivered in areas like IT infrastructure that are typically considered heavily commoditised.

A blog post by Jeff Darcy (@Obdurodon) published today went through a comparison of the architectures and approaches, and how the two projects could actually borrow from each other. But the clearest statement of their positioning came on Twitter:

So Red Hat is going to promote GlusterFS for POSIX-compliant distributed file systems and Ceph for distributed object and block storage? Really? What if a company wants a combination of the three? Is it supposed to roll out two separate physical infrastructures with incompatible architectures? What about the famous TCO optimisation?

If you think about it, a file system, a block store and an object store are simply three different interfaces for consuming the same type of resource: data storage. For this reason, I believe Ceph’s approach of a unified layer underneath (RADOS) with different interface gateways on top (file system, block and S3-compatible object store) is exactly the right way to provide a truly scalable, resilient storage system that can be consumed through different APIs. It’s also true that CephFS is not yet a mature technology, but if I had to pick one project to concentrate my storage development investments on, I’d pick Ceph.
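To show what “one layer underneath, several gateways on top” looks like in practice, here is a minimal sketch using Ceph’s Python bindings: the same RADOS pool is consumed once through the object interface (librados) and once through the block interface (librbd); CephFS would be the third gateway. It assumes a reachable cluster, the default config path and a pool called “data”.

```python
# Sketch only: one RADOS cluster, two different consumption interfaces.
# Assumes a running Ceph cluster, /etc/ceph/ceph.conf and a pool named "data".
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("data")

# Object interface: store and fetch a blob directly in RADOS.
ioctx.write_full("greeting", b"hello from the object gateway")
print(ioctx.read("greeting"))

# Block interface: carve a 1 GiB RBD image out of the very same pool.
rbd.RBD().create(ioctx, "vm-disk-0", 1024 ** 3)
image = rbd.Image(ioctx, "vm-disk-0")
image.write(b"hello from the block gateway", 0)
image.close()

ioctx.close()
cluster.shutdown()
```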

When Red Hat acquired Gluster two years ago, the company was still betting its entire cloud orchestration strategy on CloudForms (based on Deltacloud), but then OpenStack became so popular that Red Hat, the leader in all things open source, had to commit to it. Today quite a few OpenStack installations use Ceph as their storage backend, probably more than use GlusterFS, and I believe this is the real reason behind the acquisition.

That said, I find this an extremely smart move, given that Red Hat now has to compete against the hundreds of companies providing OpenStack support and consulting services. Owning the technology behind one of the most popular storage implementations could be a real way to differentiate from the competition. However, I still wonder whether this is enough to stand out from the OpenStack consulting crowd. Of course Red Hat is a rock-solid company with a brand, history and reputation that scare off most competitors, but that counts for little if it is not coupled with a fast pace of technology innovation.

Who’s the clever one? The cloud or the application?

At the recent Structure:Europe I got into a discussion about whether the “workload” should “care” about the cloud or not. It’s a great topic to debate, and I decided to write down a few more thoughts here. But let me recall the conversation first.

The trigger was a remark by @ditlev, CEO of OnApp, which we also heard during his session with @tonylucas on the second day of Structure, and which I commented on Twitter with:

Among others, @khushil, an engineer at Mail Online, did not agree with me, replying: “wrong way around, the cloud needs to understand the workload to scale to support. other way misses the point”.

Ditlev obviously picked this up and explained further that “workloads should be agnostic, your infrastructure (incl cloud)/platform should adapt” and that he would “like to see abstraction layers between workloads and infrastructure”.

What’s the source of the workload?

One might be tempted to agree in principle with everyone, as every tweet seems reasonable, but I think we first need to make some assumptions upfront, starting with the definition of workload:

The amount of work performed by an entity in a given period of time, or the average amount of work handled by an entity at a particular instant of time. The amount of work handled by an entity gives an estimate of the efficiency and performance of that entity. In computer science, this term refers to computer systems’ ability to handle and process work.

In computer science we have several abstraction layers, and the same is true of cloud computing (more on this in one of my previous posts here). In such a scenario, however, there are also many “entities” performing work at those different layers. So which entity’s workload were we talking about? Given what our companies do, I bet we were referring to cloud infrastructures handling and processing work generated by the applications running on top of them.

With the context clarified, let me explain why applications should instead care about the cloud, without expecting any magic to happen down below.

The new era of IT infrastructures

We are witnessing a tremendous change in how IT infrastructures fundamentally work. Until the advent of cloud computing, the general approach taken by IT professionals was to manually provision a specific footprint of servers, CPUs, memory, storage and network devices. That footprint was usually over-sized to accommodate predictable workload growth over time. Applications were designed to abstract away from the infrastructure: they simply demanded more CPU cycles or IOPS whenever they wanted, regardless of actual availability. The result of this approach was a tremendous increase in the total cost of ownership of IT departments. Hardware had to be reliable, fast and able to absorb workload peaks without any performance loss, and all of that came at a price.

Two main drivers disrupted this model and triggered a drastic change. The first is mobile computing: demand for Internet services suddenly became ubiquitous, generating unpredictable workloads from anywhere in the world, at any time of day. The second is the growing availability of vast quantities of data, user- and machine-generated Big Data, that needs to be stored and analyzed.

To accommodate these scenarios, IT infrastructures had to become completely software-driven, highly elastic and extremely scalable. With cloud infrastructures, it is now possible to provision an infrastructure footprint with a few mouse clicks or a few lines of code. The size of the infrastructure can be adapted to the workload at any given moment; there is no more need to over-provision, since resources can grow and shrink through a few simple automated operations.

And with software-consumable infrastructure available, applications are changing their approach too, becoming much more infrastructure-aware. When resources run short, applications can request more via API calls, growing the infrastructure footprint as required. In the same way, they can now handle faults themselves, making expensive highly available infrastructure completely unnecessary! (I blogged about this before here.)

Think application!

All of this is to explain that no, the cloud (infrastructure) does not have to understand the workload and does not have to adapt to it automatically. Even if that were theoretically possible, the infrastructure lacks the right metrics to recognise a real need for more resources. Instead, it is the application itself that actively adapts its own infrastructure, because only the application knows how the user experience is going, which is the only metric that should be considered when measuring application performance.

Are you currently auto-scaling your infrastructure based on CPU utilisation? When that happens, are you sure it corresponds to a real improvement in user experience? Or simply to a higher bill from your cloud provider?
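To illustrate the difference, here is a toy sketch of an application-driven scaler that reacts to the metric users actually feel, the response time, and only then calls the infrastructure API. The threshold and the helper functions (measure_p95_latency_ms, add_app_server, remove_app_server) are hypothetical; they stand in for your monitoring system and your cloud provider’s API.

```python
# Sketch only: scale on what the user experiences, not on what the hypervisor sees.
# The measurement and provisioning helpers are hypothetical callables.
import time

TARGET_P95_MS = 300                  # assumed target for 95th-percentile response time
SCALE_OUT_AT = 1.2 * TARGET_P95_MS
SCALE_IN_AT = 0.5 * TARGET_P95_MS

def autoscale_loop(measure_p95_latency_ms, add_app_server, remove_app_server,
                   servers=2, min_servers=2, interval_s=60):
    while True:
        p95 = measure_p95_latency_ms()
        if p95 > SCALE_OUT_AT:
            add_app_server()                      # API call to the IaaS provider
            servers += 1
        elif p95 < SCALE_IN_AT and servers > min_servers:
            remove_app_server()
            servers -= 1
        # CPU utilisation never appears here: a busy CPU that does not hurt
        # latency costs users nothing, while paying for idle headroom costs you.
        time.sleep(interval_s)
```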

And what if your VM goes down? Will you blame your cloud provider and then blog about its ridiculous SLA penalty fees, which never correspond to your real business loss? Isn’t it more effective to make sure you detect that the VM is down and that you (your application, I mean) take the necessary steps to fail over elsewhere?

With a clever application, the infrastructure can be seen simply as a toolbox, and you need to know how to use those tools to build highly available, auto-scaling applications. Don’t expect your screwdriver to build the nursery for your newborn on your behalf.

Or more simply, as @AmberCoster of AppDynamics said to me in another Twitter conversation: “Think application, not just infrastructure!”.

Why the developer cloud will be the only one

It has been a while since my last post. My new engagement with Flexiant is keeping me so busy with customers that I have plenty to think about but not enough time to write it down. Customers are indeed one of the most interesting sources for people like me who are always trying to identify common patterns in behaviour and decisions (the “technology trends” of this blog’s previous title).

Yesterday, James Urquhart (@jamesurquhart) wrote about “traditional IT buyers” versus “developers” consuming cloud services, with reference to the different strategic approach taken by VMware in its vCloud Hybrid Service proposition. The mere fact that we have to use the word “traditional” when speaking about one of the most innovative sectors of our industry says it all about which of the two approaches will win.

Indeed, I concur with James when he says that VMware’s strategy will be successful over the next few years but will likely fail in the long term. Building on his brilliant observations, I’d like to add something for all those companies trying (or wanting) to capture some cloud computing business by chasing enterprise IT departments.

The IT infrastructure demand

First off, IT departments don’t want to move to the cloud. This is well known and has been said often enough, and it is mainly due to their self-preservation instinct. But they also can’t resist this revolution. So they interpret the cloud opportunity merely as a shift in the location of their data centre, demanding the exact toolset they’ve been using in their on-premises deployments. On the other side, service providers try to listen to those customers and give them what they’re asking for. Usually, nothing comes of it because the offer rarely meets the demand, mainly due to the IT crowd’s actual unwillingness to give up data centre control. And even if all their requirements seem to reveal a real market opportunity, I believe they’re actually driving a false demand.

Let’s take a look at the chart below. It’s far from accurate, since it’s not based on real numbers; the lines are straight because I simply wanted to visualise the trend of what I think is happening.

Of the overall demand for IT infrastructure, the portion represented by the green area is generated by IT professionals, while the blue one is generated by developers or applications (yes, don’t forget that applications are now capable of driving IT infrastructure demand all by themselves, based on workload triggers).

Today we’re at the point where those two areas overlap, meaning that while there is still infrastructure demand from IT departments, it is on a declining path. According to James, VMware may well win the biggest slice of that declining green area but will fall short because the green area is going to disappear. Most service providers evolving out of their colo/hosting business really do seem to chase the same green demand, fighting over a shrinking market, while others are making billions thanks to developers and their applications. And we hear traditional (ah, that word again) service providers dismissing that type of demand simply as “test and dev”?

Test and dev or software-consumable infrastructure?

We really need to move away from thinking that the Amazon cloud is good for test and dev but not for production. So many times I’ve heard providers object that Amazon is just for developers because it lacks built-in HA in the infrastructure. There’s no need to recount how many transaction-sensitive companies are making billions on that cloud.

People who say that are missing the point. Developers like and use the Amazon cloud because it is software-consumable: their applications can spin up a server to handle extra capacity in the same way they handle their core business logic. They can also cope with infrastructure failure by replicating data stores, perhaps across different geographic sites, and managing failover in case of an outage of the underlying infrastructure. This is the real potential of IaaS: not just the OPEX vs CAPEX story, not only the commoditisation of computing and storage resources, but the ability to fully automate infrastructure provisioning and configuration as part of any application’s business logic.
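“Software-consumable” is easiest to see in code. Here is a minimal sketch using boto3, the AWS SDK for Python; the AMI ID, key pair and security group are placeholders, and a real application would wrap this in the same error handling as the rest of its business logic.

```python
# Sketch only: the application grows its own footprint when the workload demands it.
# The AMI ID, key pair and security group are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def add_capacity(count=1):
    """Launch extra application servers from ordinary application code."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder vanilla image
        InstanceType="t3.small",
        MinCount=count,
        MaxCount=count,
        KeyName="app-key",                          # placeholder key pair
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    )
    return [i["InstanceId"] for i in response["Instances"]]

# Called from the same code path that notices, say, a spike in checkout traffic.
new_instances = add_capacity(2)
```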

Build and launch your developer cloud

Now that we know what developers like about the cloud, should we forecast that enterprise developers will stop deploying applications on-premises and finally do so in the cloud, thus becoming part of the blue demand? That’s what many people may think, but then I came across this from James Urquhart again:

I was asked to speak at the Insight Integrated Systems Real Cloud Summit in Long Beach, Calif. […] The RealCloud audience was primarily medium size businesses (between 500 and 10,000 employees), and I jumped at the chance to meet a segment of the IT industry with which I rarely interact.

About half way through my talk […] I came to a point I thought was very important to most software developers. On a whim, I asked this audience how many of them saw custom software development as a key part of their IT strategy. I expected about half the room of 100 or so to respond positively.

One hand went up at the back of the room. (It turns out that was someone from NASA’s Jet Propulsion Laboratory. Well, duh.)

Boom. Any discussion about why developers were bypassing IT to gain agility in addressing new models was immaterial here. The idea that Infrastructure as a Service and Platform as a Service were going to change the way software was going to be built and delivered just didn’t directly apply to these guys.

Well, that explains the success of packaged software so far. But does it mean that demand for hosting packaged software will gradually move to the cloud? I don’t believe so. Instead, I believe packaged software will simply be re-invented in the SaaS model. It has already been demonstrated how extremely successful and broadly adopted that model can be (Salesforce.com, Workday, etc.).

I believe enterprises will eventually move from in-house hosted packaged software directly to consuming SaaS, without passing through intermediate steps like running the same packaged software in some “enterprise” or “IT” IaaS cloud. Everyone is waiting for enterprises to start moving their workloads, but eventually they will make one big jump all at once, perhaps without even letting their IT departments contribute to the choice (as it’s mainly a business decision).

In the end, if you’re a service provider aiming to finally bring enterprise workloads to the cloud, you’d probably better focus on standing up a software-consumable cloud (a.k.a. a “developer cloud”) and go after all those SaaS companies, small and big, to get your slice of the real cloud market. Stop thinking your HA cloud is better than “test and dev” clouds (like Amazon?), stop excluding web-scale companies (as SaaS companies are sometimes called) from your target market, and give them the importance they deserve. If you don’t, one day you may lose the entire enterprise IT business all at once.

Why I picked Flexiant as my next challenge

Dear all, I am really happy and proud to announce that I am joining the Flexiant team starting this week. Over the last few years, Flexiant has been building a stunning Cloud Management Platform with the goal of enabling service providers to enter the cloud space in a few easy steps, while still being able to highly differentiate their service.

The cloud infrastructure market landscape is far from its final configuration, and I have the ambition to actively contribute to how it will look in the next few years. I am joining Flexiant at a moment when the cloud industry is experiencing terrific growth, with just a handful of players out there, still-immature technologies, vendors struggling to adapt their business models and a general misperception of cloud services. There is plenty of work to do!

But let me give you a little more insight into why I picked Flexiant and what great things I think we can do together.

A differentiated cloud service

I have enjoyed watching the recent signs that differentiation is becoming necessary in the cloud infrastructure market. After broad consensus formed around certain technologies, and with such a big (and growing) market to conquer, competition is getting tougher as more players try to come on board every day. Although price initially appears to be the main competitive driver, given the impressive cloud services portfolio of Amazon Web Services, highly differentiated service offerings will be required by anyone who seriously aims to compete against the giant.

Why would I want to compete with that giant? Wouldn’t it be enough to offer some complementary service and exploit Amazon’s market reach, rather than go up against it? Well, we all know the consequences of a market dominated by a single player; we’ve seen it before (Microsoft, Oracle, etc.) and we can all agree that in those times innovation was slower than ever, with abuse of dominant positions that hurt the customer experience. The opportunity out there is big and I don’t think we want to leave the entire market to one player again, do we? And if the goal of the cloud is to commoditize technology by offering it as-a-service, it’s right there, on the service side, that there is the need and the opportunity to innovate.

Recently we got concrete proof of this need for differentiation. The acquisition of Enstratius by Dell was driven by the need for a highly differentiated cloud service that fills the gap between commodity infrastructure and enterprise requirements. I was lucky enough to work with the Enstratius team and I can tell you they were winning deals whenever governance and compliance, typical enterprise requirements, were at stake. But the real news was Dell dropping its previously announced OpenStack-powered cloud service, which will now never come to life. All those players betting on OpenStack wanted to make it the industry standard for building cloud infrastructure, and now what? They suddenly remembered they have to compete with each other. And the imperative is: differentiate!

On this matter, our own Tony Lucas (@tonylucas), a European pioneer of cloud services and SVP Product at Flexiant (if you don’t believe me, check out this video of Tony talking about cloud with Jeff Barr of AWS back in 2007), has written an extensive white paper that methodically explains why cloud federation is not the optimal model for competing in the IaaS market, and why differentiation is the winning alternative. Besides recommending that everyone in this industry read it carefully, it reminded me of the biggest failure of cloud federation we have recently witnessed: vCloud providers. The launch of VMware’s hybrid cloud service is a clear demonstration that federating providers with the same technology but different cultures, goals and SLAs does not work. It can be a short-term opportunity for the “federatable” cloud software vendor, but it is a sure failure in the mid-to-long term. Read Tony’s paper to understand exactly why.

A matching vision

Those who know me know I am a public-cloud-only believer. “Private cloud” was just a name coined by legacy vendors who didn’t want to give up their on-premises business while still exploiting the marketing hype to sell extra stuff to their wealthy customers. “Hybrid cloud” is the name we give to the period it takes to complete the journey to the public cloud.

Again, the most recent moves by the big guys confirm that public cloud is the way to go. Legacy software vendors are trying to convert themselves into service providers, mostly by acquiring companies rather than innovating from within (e.g. yesterday’s news of IBM’s multi-billion-dollar acquisition of SoftLayer). So should we expect a public cloud market dominated by AWS and challenged only by a few other big whales? I don’t think so. While AWS really “gets” the cloud, the internal cultural conversion needed at traditional vendors will be painful and won’t deliver anything substantial for at least the next three to five years. Their current size, and their internal resistance to giving up the recurring revenue of the on-premises business, will keep them from being a real challenge to AWS in the near term. Instead, it is the small, agile, highly innovative and differentiated niche players that will eventually help define the next cloud infrastructure market landscape.

For more rigorous evidence of why public clouds will take over the world, I can suggest another brilliant read by Alex Bligh (@alexbligh), the Internet rock star behind Nominet (the UK domain registry) and currently CTO at Flexiant. His detailed, methodical analysis leads him to this conclusion:

And [so] will be for cloud computing: it’s not the technology that matters per se, it’s the consequent effect on economics. Private cloud is in essence an attempt to use cloud’s technology without gaining any of the efficiencies. It is for service providers to educate their customers and prospects, and the audience will often be financial or strategic as opposed to technical.

Alex Bligh, CTO at Flexiant

An enthusiastic choice

Visionaries like Tony and Alex, a mature product like Flexiant Cloud Orchestrator, and the guidance and business savvy of our CEO George Knox (@GeorgeKnox) are all ingredients that will make a real difference in the coming months. Finding myself aligned with the company’s vision and culture, I am really enthusiastic to be on board and I foresee big things ahead of us. Stay tuned, and ping me if you want to know more about Flexiant!


Virtualization no longer matters

There is no doubt. The product is there. The vision, too. At times they leave some room for arrogance as well, but come on, they are the market leader, aware of being far ahead of anybody else in this field. A field they actually invented themselves. We almost feel like forgiving that arrogance. Don’t we?

The AWS Summit 2013 in London was, once again, confirmation that the cloud infrastructure market is there, that the potential is higher than ever, and that Amazon “gets” it, drives it and dominates it largely undisturbed. Everyone else struggles to stand out among a huge crowd of technology companies, old and new, who are convinced they have jumped into the cloud business even though, I’m pretty sure, the majority of their executives think cloud is just the new name for hosting services.

Before going further, I want to thank Garrett Murphy (@garrettmurphy) for transferring his AWS Summit ticket to me without even knowing who I was, simply and kindly responding to my tweeted inquiry. I wish him and his Dublin-based startup 247tech.ie the amount of luck that, coupled with great talent, leads to success.

Now, I won’t go through the whole event: since this is a roadshow and London wasn’t its first stop, much has already been said here and here. My general impression is that AWS is still focused on presenting the advantages of cloud-based versus on-premises IT infrastructure, showing off the rich toolset they have put in place and bringing in MANY (I counted nearly 20) customers to testify to how they are using the AWS cloud and what advantages they gained by doing so. OK, most of them were the usual hyper-scale Internet companies, but I did see an effort to bring in enterprise testimonials such as ATOC (the Association of Train Operating Companies of the UK). However, they all said they were using AWS only for web-facing applications, staging environments or big data analytics. The usual stuff that we know to be cloud-friendly.

What really impressed me was the OpsWorks demo. OpsWorks was released not long ago as the nth complementary Amazon Web Service, this one meant to help operate resilient, self-healing applications in the cloud. Aside from the confusion around what to use when, given the large number of tools available (not counting the third-party ones growing unchecked day by day), there is one evident trend emerging from it.

For those who don’t know OpsWorks, it is an API-driven layer built on top of Chef that automates the setup, deployment and un-deployment of application stacks; an attempt at DevOps automation. How it will meet customers’ actual requirements while staying simple (i.e. without exposing too many options) is not yet clear.

During the session demonstrating OpsWorks, the AWS solution architect remarked that no custom AMIs (Amazon Machine Images) can be selected when creating an application stack. Someone in the audience immediately complained about this on Twitter, probably because he wasn’t happy about having to rebuild all of his customizations as Chef recipes on top of lightweight base OS images, dropping them from his custom VM image.

In fact, there are several advantages to moving the actual machine setup into the post-bootstrap automation layer. One is the ease of upgrading software versions (e.g. Apache, MySQL) simply by changing a line in a configuration file instead of rebuilding the whole operating system image. But above all, by keeping OS images close to the clean vendor releases, you will probably find the same images available at other cloud providers, making your application setup completely cross-cloud. Of course there are disadvantages too, including the delay added by operations like software downloads and configuration runs that may be necessary every time you scale up your application.
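A crude sketch of the “change one line instead of rebuilding the image” idea: the desired software versions live in configuration data and are applied to a clean OS image at boot time by the automation layer. The package names and version strings are only illustrative, and a real setup would express this as Chef attributes or a Puppet manifest rather than a hand-rolled script.

```python
# Sketch only: versions live in configuration, not in the OS image.
# Upgrading Apache means editing one entry here and re-running converge().
import subprocess

DESIRED_PACKAGES = {
    "apache2": "2.4.7-1ubuntu4",        # illustrative pins; real strings depend on the distro
    "mysql-server": "5.5.35-0ubuntu1",
}

def converge():
    """Bring a vanilla OS image to the desired state at first boot (or any time after)."""
    subprocess.run(["apt-get", "update"], check=True)
    for package, version in DESIRED_PACKAGES.items():
        subprocess.run(["apt-get", "install", "-y", f"{package}={version}"], check=True)

if __name__ == "__main__":
    converge()
```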

Cross-cloud application deployment. No vendor lock-in. Cool. There is actually a Spanish startup called Besol that is building its entire (amazing) product, “Tapp into the Cloud”, around the management of cross-cloud application stacks, leveraging a rich library of Chef cookbook templates. And while I was writing this post on a flight from London, Jason Hoffman (@jasonh) was being interviewed by GigaOM and, announcing better integration between Joyent and Chef, mentioned compatibility between cloud environments as a major advantage of using Chef.

What we’re observing is a major shift away from leveraging operating system images and towards automation layers that can quickly prepare whatever application you want your virtual server to host. That means one of the major advantages introduced by virtualization technology, the software manipulation of OS images, which was one of the triggers of the rise of cloud computing, no longer matters.

Potentially, with the adoption of automation platforms like Chef, Puppet or CFEngine, service providers could build a complete cloud infrastructure service without employing any kind of hypervisor. This trend is further confirmed by facts like:

Of course there are still advantages to using a hypervisor: certain applications require architectures made of many micro-instances for parallel computing, so it is still necessary to slice a server into many small portions. However, with processors gaining ever more cores and the ability to use threads, virtualization may no longer be so important for the cloud.

In the end, I think we can no longer say that virtualization is the foundation of cloud computing. The more accurate statement is perhaps that virtualization inspired cloud computing. And the future may leave even less room for it.

IaaS eats the biggest slice

When I read market research firms saying that SaaS is the cloud model most widely adopted by enterprises, I can only concur, given its ease of use and the simplicity of integrating it with existing IT assets. In practice, the integration ends up being minimal and entirely in developers’ hands; they can use the SaaS service’s usually comprehensive API, completely bypassing their internal IT department.

So what about IaaS and PaaS? Should those who invested heavily in those two cloud models start worrying about their choice? No way. As my provocative title says, I am fairly convinced that the lower layers of the cloud stack will eventually share in the whole cloud business, with IaaS eating the biggest slice of it, both directly and indirectly.

I am actually writing this post to give further insight into, and supporting data for, a tweet of mine from some time ago:

Now let’s see what I mean by indirectly.

Layers over layers

In computer science we are used to having layer upon layer, so-called “abstraction layers”, each one hiding the complexity of the layer below while providing some added value and an interface through which the layer immediately above can access resources. With the rise of cloud services, the community has taken the same approach again: using abstraction layers to handle the increased complexity of IT infrastructures, which now involve thousands of resources to be managed and orchestrated.

As mentioned above, there are three main cloud layers broadly accepted by the community: IaaS, PaaS and SaaS. However, many cloud providers don’t fit exclusively into one of them, as they tend to broaden their offering with services at multiple layers of the stack. Since this creates some confusion among cloud consumers, I want to take the opportunity to present the layers one more time from a different perspective, concentrating on the added value each layer brings to the stack.

OK, I still have to work a bit on my ability to represent concepts visually, but I hope the chart above helps clarify things. First, we have raw resources at the bottom of the stack; if we add some elasticity, we obtain IaaS. This is over-simplified, since any good IaaS layer certainly brings more value than that, but for the sake of understanding I’ll limit myself to the most evident one: elasticity, i.e. the ability to create, destroy, grow and shrink computing resources on demand via an API.

Now let’s go up a level: we have an IaaS layer and we decide to add some DevOps tools and operations such as middleware, auto-scaling, application deployment and code validation mechanisms. In doing so, if the principle of abstraction layers is respected, we no longer need to care about handling raw resources, since the IaaS provides us with the tools to automate their management. What we obtain is a Platform-as-a-Service, an environment where multiple users can deploy their applications.

Finally, let’s take some business logic that solves a specific problem (e.g. CRM, ERP, etc.) and, provided of course that we have done all the multi-tenancy work and that we want it consumed as-a-service, we are now working at the SaaS layer. At this stage we can concentrate on making our software more powerful, adding killer features and conquering our market niche. We don’t need (nor do we want, right?) to take care of all the infrastructure that serves our users, nor do we want to know what hardware lies underneath, as all that would just distract us from our core business.

Sounds logical, doesn’t it? All the layers stack up so nicely and look so complementary. Indeed they are. In fact, cloud companies end up buying services from other cloud companies operating at a lower level of the stack. For further evidence, I did a little research and found that most SaaS companies deploy their software on top of a PaaS provider which, in turn, deploys its automation layer on top of one (or more) IaaS providers. What does that mean? That if an enterprise adopts a SaaS cloud service and pays for it, some dollars will eventually end up in some IaaS provider’s pocket, whether you like it or not.

The infrastructure of PaaS providers

For supporting examples, let’s look at the infrastructure of the most popular PaaS providers, since they are more or less obliged to reveal their backends in order to inform their customers about data center locations.

Most popular PaaS services rely on IaaS providers

| PaaS Provider | Supported Languages | IaaS backends |
| --- | --- | --- |
| Heroku | Ruby, Java, Python, Node.js | AWS |
| Engine Yard | Ruby, PHP, Node.js | AWS, Verizon Terremark |
| AppFog | PHP, Java, Node, .NET, Ruby | AWS, Rackspace, HP Cloud Services |
| OpenShift | PHP, Java, Node.js, Python, Ruby | AWS, Rackspace |
| Nodejitsu | Node.js | Joyent |
| AppHarbor | .NET | AWS |
| CloudBees | Java | AWS, HP Cloud Services |

The cloud market is known to be huge, and it is essential for every player in today’s IT industry to take up a position, a vision and a direction within this space. If you’re an investor who wants to participate in the cloud opportunity, it is extremely valuable to understand how the different cloud models currently share the market. On the other hand, if you’re an enterprise evaluating the adoption of any cloud service, you should care about who’s running the game up and down the cloud stack, as this will ultimately affect your service level, your security and your data integrity.

POST UPDATE on 4/8/2013

Jack Clark (@mappingbabel) of ZDNet asked me on what basis I singled out the above PaaS providers as “most popular”. The answer I gave him was press coverage as well as time “in the field”, meaning talking to customers and gathering experiences. It’s a personal impression not based on any scientific data; I’m a field person, not a researcher. Besides, I don’t think any of those providers is really willing to disclose customer data.

However, it’s worth mentioning that there are other PaaS services offered by large vendors whose popularity is difficult to gauge; the press usually refers to the vendor as a whole, and since they’re no longer in the startup phase you can’t even use the funding they’ve raised from VCs as a measure. Despite being hard to measure, they deserve a mention in this post as active players in the PaaS landscape, contributing effectively to the cloud awareness battle.

And one can assume the theory above holds for these providers as well, for example with Elastic Beanstalk running on top of EC2 and App Engine running on top of Compute Engine. However, since the IaaS backend is provided by the same vendor as the PaaS, those services don’t trigger any economic transaction and thus no real shift in the measurement of market size.

The truth on enterprise private clouds

Oh yes!

It feels great when one of the most recognized high-tech analysts out there writes down exactly what you think. It’s an endorsement of your own thinking to read James Staten (@Staten7) of Forrester Research on “Why your enterprise private cloud is failing”, where he describes so clearly what you’ve always been thinking and trying to explain.

His post makes two important points:

  1. Enterprise private clouds are failing. As I also wrote in a Quora answer to “What is the future of private cloud?”, no matter what marketing and vendors say, efficient, large-scale production enterprise private clouds don’t exist today. In my opinion, cloud is such a new model of delivering IT infrastructure that the culture of using it won’t reach the enterprise through a bottom-up approach (evolving from the current infrastructure) but only in a top-down direction (deploying into public clouds first and then migrating back in-house). A revolution as opposed to an evolution.
  2. Enterprise private clouds are failing because of the wrong approach taken by IT departments: treating the cloud just like an infrastructure stack instead of a service, because “you are building the private cloud without engaging the buyers who will consume this cloud”, as Staten says.

And of course, I wasn’t the only one to recognize “the truth” in James Staten’s words. His opinion on failing private clouds echoed across the web, generating broad consensus among cloud experts and visionaries such as James Urquhart (@jamesurquhart):

The two cloud models

Much has already been written about the different approaches to the cloud, and big brains have concluded that all of them can be summarized in two cloud models. They have been given various names depending on the author, but I will use the nomenclature of the OpenNebula blog post.

  1. Datacenter Virtualization model: the cloud as an extension of virtualization in the datacenter, with some more automation, a service catalogue, etc. The VMware vCloud-like approach.
  2. Infrastructure Provision model: a powerful service-oriented API to provision commodity computing resources effectively and efficiently. The AWS-like approach.

With reference to the above models, James Staten is basically saying that the Datacenter Virtualization cloud model is wrong; it is not the right approach to implementing a private cloud, because “a Porsche is [not] just a Volkswagen with better engine, tires, suspension and seats.”

Awesome. I’ve been convinced of that for some time. As the title of my very first post, “Cloud Computing is not the evolution of virtualization”, says, I have always considered the Infrastructure Provision model to be the only possible cloud implementation, to the point of refusing to even call Datacenter Virtualization “cloud”.

And I don’t think this is an extremist position. As I have said many times, the cloud is a tremendous opportunity for the enterprise to start thinking differently. In my opinion, the cloud will reach enterprise IT departments only through a top-down approach: from a public cloud implementation back in-house. Enterprise cloud consumers will try (and love) the public cloud and eventually drive the implementation of something similar within the enterprise itself. But trying to transform the current virtualized infrastructure into a private cloud will simply fail: fail to deliver a truly elastic, service-oriented cloud infrastructure to the real cloud consumers.

Vendors didn’t get it

So what? Did all enterprise IT departments simply not get it? What’s their problem? It’s a vendor problem: enterprise software vendors didn’t get it. Every one of them started to think of the cloud as an opportunity (which is fine, in principle) and they all just tried to profit from the hype. For virtualization technology vendors that was an easy path: add a new product to the portfolio to “cloudify” the existing virtualization products, a natural extension of existing implementations within the enterprise. The perfect scenario for IT departments. Pity it doesn’t deliver what cloud consumers are looking for.

But recently we have heard something new from virtualization vendors. They have started to actively perceive public clouds, and AWS in particular, as a threat to the workloads which are (were?) running on their virtualization technology and which are failing to migrate to private clouds for the reasons above. Despite their very rich cloud product portfolios, workloads are still moving from the enterprise to commodity public clouds. Why?

Hearing VMware CEO Pat Gelsinger say that he finds it hard to believe they cannot beat a company that sells books makes me think they really haven’t got the point at all. Good luck, guys.

There is no such thing as the cloud uptime

Yesterday readwrite.com featured an article by Mike Pav titled “Storm Warning: Why 100% Cloud Uptime Is Impossible”, and I thought it was such a piece of misinformation that I decided to write this post to help clarify a few things, as I’m really fed up with hearing about the unreliability of “the cloud” in general terms.

Titles are meant to be provocative and I won’t judge this one’s accuracy; however, there is no such thing as “cloud uptime” because, even though the cloud is talked about as a whole, it is made up of thousands of components and they don’t all go down at once. Rather, an outage within a cloud service tends to be bigger the more those components depend on one another. Let me explain this in more detail.

Cloud Outages

The article says “cloud outages” are ultimately inevitable because doing better than 99.99% availability would cost too much, and that companies like Netflix (which suffered its cloud provider’s outage right on Christmas Eve) will keep using the cloud anyway simply because “it does a great job of providing ready-to-use features”. In other words, it says that using the cloud requires a compromise that companies with multi-million-dollar businesses are ready to accept: losing money from time to time in exchange for the flexibility of the cloud. My dear, I refuse to believe that.

First off, cloud providers do things differently and we can’t generalize. Let’s narrow this down to AWS, since that is the cloud provider the article mainly refers to. AWS is primarily an IaaS provider with some service components operating at the PaaS layer, such as ELB (Elastic Load Balancer). In this context there is no such thing as a “cloud outage”; there is the outage of a component of the cloud that your application relies on and that your application has not been instructed to handle when it fails.

When working at the PaaS layer your freedom is limited. On the one hand you don’t have to worry about how things work underneath, because the provider does everything for you; on the other hand, you also have to rely on the provider when it comes to availability and SLAs. Netflix relied on ELB, and its application had no way to handle the failure other than waiting for AWS to fix the problem.

So how should Netflix prevent such things from now on? As others have also said, they should build their own load balancing service by operating at the IaaS layer. They would then have the freedom, and the responsibility, to set up multiple load balancers in different availability zones or even different data centers, making their application more resilient to any infrastructure outage.
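As a rough sketch of what operating at the IaaS layer buys you (the zone ID, record name and addresses are placeholders, and I am using boto3’s Route 53 client purely as an example; Netflix’s real tooling is far more sophisticated), an application-side health check that repoints a DNS record at a standby load balancer in another availability zone could look like this:

```python
# Sketch only: fail over to a standby LB in another AZ when the primary stops answering.
# Hosted zone ID, record name and IP addresses are placeholders.
import boto3
import urllib.request

PRIMARY_LB = "198.51.100.10"     # load balancer instance in AZ a
STANDBY_LB = "198.51.100.20"     # load balancer instance in AZ b
ZONE_ID = "ZEXAMPLE123456"       # placeholder hosted zone
RECORD = "www.example.com."

def healthy(ip, timeout=3):
    try:
        return urllib.request.urlopen(f"http://{ip}/health", timeout=timeout).status == 200
    except OSError:
        return False

def point_dns_at(ip):
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {"Name": RECORD, "Type": "A", "TTL": 60,
                                  "ResourceRecords": [{"Value": ip}]},
        }]},
    )

# The application, not the provider, decides when and where to fail over.
if not healthy(PRIMARY_LB) and healthy(STANDBY_LB):
    point_dns_at(STANDBY_LB)
```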

The responsibility of a PaaS provider

Later, the article goes through a list of PaaS provider duties in case of an outage. On a second read I realised the term PaaS was being misused, as the author was actually referring to a generic provider offering any kind of service through the cloud.

However, this gives me the chance to say that a real PaaS provider should never, ever suffer from an underlying infrastructure outage. PaaS software should be the very best example of a highly available, resilient application, architected to exploit the isolation and redundancy mechanisms made available by the underlying IaaS. After all, a PaaS provider employs mostly DevOps engineers who master cloud automation tools and best practices and who know how to make an application resilient.

Moreover, a PaaS cloud is not about elasticity or scalability, as the article claims; those two come from the underlying IaaS. It’s the infrastructure that scales, it’s the infrastructure that grows and shrinks fast. PaaS is all about automation: automated deployment, auto-scaling, automated failover and recovery from infrastructure failures.

What cloud uptime is about

In conclusion, more than 99.99% is actually possible and there are examples of it: Joyent, for one, managed to deliver 99.9999% uptime over the last two years. So how do you build more reliable clouds? Simply by architecting an infrastructure with the smallest possible number of interdependent components. A cloud infrastructure made of distributed, replicated micro-components can deliver scalability and reliability while limiting the impact of an outage, preserving the overall SLA.

Two things to keep in mind for the best uptime of your application in the cloud:

  1. Choose an IaaS provider whose architecture is designed to limit the impact of outages. If this sounds too theoretical, think of EBS (AWS Elastic Block Store), a centralized macro-component that is highly dependent on the network.
  2. Keep the freedom to build your own resilient app at the IaaS layer and, if you decide to go PaaS, pick a provider whose refund policy in case of an outage is significant enough for your business.

In the end, Netflix will keep using the cloud because they learnt from this experience and they know that mastering cloud best practices can save them from the next (indeed inevitable) infrastructure outage.

Checklist: is my app ready for the cloud?

The cloud is finally losing a bit of its hype, and many CIOs have heard enough that they are now ready to do something real with it. And so the question comes to mind: which application do I move first?

Enough has been said about the choice between IaaS, PaaS and SaaS that I will assume the first step into the cloud will be towards raw infrastructure, giving up a bit of sovereignty but keeping all the power to architect and manage applications.

But these first moves to the cloud will lead many CIOs into a few mistakes. First, they will think of the cloud as a simple shift of responsibility for infrastructure management, reducing cloud adoption to a matter of SLAs, data integrity and security.

As a consequence of the same assumption, they will think they can probably move their business-critical applications to the cloud “as is”, looking for a cloud provider that offers exactly the same manageability and features they were used to in their own data centre.

The cloud is a tremendous opportunity to start thinking differently

I’ve read two interesting articles recently that contain a couple of very important points about doing things with cloud infrastructure. The first, titled “Which Apps to Move to the Cloud?”, starts by quoting Forrester Research:

[…] you shouldn’t be thinking about what applications you can migrate to the cloud. That isn’t the path to lower costs and greater flexibility. Instead, you should be thinking about how your company can best leverage cloud platforms to enable new capabilities. Then create those new capabilities as enhancements to your existing applications… you have to think differently as you approach cloud development. There’s far more power in application design and configuration once you free yourself from assumed reliance on the infrastructure. The end result is new degrees of freedom for developers – if you embrace the new model.

Later, the author goes through the different types of applications used in the enterprise, comparing them to the layers of an onion (yeah, just like ogres). The inner layers are the applications with the most innovation, intellectual property and value to the company’s core business; the outer layers are commodity apps. His conclusion is that the outer layers are probably the better place to start when moving to the cloud.

Again, it’s all about risk: start with a lower risk (of losing data or interrupting business processes) in exchange for the popular “more flexibility at lower cost” of the cloud.

The second article (a very smart read, IMHO), which appeared on cio.com, thinks through the cloud in the enterprise world, something with very few success stories so far, and lists some very important advice. One piece attracted my attention:

[…] a leading cloud provider would never consider adding any application to its portfolio without a clear plan for how it will scale over time. Corporate IT? Not so much. “They build infrastructure to scale out,” Paquet says, “but if their applications don’t, what problem have they actually solved?” Think scale first. And that may mean ruling out many packaged application. “Most of them are not built to scale out,” says Paquet.

What we learn from these two articles is that the cloud gives you a new infrastructure footprint that “enables new capabilities”, and thus it is you who have to adapt to the cloud and not vice versa. Moreover, you have “more power in application design and configuration”, so application architecture really does make a difference.

Ok but… technically speaking, what exactly can I move out today?

Having taken the above on board, we still want to start moving something to the cloud today, and we don’t want to develop everything from scratch and delay our cloud adoption. What neither article explains clearly is: what are the technical characteristics of the applications I can move to the cloud?

Here’s a checklist to help you understand whether your application is cloud-ready:

  1. The application must be designed to scale by adding instances of the same application process side by side on different machines, with some mechanism to share the workload that does not depend on the OS.
    This approach, which results in an app that is both scalable and resilient, is strictly required when moving to the cloud, since you don’t know what kind of hardware is being used underneath (you can safely assume it’s just commodity servers).
  2. The application data store must be partitionable. If you have a large amount of data growing linearly, you can split it into chunks, each bound to one of the application nodes.
  3. The data store partitions should be replicable on other nodes in order to achieve redundancy.

If you are running an application that matches the above patterns, you can feel free to move it to the cloud today without worrying about losing data or interrupting your business processes, provided you make good use of the application’s configuration capabilities!

Since I’m pretty sure you’ll have to go through some architectural review, keep in mind while doing so to think only at the application level, with nothing strictly depending on the operating system. This will give you extra freedom to migrate between cloud providers and complete self-sufficiency in implementing your highly available application tiers the way you prefer.

If you want to dig deeper into these principles, I advise you to read Amazon’s Dynamo paper, which explains the theory and the trade-off between consistency and availability and which inspired great cloud-ready applications like the Riak NoSQL key-value store.
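To make checklist points 2 and 3 a little more concrete, here is a toy sketch of Dynamo-style partitioning with replication: a consistent-hash ring assigns every key to an owner node plus one replica, so the data store can be split across application nodes and survive the loss of any single one. It illustrates the principle only; real systems like Riak add vector clocks, hinted handoff and much more.

```python
# Sketch only: Dynamo-style key partitioning with replication across app nodes.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=2, vnodes=64):
        self.replicas = min(replicas, len(set(nodes)))
        self.ring = sorted(
            (self._hash(f"{node}#{v}"), node) for node in nodes for v in range(vnodes)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def nodes_for(self, key):
        """Return the partition owner plus its distinct successors as replicas."""
        i = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
        owners = []
        while len(owners) < self.replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in owners:
                owners.append(node)
            i += 1
        return owners

ring = HashRing(["app-node-1", "app-node-2", "app-node-3"], replicas=2)
print(ring.nodes_for("customer:42"))   # e.g. ['app-node-3', 'app-node-1']
```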

In conclusion, the cloud provides commodity IT infrastructure at an extremely low price. With this in mind, you simply cannot demand that, if you move your single-instance database onto one virtual machine in the cloud, it will never go down. On the other hand, cloud infrastructures today offer all the mechanisms and features that, if mastered, can help you build the most highly available application clusters you have ever had.