The era of applications: microservices and containers in the cloud

It’s finally time to turn the page. We’re now right in the era of applications. They are now dictating how the rest of the IT world should behave. Infrastructure no longer has the spotlight; it has taken the passenger’s seat and merely delivers what it’s asked for. What I sort of predicted almost 3 years ago in “Why the developer cloud will be the only one” is now happening.

That’s not good news for some people in this industry, I know. It’s much easier to talk about (and sell) dumb CPU, RAM and storage space than continuous integration, delivery, runtimes, inherently resilient services or even things like the CAP theorem. Sorry guys, if you want to remain in this industry, it’s time to step up and start understanding more about what’s happening up there, at the application level. Luckily, we have things that help us do so. Things like thenewstack.io (cheers @alexwilliams and team), which is doing an awesome job of introducing and explaining all these new concepts to a wider audience (including me!).

And learning more about this phenomenon is also why I am going to attend KubeCon in London next week. For those who don’t know, KubeCon is the conference of Kubernetes, a Google-stewarded project for container orchestration and scheduling. Some people will say it’s just one out of many right now but for us, at Flexiant, it’s just *the* one, as it possesses the right level of abstraction to deliver container-based distributed applications. I’m going there to listen to industry leaders, innovators and just about anyone who has fully understood the new needs and who’s working every day to solve these new problems in new ways.

If you still don’t know what I’m talking about, read on. I’m going to take a step back and tell you a bit more about the drivers that caused applications to take over, and why we needed different architectures, such as microservices, to efficiently deliver and maintain that software that’s eating the world. If you’re short of time, you can simply watch the embedded video at the top of the page, which tells roughly the same story.

TL;DR

Software is now pervasive in people’s everyday lives and it’s handling many of the new business transactions. Application architectures had to evolve to cope with the increase in demand. Microservices is the optimal software architecture which, combined with cloud infrastructure and containers, can successfully fulfil these new application requirements.

However, its numerous advantages are counterbalanced by increased complexity, which requires new orchestration tools that can join the dots and hold a global view of how things are going. To achieve this, orchestration needs to happen at a higher level of the stack than what we saw in the previous, infrastructure-centric era.

The rise of microservices

We can comfortably acknowledge how the way we do business has changed, how more and more transactions happen online, and how software is the only means of carrying them out. Software that, given the global nature of these relationships, needs to cope with a large number of users. It also needs to constantly deliver performance, to satisfy its user base, as well as new features, to keep up with the competition, serve new needs and unlock new opportunities. You can easily understand how the traditional way of developing applications simply could not power this kind of software. Monolithic applications were just too hard to scale, slow to update and difficult to maintain; on the other hand, the recently hyped PaaS was too abstracted (compromising developers’ freedom) and too expensive to deliver the required efficiency.

That’s when microservices came in as the new preferred architectural pattern. Of course they had been around for a while before they got that name but, as Bryan Cantrill (@bcantrill) said, “[…] only now that we gave a name to it, it has been able to spread much beyond that initial use case”. Whenever we manage to label something in IT, the label helps with diffusion and adoption, and it serves as a baseline for further innovation. This happened with cloud computing, and we are seeing it happen again. For once, thank you marketing!

What exactly are microservices? We call microservices a software architecture that breaks applications down into many small, cooperating components that talk to each other using language-agnostic APIs. A single piece of software gets broken into many smaller components, each of them publishing an API contract. Any other piece of that application can make use of a microservice just by addressing its API, without knowing anything about what’s behind it, including which language it’s written in or which software libraries it uses. That unlocks a number of benefits: single components can be developed by different people, shared, taken from heterogeneous sources and reused any number of times. They can be updated independently, rolled back, or grown in number whenever the application as a whole needs to handle more workload. All of this without having to tear down the giant, heavy and slow monolith. Sounds great? Thumbs up.
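To make the “API contract” idea tangible, here’s a minimal sketch of a microservice in Go (the service name, port and JSON shape are invented for the example). Consumers depend only on the HTTP endpoint and the JSON it returns; they neither know nor care what language or libraries sit behind it.

```go
// pricing.go — a toy “pricing” microservice. The only thing other
// components ever see is this HTTP+JSON contract, not the implementation.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Quote is the published response shape: part of the API contract.
type Quote struct {
	SKU   string  `json:"sku"`
	Price float64 `json:"price"`
}

func main() {
	http.HandleFunc("/v1/quote", func(w http.ResponseWriter, r *http.Request) {
		// A real service would look the price up; the contract stays the same.
		q := Quote{SKU: r.URL.Query().Get("sku"), Price: 9.99}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(q)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Any other component, in any language, consumes it with a plain GET to /v1/quote?sku=…, which is exactly what lets teams develop, update and scale each piece independently.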

Infrastructure for microservices

Setting up the infrastructure to host such distributed, complex and ever-changing software architectures would be a real challenge if we did not have the cloud. In fact, infrastructure-as-a-service is just a no-brainer for hosting microservices. Why? Because it provides commodity infrastructure, because you pay for what you use, because it’s available pretty much everywhere, close to your end users, and because it never (well, almost never) runs out of capacity. But what makes it so perfect for microservices is its programmability, required whenever a distributed application needs to deploy and re-deploy again and again, adapting its footprint to its workload requirements at any given moment. We couldn’t have done it with traditional data centre software. Full stop.
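To show what “programmability” means in practice, here’s a minimal sketch in Go. The endpoint and payload are hypothetical (every real IaaS provider has its own API and SDKs); the point is that growing your footprint is just an HTTP call a deployment tool can make, not a ticket to a data centre team.

```go
// provision.go — sketch of programmatic provisioning against a
// hypothetical IaaS API; this is not any real provider's endpoint.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Ask the (hypothetical) IaaS API for one more small instance.
	body := bytes.NewBufferString(`{"image": "app-v42", "size": "small"}`)
	resp, err := http.Post("https://cloud.example.com/v1/servers",
		"application/json", body)
	if err != nil {
		fmt.Println("provisioning failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("provisioning request accepted:", resp.Status)
}
```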

When people think of IaaS, they think about virtual machines, virtual disks and networks. Let’s call it traditional IaaS. And if you look at it, it’s not really fit for what we described above. In fact, virtual machines have zero visibility of what’s going on on top of their OS, let alone the interdependencies with other virtual machines hosting other services! So we’ve seen configuration management systems (Chef, Puppet, Ansible, etc.) take over this part: once the OS has booted, they execute a series of configuration tasks to complete the application deployment. It has sort of worked so far, but at what price? First, virtual machines are slow. They need to be commissioned, and then a full OS needs to come up before anything can run. We’re talking 20-30 seconds if it’s your lucky day, up to several hours if it’s not. Second, virtual machines are heavy. The overhead the hypervisor carries is just nonsense, as is all that multi-process, multi-user functionality their OS was born to deliver, when in reality they simply need to host a little – micro – service. And configuration management systems? Even slower. Quite apart from their own weight (look at the full Ruby stack that comes with Chef, for example), they typically rely on external dependencies that, unfortunately, can change and generate different errors every time they are called in.

There was the need for something better. Oh wait, we already had something better! Containers. They had existed for a while, but they were sitting in the previous infrastructure-centric IT world, dominated by expensive, feature-full virtual machines. Guess what: containers understood (yeah, they apparently have a thinking brain!) that they could make the leap into the new application-centric world and shake hands with software developers. That’s how they recently became – rightly so – popular, as I wrote before in “The three reasons why Docker got it right”. Containers are just right for hosting microservices because (1) they’re micro as well, and can start in a fraction of a second; (2) they’re further abstracted from the infrastructure and hence have no dependency on the infrastructure provider; (3) they are self-contained and don’t rely on external dependencies that can change; but, most importantly, they are immutable. Any change to the container configuration can trigger the deployment of a new version of the container itself, which can then be rolled back if the result is not what we expected.

New challenges for new opportunities

As happens every decade or so, the tech world solves big problems by tackling them from the side with disruptive solutions that unlock tremendous new opportunities. The veterans of this industry see these solutions and think “wow, that’s the right way of doing it!”, no question. However, new approaches typically also open up a number of unprecedented challenges that could not even exist in legacy environments. These new challenges demand new solutions, and that’s where the hottest (and most volatile) startup scene is currently playing its game.

Kubernetes, Docker, Mesos and all their ecosystems are right there, trying to overcome the challenges that arise from a multitude of microservices that need to operate independently (providing a scalable and always-available service) as well as with each other, cooperating to make up the whole application’s business logic. Networking challenges, as microservices need to communicate in a timely, predictable and secure way. Monitoring challenges, as you need to understand what’s going on when an end user presses a button, and whether any of the components is suffering performance-wise and needs to be scaled. And let’s not forget the organisational challenges that come when you have so many teams working together on so many independent components, with security, adaptability and access control to handle, to name a few.

In the end, as we can’t stop software from eating more pieces of the world, we can simply try to improve its digestive system. The ultimate goals are making software transparent to end users, to generate positive emotions and ease transactions, while helping businesses not to miss any opportunity that’s out there. Making better software is just a means to get there, and microservices, cloud and containers are headed in that direction. See you at KubeCon EU!

Checklist: is my app ready for the cloud?

The cloud is finally losing a bit of its hype, and many organizations’ CIOs have heard enough that they are now ready to do something real with it. And one question comes to their minds: which application do I move first?

Enough has been said about the choice between IaaS, PaaS and SaaS that I assume the first step to the cloud will be towards raw infrastructure: giving up a bit of sovereignty while keeping all the power to architect and manage applications.

But the first moves to the cloud will lead many CIOs into a few mistakes. First off, they will think of the cloud as a simple shift of responsibility for infrastructure management, reducing cloud adoption to a matter of SLAs, data integrity and security.

As a consequence of the same assumption, they will think they can probably move their business-critical applications over to the cloud “as is”, looking for a cloud provider that offers exactly the same manageability and features they were used to in their own data centre.

The cloud is a tremendous opportunity to start thinking differently

I’ve read two interesting articles recently that contain a couple of very important points about doing things with cloud infrastructures. The first one, titled “Which Apps to Move to the Cloud?”, starts by quoting Forrester research:

[…] you shouldn’t be thinking about what applications you can migrate to the cloud. That isn’t the path to lower costs and greater flexibility. Instead, you should be thinking about how your company can best leverage cloud platforms to enable new capabilities. Then create those new capabilities as enhancements to your existing applications… you have to think differently as you approach cloud development. There’s far more power in application design and configuration once you free yourself from assumed reliance on the infrastructure. The end result is new degrees of freedom for developers – if you embrace the new model.

Later, the author goes through the different types of applications used in the enterprise, comparing them to the layers of an onion (yeah, just like ogres). The inner layers are the applications with the most innovation, intellectual property and value to the company’s core business; the outer layers are commodity apps. His conclusion is that the outer layers are perhaps the better place to start when moving to the cloud.

Again, it’s only about risk. Let’s start with a lower risk (of losing data or interrupting business processes) in exchange for the popular “more flexibility at lower cost” of the cloud.

The second article (a very smart read, IMHO), which appeared on cio.com, tries to think about the cloud in the enterprise world, something with very few success stories so far, and lists some very important pieces of advice. One of them attracted my attention:

[…] a leading cloud provider would never consider adding any application to its portfolio without a clear plan for how it will scale over time. Corporate IT? Not so much. “They build infrastructure to scale out,” Paquet says, “but if their applications don’t, what problem have they actually solved?” Think scale first. And that may mean ruling out many packaged application. “Most of them are not built to scale out,” says Paquet.

What we understand from these two articles is that the cloud gives you a new infrastructure footprint that “enables new capabilities”, and thus it’s you who has to adapt to the cloud, not vice versa. Moreover, since you have “more power in application design and configuration”, application architecture does make a difference.

Ok but… technically speaking, what exactly can I move out today?

Now that we’ve absorbed the statements above, we still want to start moving something to the cloud today, and we don’t want to develop everything from scratch, adding delay to our cloud adoption. What neither article explains clearly is: what are the technical characteristics of the applications that I can move to the cloud?

Here’s a checklist that helps you understand whether your application is cloud-ready:

  1. The application must be designed to scale by adding instances of the same application process side by side on different machines, with some kind of mechanism to share the workload that does not depend on the OS.
    This approach, which results in an app that is both scalable and resilient, is strictly required when moving to the cloud, as you don’t know what kind of hardware is being used underneath (you can safely assume it’s just commodity servers).
  2. The application data store must be partitionable. If you have a large amount of data growing linearly, you must be able to split it into chunks, each bound to one of the application nodes (see the sketch after this list).
  3. It must be possible to replicate each data store partition on other nodes in order to achieve redundancy.
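Here’s a minimal sketch of what points 2 and 3 mean in practice, written in Go with made-up node names: a toy consistent-hash ring assigns each key to a partition-owning node, and the next node on the ring holds its replica. This is the same basic idea the Amazon Dynamo paper mentioned below builds on.

```go
// ring.go — toy consistent-hash ring illustrating partitioning (point 2)
// and replica placement (point 3). Node names are invented for the example.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// Ring keeps its nodes sorted by hash position.
type Ring struct{ nodes []string }

func NewRing(nodes []string) *Ring {
	r := &Ring{nodes: append([]string(nil), nodes...)}
	sort.Slice(r.nodes, func(i, j int) bool {
		return hashOf(r.nodes[i]) < hashOf(r.nodes[j])
	})
	return r
}

// OwnerAndReplica returns the node that owns a key's partition and the
// next node on the ring, which holds the redundant copy.
func (r *Ring) OwnerAndReplica(key string) (owner, replica string) {
	kh := hashOf(key)
	i := sort.Search(len(r.nodes), func(i int) bool {
		return hashOf(r.nodes[i]) >= kh
	})
	i %= len(r.nodes) // wrap around the ring
	return r.nodes[i], r.nodes[(i+1)%len(r.nodes)]
}

func main() {
	ring := NewRing([]string{"node-a", "node-b", "node-c"})
	owner, replica := ring.OwnerAndReplica("customer:42")
	fmt.Printf("partition on %s, replica on %s\n", owner, replica)
}
```

Adding a node only remaps the keys between it and its ring neighbours, which is what lets this kind of partitioning scale out by simply adding instances.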

If you are running an application that matches the above patterns, feel free to move it to the cloud today without worrying about losing data or interrupting your business processes, provided that you make good use of the application’s configuration capabilities!

As I’m pretty sure you’ll have to go through some architectural review anyway, while doing that, remember to think only at the application level, with nothing strictly depending on the operating system. This will give you extra freedom to migrate between cloud providers and complete self-sufficiency in implementing your highly available application tiers the way you prefer.

If you want to dig deeper into these principles, I advise you to read the Amazon Dynamo paper, which explains the theory and the trade-off between consistency and availability, and which inspired great cloud-ready applications like the Riak NoSQL key-value store.

In conclusion, the cloud enables commodity IT infrastructure at an extremely low price. With this in mind, you simply can’t demand that if you move your single-instance database onto one virtual machine in the cloud, it will never go down. On the other hand, cloud infrastructures today offer all the mechanisms and features that, if mastered, can help you build the most highly available application clusters you’ve ever had.

Cloud computing is not the evolution of virtualization

Many of you probably think that, after the success of virtualization technology, someone had to invent something appealing to keep pushing sales, and they called it Cloud Computing. The same people would think that cloud computing is just an extra layer on top of your virtualization management platform for better, coordinated resource management, providing things like billing, machine catalogues, self-provisioning, etc.

Cloud computing actually has a much wider meaning (which sometimes makes it look like a mere marketing trend), so today I will narrow it down and focus on cloud infrastructures. The questions I will try to answer are: what is a cloud infrastructure, and when can you say you’re really running your business in the cloud?

To provide the right answers, you have to think of the applications you want to run on your IT infrastructure. Many of you have probably gone through the server consolidation process that made VMware a billion-dollar company: you had lots of unused hardware resources but still wanted separate operating environments so, no problem, hardware virtualization could solve that for you, without the need to change anything in your application code or architecture. The same application you were running before on bare metal would run in exactly the same way inside a virtual machine.

After server consolidation practices became common, the evolution of hardware virtualization somehow went much faster than the evolution of applications. Hypervisor vendors started to provide more and more features to keep the underlying hardware always available, so applications could run endlessly without even caring about potential hardware failures.

What people tend to forget when buying powerful hardware platforms is that application failures, far more than hardware failures, are the primary cause of outages. Sooner or later you realize this, and you have to build application-level redundancy in order to implement a truly highly available system. But with application-level redundancy, do you still need expensive hardware underneath? Why not run your application on commodity servers?

This question leads to the real concept of cloud computing. Let’s now try to give a definition: an infrastructure can be called a cloud infrastructure if it:

  • is scalable and elastic
  • provides process automation (self-provisioning / self-service / billing)
  • is highly available
  • provides full multi-tenancy

And what is the purpose of all of the above? If you think carefully, you’ll realize it’s all aimed at commoditizing the infrastructure itself. Companies shouldn’t spend any more time building up their IT foundations; they should concentrate on their actual business workflows, supported by really innovative applications. Infrastructure is something they want to take for granted.

In this scenario, a cloud platform should have another important characteristic: it has to be cheap.

So can you achieve all of that with a traditional hardware virtualization-powered infrastructure? No.

Scalability will be an issue if you’re using centralized resources (which can’t grow forever), and those are usually necessary for providing hardware-level HA.

You will feel safe thanks to all those automatic live machine migration features, but don’t forget that they protect you only from hardware failures. If the application fails, there is not much they can do for you. You should protect yourself from application failures by building a redundant application architecture but, if you do so, do you still need expensive hardware-level HA? No, you don’t.
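To make “application-level redundancy” concrete, here’s a minimal sketch in Go (the replica addresses are made up): a caller that tries each replica of a service in turn, so the application survives the failure of any single instance, whatever hardware it happens to run on.

```go
// failover.go — toy application-level redundancy: the caller, not the
// hypervisor, routes around a dead instance. Addresses are hypothetical.
package main

import (
	"fmt"
	"net/http"
	"time"
)

var replicas = []string{
	"http://app-1.example.com/health",
	"http://app-2.example.com/health",
	"http://app-3.example.com/health",
}

// firstHealthy returns the first replica that answers OK, skipping the
// ones that are down.
func firstHealthy() (string, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	for _, url := range replicas {
		resp, err := client.Get(url)
		if err != nil {
			continue // this instance (or its host) is down: try the next
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return url, nil
		}
	}
	return "", fmt.Errorf("no healthy replica among %d", len(replicas))
}

func main() {
	if url, err := firstHealthy(); err == nil {
		fmt.Println("routing traffic to", url)
	} else {
		fmt.Println(err)
	}
}
```

In a real system this logic usually lives in a load balancer or service registry, but the principle is the same: redundancy is implemented above the hardware, so the hardware itself can be cheap.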

And one more thing: cost. Hardware virtualization infrastructures require complex, high-end hardware that will never be cheap enough to turn IT infrastructure into a commodity.

In the end, do you want to run your old legacy application in the cloud? Forget it. Just keep it on your powerful, expensive virtualization platform. That will work just fine. But if you’re a visionary who believes in a future that requires performant, scalable, elastic and cheap commodity IT infrastructures, then choose your next applications to be cloud-aware. That will take you much further, much faster.