Insights from KubeCon EU 2016: Kubernetes vs. reality

Last week in London, the distributed systems community got together at KubeCon EU to talk container orchestration and Kubernetes. I was there too, and I’d like to share some insights from this exciting new world.

(Sorry for recycling the picture, but I simply really liked it! Credits go to Jessica Maslyn, who created it.)

Insights from Kubernetes

KubeCon is the official community conference of Kubernetes, although it was not directly organised by Google, which is by far the top contributor to the open source project. Google did send a few top-notch speakers, whose presence alone was a good reason to attend. Kelsey Hightower (@kelseyhightower), first and foremost, with his charm and authentic enthusiasm, was one of the most brilliant speakers, capable of winning everyone’s sympathy and earning respect from his first spoken sentence.

Probably the most important announcement made around the Kubernetes project was its inclusion in the CNCF (Cloud Native Computing Foundation), which will govern it going forward. This was generally welcomed as a positive move: it transferred control of the project to a wider committee, but only once the project was mature enough to keep its direction and mission.

Kubernetes is moving at an incredibly fast pace

Some hidden features that even the most advanced users did not know about were revealed during the talks, and the announced roadmap was simply impressive. We heard users say “we’re happy to see that any new feature we’ve been thinking of is already somehow being considered”. This gives an idea of how much innovation is happening there and how much vendors and individual contributors are betting on Kubernetes becoming pervasive in the near future.

Its eco-system is doing amazing things

When an open source project just gets it right, it immediately develops an eco-system that understands its value and potential and is eager to contribute to it by adding value on top. This is true for Kubernetes as well, and the exhibit area of the conference gathered some of the most talented individuals in the industry. I was personally impressed by products like Rancher, which has come a really long way in a very short time (something that demonstrates clear vision and strong leadership), as well as by Datadog and Weave Scope, which have shown strong innovation in data visualization and definitely brought it to the next level.

Has it started to eat its eco-system’s lunch?

This is unavoidable when projects move so fast. The border between the project’s core features and what other companies develop as add-ons is fuzzy. And it’s always changing. What some organizations see as an opportunity at first may become pointless at the next release of Kubernetes. But in the end, this is a community-driven project, and it’s the community that decides what should fit within Kubernetes and what should be left to someone else. That’s why it’s so important to be involved in the community on a day-to-day basis, to know what’s being built and discussed. When I asked Shannon Williams, co-founder of Rancher Labs, how he copes with this problem, he said you have to move faster: when part of your code is no longer required, just deprecate it and move on. Sure thing, you need to know how to move *that* fast, though!

Insights from reality

As a product guy, I get excited about technology, but I need to see a real, repeatable need for it. That’s why my ears were all for customers, end users and use cases.

The New York Times

Luckily, we heard a few use cases at the conference, the most notable of which was the New York Times using Kubernetes in production. Eric Lewis (@ericandrewlewis) took us through their journey: from handing developers a server, to letting developers provision applications using Chef, to containers with Fleet and then Kubernetes. Kubernetes looks like an end point, and we all know something else will come next, but according to them it’s definitely the best way to deliver developers’ infrastructure at present.

Not (yet) a fit for everything

What stood out the most from real use cases is how stateful workloads are not that seamless to manage with containers and Kubernetes. It was demonstrated to be possible, but still a pain to set up and maintain. The main reason is that state requires identity: you can’t simply throw away a database node (mapped to a pod) and start a brand new one; you need to replace it with an exact copy of the one that’s gone. Every application needs to handle state, therefore every application needs to go through this. Luckily, it was said that the Kubernetes community is already working on PetSet, which should address exactly this problem. Wait and see!
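
To make the identity point concrete, here is a minimal, purely illustrative Go sketch of the idea, not the PetSet API itself (which was still being designed at the time); every name in it is hypothetical:

```go
// Why state needs identity: a replacement member must come back with the
// same name and the same volume, not as a randomly named clone.
package main

import "fmt"

// Pet models a stateful member with a stable ordinal identity
// (e.g. db-0, db-1), each bound to its own persistent volume.
type Pet struct {
	Name   string // stable network identity, e.g. "db-0"
	Volume string // persistent volume that must follow that identity
}

// replacePet rebuilds a failed member under the SAME identity,
// reattaching its old volume instead of provisioning a fresh one.
func replacePet(failed Pet) Pet {
	return Pet{Name: failed.Name, Volume: failed.Volume}
}

func main() {
	cluster := []Pet{
		{Name: "db-0", Volume: "pv-db-0"},
		{Name: "db-1", Volume: "pv-db-1"},
	}
	// db-1 dies; the replacement is an exact copy of the one that's gone.
	cluster[1] = replacePet(cluster[1])
	fmt.Println(cluster) // [{db-0 pv-db-0} {db-1 pv-db-1}]
}
```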

But the reality today is that Kubernetes is capable of handling only parts of an application. In fact, one end customer told me that a great orchestration tool should be able to handle both containerised and non-containerised workloads. Thumbs up to him for reminding us that the rest of the IT world still exists!

Fast pace leads to caution

This could be a real problem when you have a nascent eco-system proposing equivalent but slightly different approaches to things. Which one to pick? Which horse to bet on? What if my chosen standard is the one that gets deprecated? Competition is good, even when it comes to open innovation, but it also drives a totally understandable caution among end customers. I kind of miss the time when the standard came first and products were built upon it; now we tend to welcome de facto standards instead, which take some time to prove their superiority.

In the end, what really matters is having more people using Kubernetes. More use cases will drive more innovation and bring the stabilisation required to convince even the most cautious. When the people on the conference stage were asked for advice on Kubernetes adoption, this is what they said:

  1. Make sure you have someone who supports you business-wise. Don’t leave it as a purely technology-driven decision; make sure the reasons, and the opportunities it unlocks, are well understood by the business owners of your organisation.
  2. Stick at it. You’ll encounter some difficulties at the beginning, but don’t give up. Stick at it and you’ll be rewarded.
  3. Focus on moving to containers. That’s the hard part of this revolution. Once you do that, adopting Kubernetes will be a no-brainer.

Right, move to containers. We’ve heard this for a while. And containers are one of those not-yet-standardised things, even though the Open Container Initiative was kicked off a while ago. Docker is trying to become the de facto standard here, but this seems to be driven by business strategy rather than by contribution to the open source community. In fact, where were the Docker representatives at KubeCon? I saw none of them.

Disclaimer: I have no personal involvement with KubeAcademy, the organizers of KubeCon, or with any of the mentioned companies and products. My employer is Flexiant and Flexiant was not an official sponsor of KubeCon. Flexiant is currently building a Kubernetes-based version of Flexiant Concerto.

Docker: not just containers. Thoughts from DockerCon Europe

Developers. Developers. Developers. I guarantee this was the most spoken word at DockerCon Europe 2014, the hottest software conference around, which took place in Amsterdam last week. I was lucky to get a ticket (it sold out in a couple of days!) and be part of this amazing event which, despite a few complaints about it being too much of a “marketing love fest”, offered a lot towards understanding market directions, trends and opportunities for software vendors.

So what is Docker? A container technology? No. Well, yes, but there is more to it. Despite being known as a container technology, Docker is mainly a tool for packaging, shipping and running applications. A piece of infrastructure is now a simple means to do something else and requires no infrastructure skills to consume. With containers mainstream, the industry has completed a further step towards making developers the main driver of IT infrastructure demand.

But at DockerCon, Docker employees presented the project as a “platform” whose goal is to make it easy to build and run distributed applications. A platform made of different components that are “included, but removable”. In fact, during one of the keynote sessions, Solomon Hykes (@solomonstre), creator of the Docker project, announced three such new components that are now available alongside the well-known Docker engine:

  • Docker Machine
  • Docker Swarm
  • Docker Compose

As the community demanded, these three components have not been incorporated into the same binary as the container engine. But with this launch, Docker is now officially stepping into orchestration, clustering and scheduling.

Apart from the keynote, many of the breakout sessions were run by Docker partners, showing lots of interesting projects and more building blocks for creative engineers. In other sessions, organizations like ING Bank, Société Générale and the BBC explained how they use Docker and the benefits they get from it, including how it helps build their continuous delivery pipelines. Besides adopting the required technology stack, continuous delivery was also described as a fundamental organizational change that companies eventually need to go through. To this point, my most popular tweet of the two days was a simple quote from Henk Kolk, Chief Architect at ING Bank Netherlands (@henkkolk).

Here’s my paraphrased version of Kolk’s session: break the silos, empower engineers, build small product development teams and ship decentralized micro services. Cultural and organizational change was described as being as important as the revolution in software architecture or cloud adoption. There can’t be one without the other. So you’d better be ready, get educated and embrace it.

Docker Machine

The project that caught most of our attention at Flexiant was Docker Machine. It enables Docker to create machines in different clouds directly from the command line. My colleague Javi (@jpgriffo), author of krane.io, has been looking at it since it was a proposal, and during the announcement of Docker Machine we managed to send the very first pull request for the inclusion of a driver for Flexiant Concerto into the project, ahead of VMware and GCE. If the Flexiant Concerto driver is merged over the next few days, Docker users will be able to go from “Zero to Docker” (as it was pitched by its author Ben Firshman – @bfirsh) in any cloud, with a single consistent driver. Exciting! We’re absolutely proud of this and we believe we have much more to give to the Docker community, given our expertise in cloud orchestration. Be prepared for more pull requests to follow.
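
For the curious, this is roughly the shape of what a Machine driver has to provide. The self-contained Go sketch below only approximates the real github.com/docker/machine driver contract, and the Concerto stub is hypothetical, not the code from our actual pull request:

```go
// A simplified sketch of a Docker Machine cloud driver.
package main

import "fmt"

// Driver is an approximation of the contract every cloud driver fulfils:
// Machine calls it to create a host and to learn how to reach it.
type Driver interface {
	DriverName() string      // e.g. "concerto", "amazonec2"
	Create() error           // provision the VM in the target cloud
	GetURL() (string, error) // address of the Docker daemon on the host
	Remove() error           // tear the VM down
}

// ConcertoDriver would talk to the Flexiant Concerto API.
type ConcertoDriver struct {
	APIEndpoint string
	hostIP      string
}

func (d *ConcertoDriver) DriverName() string { return "concerto" }

func (d *ConcertoDriver) Create() error {
	// The real driver would call the cloud API to boot a server and
	// install Docker on it; here we just pretend it succeeded.
	d.hostIP = "203.0.113.10"
	return nil
}

func (d *ConcertoDriver) GetURL() (string, error) {
	return fmt.Sprintf("tcp://%s:2376", d.hostIP), nil
}

func (d *ConcertoDriver) Remove() error { return nil }

func main() {
	var drv Driver = &ConcertoDriver{APIEndpoint: "https://concerto.example.com/api"}
	if err := drv.Create(); err != nil {
		panic(err)
	}
	url, _ := drv.GetURL()
	fmt.Println(drv.DriverName(), "->", url)
}
```

With one interface per cloud, the same `docker` workflow lands on any provider that ships a driver, which is exactly why getting Concerto in early mattered to us.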

The Risk

Docker has been blowing minds since the first days of the famous video (21 months ago!). It makes so much sense that it’s been adopted at a speed we’ve never seen in any open source project before. Even those who do not understand it are trying to jump on the bandwagon just to leverage its brand and market traction. This doesn’t come without risks. With a large community, an eco-system with important stakes and a commercial entity behind it (Docker, Inc.), there will be conflicts of interest, with “overstepping” onto the domain of those partners that helped make Docker what it is today. We’ve already seen this with the CoreOS launch of Rocket a couple of days ago.

Docker, Inc. needs to drive revenue and, despite seeing Solomon Hykes put a lot of effort into keeping an impartial and honest governance over his baby, I’m sure it’s not going to be a painless process. Good luck Solomon!

The Opportunity

High risks usually mean high potential returns. The return here can be high not just for Docker, Inc., but for the whole world of IT. Learning Docker and understanding its advantages can drive the development of applications in a totally different way. Not having to create a heavy, resource-wasting virtual machine (VM) for everything will boost the rise of micro services, distributed applications and, by reflection, cloud adoption. With this come scalability, flexibility, adaptability, innovation and progress. I don’t know if Docker will still be such a protagonist a year or two from now, but what I do know is that it will have fundamentally changed the way we build and deliver software.

This post originally appeared on Flexiant.com.

Virtualization no longer matters

There is no doubt. The product is there. The vision, too. At times they leave some space for arrogance as well but, come on, they are the market leader, aware of being far ahead of anybody else in this field. A field they actually invented themselves. We almost feel like forgiving that arrogance. Don’t we?

The AWS summit 2013 in London was, one more time, confirmation that the cloud infrastructure market is there, that the potential is higher than ever and that Amazon “gets” it, drives it and dominates it quite undisturbed. All the others struggle to distinguish themselves among a huge number of technology companies, old and new, strongly convinced of having jumped into the cloud business, even though, I’m pretty sure, the majority of their executives think that cloud is just the new name for hosting services.

Before going forward, I want to thank Garret Murphy (@garrettmurphy) for transferring his AWS summit ticket to me without even knowing who I was, simply and kindly responding to my tweeted inquiry. I wish him and his Dublin-based startup 247tech.ie the amount of luck that, coupled with great talent, leads to success.

Now, I won’t go through the whole event: this is a roadshow and London wasn’t its first edition, so much has been said already here and here. The general perception I had is that AWS is still focusing on presenting the advantages of cloud-based over on-premises IT infrastructure, showing off the rich toolset they have put in place, and bringing MANY (I counted nearly 20) customers to testify how they are effectively using the AWS cloud and what advantages they got by doing so. OK, most of them were the usual hyper-scale Internet companies, but I’ve seen the effort to bring enterprise testimonials like ATOC (the Association of Train Operating Companies of the UK). However, they all said they were using AWS only for web-facing applications, staging environments or big data analytics. The usual stuff, which we know to be cloud friendly.

What really impressed me was the OpsWorks demo. OpsWorks was released not long ago as the nth complementary Amazon Web Service, this one meant to help operate resilient, self-healing applications in the cloud. Aside from the confusion around what-to-use-when, given the large number of tools available (and without counting those from third parties, which are growing uncontrolled day by day), there is one evident trend arising from it.

For those who don’t know OpsWorks: it is an API-driven layer built on top of Chef to automate the setup, deployment and un-deployment of application stacks. An attempt at DevOps automation. How this is going to meet customers’ actual requirements while still keeping things simple (i.e. without having to provide too many options) is not clear yet.
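
As a rough illustration of what “API-driven” means here, this is how creating a stack looks from code. A sketch only: it uses the later AWS SDK for Go rather than anything shown at the summit, and the ARNs and names are placeholders, not real resources:

```go
// Create an OpsWorks stack programmatically via the AWS SDK for Go.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/opsworks"
)

func main() {
	// Region here configures the API client; the stack's own region
	// is a separate parameter on the request below.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	}))
	svc := opsworks.New(sess)

	// Placeholder ARNs: OpsWorks needs a service role and a default
	// EC2 instance profile that already exist in your account.
	stack, err := svc.CreateStack(&opsworks.CreateStackInput{
		Name:                      aws.String("my-php-app"),
		Region:                    aws.String("eu-west-1"),
		ServiceRoleArn:            aws.String("arn:aws:iam::123456789012:role/aws-opsworks-service-role"),
		DefaultInstanceProfileArn: aws.String("arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role"),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("created stack:", aws.StringValue(stack.StackId))
}
```

From there, further calls would add layers, instances and apps to the stack, with Chef recipes doing the actual machine setup.
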
During the session demonstrating OpsWorks, the AWS solution architect remarked that no custom AMIs (Amazon Machine Images) are available for selection while creating an application stack. Someone in the audience immediately complained about this on Twitter, probably because he wasn’t happy about having to re-build all his customizations as Chef recipes on top of lightweight base OS images, instead of simply reusing his custom VM image.

In fact, there are several advantages to moving the actual machine setup to a post-bootstrap automation layer. For example, the ease of upgrading software versions (e.g. Apache, MySQL) simply by changing a line in a configuration file instead of having to rebuild the whole operating system image. But mostly because, by keeping OS images adherent to the clean vendor releases, you will probably find them available at other cloud providers, making your application setup completely cross-cloud. Of course there are disadvantages too, including the delay added by operations like software download and configuration that may be necessary each time you decide to scale up your application.
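
A toy Go sketch of that first advantage, with all names invented (this is not Chef itself): the desired software versions live in data, so upgrading Apache means editing one entry and letting the automation layer converge the node, instead of rebuilding and re-uploading an OS image.

```go
// Desired state as data: upgrade by editing one line, not one image.
package main

import "fmt"

// desired maps component -> version; in Chef this would live in
// attributes or roles, here it is just an in-memory map.
var desired = map[string]string{
	"apache2":      "2.4.7", // bump this one line to upgrade Apache
	"mysql-server": "5.6",
}

func main() {
	// A real automation layer would converge the node to this state;
	// we just print the steps it would run on a clean vendor OS image.
	for pkg, version := range desired {
		fmt.Printf("install %s=%s\n", pkg, version)
	}
}
```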

Cross-cloud application deployment. No vendor lock-in. Cool. There is actually a Spanish startup called Besol that is building its entire (amazing) product, “Tapp into the Cloud”, on the management of cross-cloud application stacks, leveraging a rich library of Chef cookbook templates. And while I was writing this post on a flight from London, Jason Hoffman (@jasonh) was being interviewed by GigaOM and, while announcing better integration between Joyent and Chef, he mentioned compatibility between cloud environments as a major advantage of using Chef.

What we’re observing is a major shift from leveraging operating system images towards the adoption of automation layers that can quickly prepare whatever application you want your virtual server to host. That means that one of the major advantages introduced by virtualization technology, the software manipulation of OS images, itself one of the triggers of the rise of cloud computing, no longer matters.

Potentially, with the adoption of automation platforms like Chef, Puppet or CFEngine, service providers could build a complete cloud infrastructure service without employing any kind of hypervisor, and several recent moves in the industry seem to confirm this trend.

Of course there are still advantages to using a hypervisor: certain applications require architectures made of many micro-instances performing parallel computing, so it’s still necessary to slice a server into many small portions. However, with silicon processors increasing their core counts and their ability to run threads, virtualization may not be so important for the cloud anymore.

In the end, I think we can no longer say that virtualization is the foundation of cloud computing. The correct statement could perhaps be that virtualization inspired cloud computing. But the future may leave even less space for that.