The New Cloud Management Wheel Is Here

This post originally appeared on the Gartner Blog Network.

If you ever wondered what cloud management means and what it encompasses, Gartner has the answer. The newest cloud management framework has just been published as part of the research “Solution Criteria for Cloud Management Tools” (paywall).

Cloud management comprises seven functional areas and five cross-functional attributes. Each functional area is specific to one use case, whereas the cross-functional attributes aim at broader goals that are common to multiple use cases. The outer ring of the "wheel" in the figure below represents the functional areas, and the inner ring characterizes the cross-functional attributes.

The research also "double clicks" on each category and provides a total of 201 capabilities that organizations should possess to manage public and private clouds. These capabilities are presented in the form of requirements, which can be used to evaluate and select cloud management tools. The research comes in the form of a toolkit that clients can download and customize to power their request-for-proposal (RFP) efforts.

Major updates to the research include:

  • Shift from platforms to tools: Although cloud management platforms (CMPs) are still out there, they're no longer top of mind for clients, according to our inquiries. In the last couple of years, we've observed a shift in interest from broad, general-purpose platforms to best-of-breed tools with deeper functionality in a given area.
  • Addition of observability criteria: These days, observability is certainly stealing the spotlight in the monitoring space. We added observability capabilities and adopted the term as part of the category name “Monitoring and Observability.”
  • AI as cross-functional attribute: AI-powered analytics now touches so many aspects of cloud management that we made it a cross-functional attribute (the middle ring of the wheel), alongside the other four: automation, brokerage, governance and life cycle.

Often, organizations purchase a cloud management tool and implement their management strategy solely based on its available capabilities. With this research, we suggest the opposite approach. Define first what you need to manage and then select the tools that can provide you with the functionality you need.

You can access the full research at “Solution Criteria for Cloud Management Tools” (paywall). Should you want to discuss further, feel free to schedule an inquiry call with me by emailing inquiry@gartner.com or through your Gartner representative.

Follow me on Twitter (@meinardi) or connect with me on LinkedIn for further updates on my research. Looking forward to talking to you!

My Research on Cloud Cost Management and Optimization Is Now Available For Free!

This post originally appeared on the Gartner Blog Network.

I am proud to announce that my research on cloud cost management and optimization is now available for free at this link. Gartner made this research public to help organizations in this difficult moment of dealing with a global pandemic and economic recession. The research was selected because it speaks to pandemic-driven business priorities such as cloud adoption and cost optimization.

Gartner has been publishing guidance on managing costs of cloud IaaS and PaaS for the last few years. This practice continues to evolve due to new cloud provider capabilities, organizations increasing their cloud maturity and cloud services becoming more complex. Earlier this year, my colleague Traverse Clayton and I published the latest edition of our cost management framework (depicted in the figure below). This update has drawn a lot of interest from clients, because it helps organizations accelerate cloud adoption in a governed fashion, while unlocking cost savings and minimizing the risk of overspending.

The framework describes the technical capabilities that organizations must develop to manage cloud costs successfully. Our guidance has evolved to encompass new aspects of planning, tracking and optimizing public cloud costs on an ongoing basis. Examples of updates included in this edition are:

  • A clearer delineation between "Reduce" and "Optimize." Reducing costs is about leveraging more cost-effective configurations without impacting the application architecture; these techniques include rightsizing, scheduling and programmatic discounts. Optimizing costs requires implementing architectural changes that drive costs down, such as moving from compute instances to event-driven, serverless function-as-a-service.
  • The addition of techniques to incentivize financial responsibility. Centralized IT does not want to be held accountable for the spend generated by architectural decisions made by other teams, such as application development and DevOps. Therefore, the framework includes more aspects that help “shift left” the budget accountability. These techniques include budget approvals, dedicated dashboards, cost optimization recommendations and the institution of “leader boards” that highlight the most disciplined cloud consumers.
  • The addition of the correlation of cloud costs with business value. Many digital business applications do not have steady budgets. Their cost often varies with the number of transactions or users that they handle. The framework helps identify business KPIs and calculate their ratio to cloud costs. Monitoring the trend of that ratio allows organizations to manage the costs of applications with variable demand in relation to the value they receive from cloud services (a minimal sketch follows this list). Furthermore, such an approach allows for measuring the efficiency of the cloud cost management practice itself.
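To make that last point concrete, here is a minimal sketch, entirely my own illustration rather than part of the research, of tracking the ratio between cloud spend and a business KPI such as transactions processed. The figures and data source are hypothetical.

```python
# Illustrative only: correlate monthly cloud spend with a business KPI
# (e.g., transactions processed) and watch the trend of the ratio.

monthly_data = [
    # (month, cloud_cost_usd, transactions) -- hypothetical figures
    ("2020-01", 42_000, 1_200_000),
    ("2020-02", 47_500, 1_450_000),
    ("2020-03", 55_000, 1_500_000),
]

for month, cost, transactions in monthly_data:
    cost_per_txn = cost / transactions
    print(f"{month}: ${cost_per_txn:.4f} per transaction")

# A rising cost-per-transaction suggests efficiency is degrading even if
# absolute spend growth is "justified" by business growth; a falling ratio
# suggests the cost management practice is becoming more efficient.
```

Watching the unit cost rather than the absolute bill is what lets teams with variable demand tell healthy growth apart from creeping waste.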

Read the complete cloud cost management and optimization research for free at this link. I hope you find it useful and I welcome your feedback at marco.meinardi@gartner.com. Should you be a Gartner client wanting to discuss this topic in more detail, you can schedule an inquiry call with me by emailing inquiry@gartner.com or through your Gartner representative.

Follow me on Twitter (@meinardi) or connect with me on LinkedIn for further updates on my research. Looking forward to talking to you!

A Comparison of Public Cloud Cost Optimization Tools is Now Available

This post originally appeared on the Gartner Blog Network.

If you’re using public cloud infrastructure and platform services, I bet that you’ve been thinking about adopting a tool to cut down costs. You’ve been told that there is some inherent waste in your cloud spending and you want to address that. I’m also sure that a number of vendors told you that they can help with that. But to what extent? If cost management and optimization are becoming “table stakes” in the cloud management market, there can be a huge difference in the capabilities of each available solution. Some solutions may just scratch the surface and simply report for underutilized instances – when CPU have been low in usage in the last week or so. But these solutions will leave it up to you to figure out the rest. Other solutions may automatically execute precise instance rightsizing across types, families and regions, using AI-based pattern recognition and ML inference.

Both cloud providers and third-party vendors have invested in developing cost optimization capabilities for public cloud services. In this complex scenario you may wonder: which tools will allow me to truly maximize savings while minimizing performance risks? The good news is that Gartner just published research to answer this exact question and it’s available on gartner.com right now.

My colleague Brian Adler and I have just published the following two research notes, both available behind paywall:

The notes provide a side-by-side comparison of each solution based on a common set of criteria. Examples of criteria include compute instance rightsizing, block storage rightsizing, unused resource decommissioning and reservation portfolio management. For each criterion, vendors have been scored with grades such as “Low”, “Medium” or “High”.

Gartner clients can use the two research notes to understand what they can do using cloud providers' native tools and which gaps they can fill with third-party tools. Furthermore, clients can use the provided criteria to assess the capabilities of any other public cloud cost optimization tool that hasn't been included in this research.

This research is part of a series of Solution Comparisons that we published to assess tools in various areas of the Gartner cloud management wheel. Read the full research notes if you want to know the results of this comparative assessment. You can also schedule an inquiry call (inquiry@gartner.com) with me or my colleague Brian Adler if you want to have a private conversation about our research findings. If you don't have access to this research and you'd like to, I'm sure your Gartner representative will be more than happy to help.

Lastly, feel free to follow me on Twitter (@meinardi) or connect with me on LinkedIn for further updates on my research. Looking forward to talking to you!

AWS Just Made Their Management Tools Ready for Multicloud

This post originally appeared on the Gartner Blog Network.

I am just back home after spending last week at AWS re:Invent in tiresome, noisy, vibrant and excessive Las Vegas. At Gartner, I cover cloud management and governance, and I was disappointed not to hear much about it in any of the keynotes. I get it, management can sometimes be perceived as a boring necessity. However, it is also an opportunity to make a cloud platform simpler. And that's something that AWS needs. Badly.

Despite the absence of highlights in the keynotes, I spotted something interesting while digging through the myriad of November announcements. What apparently got lost in the re:Invent noise is that AWS is opening up some of their key management tools to support resources outside of the AWS cloud. Specifically, AWS CloudFormation and AWS Config now support third-party resources. And that’s a big deal.

The Lost Announcements

The CloudFormation announcement reports that AWS has changed the tool's architecture to implement resource providers, much in line with what HashiCorp Terraform is also doing. Each resource provider is an independent piece of code that enables support in CloudFormation for a specific resource type and API. A resource provider can be developed independently from CloudFormation itself and by non-AWS developers.

AWS plans to promote resource providers through the open source model and certainly has the ability to grow a healthy community around them. The announcement also says that a number of resource providers will shortly be available for third-party solutions, including Atlassian, Datadog, Densify, Dynatrace, Fortinet, New Relic and Spotinst. AWS is also implementing this capability for native AWS resources such as EC2 instances or S3 buckets, hinting that this capability may not be just an exception, but a major architectural change.
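To illustrate what this could look like in practice, here is a hedged sketch: a template fragment declaring a hypothetical third-party monitor next to a native EC2 instance, submitted with boto3. The "Datadog::Monitors::Monitor" type name and its properties are illustrative assumptions based on the announcement, not a verified schema, and the AMI ID is a placeholder.

```python
# Sketch: a CloudFormation template mixing a native AWS resource with a
# third-party resource type served by a resource provider. The third-party
# type name and its properties below are illustrative assumptions.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-0abcdef1234567890",  # placeholder AMI
                           "InstanceType": "t3.micro"},
        },
        "HighCpuMonitor": {
            "Type": "Datadog::Monitors::Monitor",   # hypothetical third-party type
            "Properties": {
                "Type": "metric alert",
                "Query": "avg(last_5m):avg:aws.ec2.cpuutilization{*} > 90",
                "Name": "High CPU on web tier",
            },
        },
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="web-with-third-party-monitoring",
    TemplateBody=json.dumps(template),
)
```

The point is that, once the relevant resource provider is registered, the non-AWS resource is modeled and orchestrated in the same template and the same stack life cycle as the native ones.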

In the same way, AWS Config now also supports third-party resources. The same resource providers used by CloudFormation enable AWS Config to manage inventory, but also to define rules to check for compliance and to create conformance packs (i.e., collections of rules). All of this applies to non-AWS resources as well.
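For example, once a resource provider registers a third-party type, I'd expect it to be queryable from the AWS Config inventory much like a native one. The sketch below uses the standard boto3 Config APIs; whether Config accepts a third-party type string through this exact call, and the type name itself, are assumptions on my part.

```python
# Sketch: querying AWS Config inventory for a (hypothetical) third-party
# resource type registered through a resource provider.
import boto3

config = boto3.client("config", region_name="us-east-1")

# Native resources have always been discoverable like this...
native = config.list_discovered_resources(resourceType="AWS::EC2::Instance")

# ...and third-party types registered via resource providers should, per the
# announcement, become addressable the same way (type name is illustrative).
third_party = config.list_discovered_resources(resourceType="Datadog::Monitors::Monitor")

for item in native["resourceIdentifiers"] + third_party["resourceIdentifiers"]:
    print(item["resourceType"], item["resourceId"])
```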

Why is This a Big Deal?

With this launch, AWS addresses one of the major shortcomings of its management tools: being limited to a single platform, the AWS cloud. As of today, anyone can develop resource providers for Microsoft Azure or Google Cloud Platform resources. This possibility makes AWS CloudFormation and AWS Config de facto ready to become multicloud management tools. And we all know what AWS thinks about multicloud, don't we?

Furthermore, AWS is now challenging the third-party management market, at least within the provisioning and orchestration, inventory and classification, and governance domains (see this Gartner framework for reference). AWS CloudFormation now incorporates more capabilities of HashiCorp Terraform. It can also be used to model and execute complex orchestration workflows that organizations normally handle with platforms like ServiceNow. AWS Config can now aim to become a universal CMDB that keeps track of resource inventory and configuration history from anywhere.

Both AWS CloudFormation and AWS Config are widely adopted tools. Customers could be incentivized to extend their use beyond AWS instead of selecting a new third-party tool that would require a new contract to sign and a new vendor to manage. Does this mean that AWS has issued a death sentence to the third-party management market that makes up much of its ecosystem? Certainly not. But these announcements speak to the greater ambition of AWS and will force third-party vendors to find new ways to continue to add value in the long term. Maybe the resource provider ecosystem will not develop, and customers will continue to prefer independent management vendors. Or maybe not.

In conclusion, it was disappointing not to hear this message loud and clear at re:Invent this year, especially compared to the amount of noise we heard around the launches of Google Anthos and Azure Arc. But there is certainly a trend of all the major providers preparing their management tools to stretch beyond their respective domains. How far they want to go is yet to be determined.

Evaluating Cloud Management Platforms and Tools With The Gartner Toolkit

This post originally appeared on the Gartner Blog Network.

After several months of work, hundreds of customer calls and dozens of vendor briefings, it's finally out there: Gartner's "Evaluation Criteria for Cloud Management Platforms and Tools" has just been published and is now available to Gartner clients. The research (available behind paywall at this link) contains 215 evaluation criteria divided into eight categories and four additional attributes (see the figure below). Gartner clients can use this research to assess cloud management vendor solutions and determine which areas of management they cover. Furthermore, clients will be able to compare the results of the assessments to select the cloud management platforms (CMPs) and tools that best align with their requirements.

The eight categories above serve as the primary scope for each criterion. The four attributes serve as additional scope and apply to criteria across all eight categories. For example, the "provisioning policies" criterion belongs to the "Provisioning and Orchestration" category, but it's also tagged with the "Governance" and "Life Cycle" attributes. This two-dimensional classification reflects the type of questions we receive from clients and want to answer with this research. For instance, clients often ask "what are the functions required to manage cloud costs?", but also "how do I evaluate cloud governance tools?". The approach we've taken gives clients the ability to quickly identify criteria from multiple overlapping perspectives.
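A minimal sketch of how that two-dimensional tagging can be used in practice follows; the criteria names, categories and tags below are invented for illustration and are not taken from the research itself.

```python
# Illustration of the category + attribute tagging: each criterion has one
# primary category and any number of cross-cutting attributes, so the same
# list can be sliced by either dimension. All entries are made up.
criteria = [
    {"name": "Provisioning policies",
     "category": "Provisioning and Orchestration",
     "attributes": {"Governance", "Life Cycle"}},
    {"name": "Budget alerting",
     "category": "Cost Management",
     "attributes": {"Governance"}},
    {"name": "Blueprint versioning",
     "category": "Provisioning and Orchestration",
     "attributes": {"Life Cycle"}},
]

# "What are the functions required to manage cloud costs?" -> slice by category
cost_criteria = [c for c in criteria if c["category"] == "Cost Management"]

# "How do I evaluate cloud governance tools?" -> slice by attribute
governance_criteria = [c for c in criteria if "Governance" in c["attributes"]]

print([c["name"] for c in cost_criteria])
print([c["name"] for c in governance_criteria])
```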

Furthermore, all categories present a breakdown into “Required”, “Preferred” and “Optional”. This further classification is based on what Gartner thinks should be required for an enterprise-grade solution. However, clients are encouraged to tailor the evaluation criteria research with what they consider important for their organization. To do this, the research comes with an attached editable spreadsheet that clients can manipulate to prepare a tailored version of the evaluation criteria to support their RFI/RFP efforts.

Because CMPs on the market tend to provide a set of functions that differs based on the chosen cloud platform, clients should use this research to run a separate assessment for each of the cloud platforms they intend to use. For example, a CMP may support Amazon CloudWatch but not Azure Monitor as a data source. Therefore, the CMP should be scored as "Yes" for AWS and "No" for Microsoft Azure with respect to the "Cloud-platform-native monitoring integration" criterion.

The wheel in the above figure has evolved a bit since the version in my previous post. That evolution was a necessary step as we dove into the actual requirements beneath each category. We are happy with the results of this research and we're confident that Gartner clients will be as well. We encourage all clients to use the Evaluation Criteria for Cloud Management Platforms and Tools and share their feedback for future improvement or refinement.

To engage with me, feel free to schedule an inquiry call (inquiry@gartner.com), follow me on Twitter (@meinardi) or connect with me on LinkedIn. Looking forward to talking to you!

Upcoming Research: Cloud Management Platforms

This post originally appeared on the Gartner Blog Network.

At Gartner, we’re often asked how to select cloud management platforms (CMP). We’ve been asked that question in the past, when a CMP was the software to transform virtualized data centers into API endpoints. We’re being asked the same question today, when a CMP is used to manage public clouds.

At Gartner Catalyst 2017, one of the largest gatherings of technical professionals, I remarked in one of my presentations how confused the market is. Even vendors don't know whether they should call their product a CMP or not. In the last few years, the cloud management market has rapidly evolved. Public cloud providers have constantly released more native management tools. Organizations have continued to adopt public cloud services and have gradually abandoned the idea of building a cloud themselves. Public cloud services require the adoption of new processes and new tooling, such as in the areas of self-service enablement, governance and cost management. Finally, public cloud has to coexist and cooperate with on-premises data centers in hybrid scenarios.

At Gartner, we’re committed to help our client organizations define their processes, translate them into management requirements and map them to market-leading tools. With the public cloud market maturing and playing a key role in the future of IT, we are now seeing the opportunity to make clarity and defining the functions that a CMP must provide.

My colleague Alan Waite and I are drafting the Evaluation Criteria (EC) for Cloud Management Platforms. An EC is a Gartner for Technical Professionals (GTP) branded research that lists the technical criteria of a specific technology and classifies them as required, preferred and optional. Clients can take an EC and use it to assess a vendor’s technical functionality. They can use it to form the basis for an RFP or even to simply define their management requirements. With our upcoming EC for CMPs, client organizations will be able to shed light on the confused cloud management market. They will be able to understand which tools to use for which management tasks and how to compare them against one another.

I’m bullish about the outcome of this research as I’m so looking forward to its publication. I’m extremely thankful to all the extended analyst community at Gartner who’s collaborating with me and Alan to increase the quality of this important piece of research. If you’re an existing Gartner client, don’t forget to track the “Cloud Computing” key initiative to be notified about the publication. if you’re willing to contribute to this research, feel free to schedule an inquiry or a vendor briefing with myself or Alan. Looking forward to the next update. Stay tuned!

Serverless, Servers and Cloud Management at AWS re:Invent 2017

This post originally appeared on the Gartner Blog Network.

In the last few days, the press has been dominated by countless interpretations of the myriad AWS re:Invent announcements. Every article I read was trying (hard) to extract some kind of trend or direction from the overall conference, but each one merely succeeded in providing a single, narrow perspective. AWS has simply tried to position itself as the "everything IT" (as my colleague Lydia Leong said in a tweet). With so many announcements (61, according to AWS), across so many areas and in such a short time, it is extremely difficult for anyone to understand their impact without a more thorough analysis.

However, I won’t refrain from giving you also my own perspective, noting down a couple of things that stood out for me.

Serverless took the driver’s seat across the conference, no doubt. But servers did not move back into the trunk as you’d have expected. Lambda got a number of incremental updates. New services went serverless, such as Fargate (containers without the need to manage the orchestrator cluster) and the Aurora database. Finally, Amazon is headed to deliver platform as a service as it should’ve been from day one. A fully multi-tenant abstraction layer that handles your code, and that you pay only when your code is running.

However, we also heard about Nitro, a new lightweight hypervisor that can deliver near-metal performance. Amazon also announced bare-metal instances. These two innovations have been developed to attract more of the humongous number of workloads out there that still require traditional servers to run. Even as the future seems to be going serverless, server innovation is still relevant. Why? Because by lowering the hypervisor's overhead, Nitro can lead to better node density, better utilization and, ultimately, cost benefits for end users.

With regard to my main area of research, I was underwhelmed that only a couple of announcements related to cloud management. Amazon announced an incremental update to CloudTrail (related to Lambda again, by the way) and the expansion of Systems Manager to support more AWS services. Systems Manager is absolutely a step toward what should be a more integrated cloud management experience. However (disclaimer: I've not seen it in action yet), my first impression is that it still focuses only on gaining (some) visibility and on automating (some) operational tasks. It's yet another tool that needs integration with many others.

My cloud management conversations with clients tell me that organizations are struggling to manage and operate their workloads in the public cloud, especially when these coexist with their existing processes and environments. Amazon needs to do more in this space to feel less like just another technology silo and to deliver a more unified management experience.

When Andy Jassy and Werner Vogels were asked about multicloud, they both dismissed it. They said that most organizations stick with one primary provider for the great majority of their workloads. The reason? Because organizations don't accept working at the least common denominator (LCD) between providers. Nor do they want to become fluent in multiple APIs.

The reality is that multicloud doesn't necessarily mean having to accept the LCD. Multicloud doesn't imply having a cloud management platform (CMP) for each and every management task. It doesn't imply having to make each and every workload portable. The LCD between providers would indeed be too much of a constraint for anyone adopting public cloud services.

On the contrary, we see that many organizations are willing to learn how to operate multiple providers. They want to do that to be able to place their workloads where it makes the most sense, but also as a risk mitigation technique. Should they ever be forced to exit one provider, they want to be ready to transfer their workloads to another (obviously, with a certain degree of effort). Nobody wants to be constrained to work at the LCD level, but this is not a good excuse to stay single-cloud.

Amazon continues to innovate at an incredible pace, which seems to accelerate every year. AWS re:Invent 2017 was no exception. Now, organizations have more cloud services to support their business. But they also have many more choices to make. Picking the right combination of cloud services and tools is becoming a real challenge for organizations. Will Amazon do something about it? Or should we expect hundreds more service announcements at re:Invent 2018?

New Research: How To Manage Public Cloud Costs on Amazon Web Services and Microsoft Azure

This post originally appeared on the Gartner Blog Network.

Today, I am proud to announce that I just published new research (available here) on how to manage public IaaS and PaaS cloud costs on AWS and Microsoft Azure. The research illustrates a multicloud governance framework that organizations can use to successfully plan, track and optimize cloud spending on an ongoing basis. The note also provides a comprehensive list of cloud providers’ native tools that can be leveraged to implement each step of the framework.

In the last 12 months of client inquiries, I sensed remarkable enthusiasm for public cloud services. Every organization I talked to was at some stage of public cloud adoption. Almost nobody was asking me "if" they should adopt cloud services, but only "how" and "how fast". However, these conversations also showed that only a few organizations had realized the cost implications of public cloud.

In the data center, organizations were often over-architecting their deployments in order to maximize the return-on-investment of their hardware platforms. These platforms were refreshed every three-to-five years and sized to serve the maximum expected workload demand over that time frame. The cloud reverses this paradigm and demands that organizations size their deployment much more precisely or they’ll quickly run into overspending.

Furthermore, cloud providers' price lists, pricing models, discounts and billing mechanisms can be complex to manage even for mature cloud users. Understanding the most cost-effective option to run certain workloads is a management challenge that organizations are often unprepared to address.

Using this framework will help you take control of your public cloud costs. It will help your organization achieve operational excellence in cost management and realize many of the promised cost benefits of public cloud.

The Gartner’s framework for cost management comprises five main steps:

  • Plan: Create a forecast to set spending expectations.
  • Track: Observe your actual cloud spending and compare it with your budget to detect anomalies before they become a surprise (a minimal sketch of this step follows the list).
  • Reduce: Quickly eliminate resources that waste cloud spending.
  • Optimize: Leverage the provider’s discount models and optimize your workload for cost.
  • Mature: Improve and expand your cost management processes on a continual basis.
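As a hedged illustration of the "Track" step, the sketch below compares actual month-to-date spend, retrieved via the AWS Cost Explorer API, with a budget figure. The budget value and the alert threshold are placeholders I chose for the example, not recommendations from the research.

```python
# Sketch of the "Track" step: compare month-to-date spend with the budget
# and flag an anomaly before it becomes a surprise. Budget and threshold
# values are placeholders.
from datetime import date
import boto3

MONTHLY_BUDGET_USD = 50_000.0   # hypothetical budget
ALERT_THRESHOLD = 0.8           # warn at 80% of budget

ce = boto3.client("ce", region_name="us-east-1")
today = date.today()
start = today.replace(day=1).isoformat()
end = today.isoformat()          # note: Cost Explorer needs End later than Start

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

spend = sum(
    float(period["Total"]["UnblendedCost"]["Amount"])
    for period in response["ResultsByTime"]
)

if spend > MONTHLY_BUDGET_USD * ALERT_THRESHOLD:
    print(f"WARNING: ${spend:,.2f} spent, {spend / MONTHLY_BUDGET_USD:.0%} of budget")
else:
    print(f"On track: ${spend:,.2f} spent so far this month")
```

In practice the same check would be scheduled daily and scoped per team or application via cost allocation tags, which is where the "Plan" and "Mature" steps come back into play.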

If you recognize yourself in the above challenges, this new research note is a highly recommended read. For a comprehensive description of the framework and the corresponding mapping of AWS and Microsoft Azure cost management tools, see "How To Manage Public Cloud Costs on AWS and Microsoft Azure".

Why developers won’t go straight to the source

I’m so excited. On last Wednesday Flexiant has announced the acquisition of the Tapp technology platform and business. I met the guys behind it quite a while ago and I have never refrained from remarking how great their technology is (see here). I recognized a trend in their way of addressing the cloud management problem and I’m so glad to be part of, right now.

Disclaimer: I am currently working for Flexiant as Vice President Products. I have endorsed this acquisition and am fully behind the reasons for it and convinced of its potential. This is my personal blog, and whatever you read here has not been agreed upon with my employer in advance; it therefore represents my personal opinion.

Right after the acquisition (read more about it here), there was tremendous noise on social networks and in the press. David Meyer (@superglaze) of GigaOm in particular wrote up a few interesting comments and captured the reasoning behind it well, but he also ended his article with an open question:

“This [the Tapp technology platform] would help such players [Service Providers] appeal to certain developers that are currently just heading straight for EC2 or Google.
 
Of course, this is ultimately the challenge for the likes of Flexiant – can anything stop those developers going straight to the source? That question remains unanswered.”

Well, I’d like to answer that question and say why I’m actually convinced there is a lot of value to add for multi-cloud managers.

Much has been written these days about the business side of the acquisition and I don't have anything meaningful to add. Instead, I would like to raise a few interesting points from a technology point of view (that's my job, after all) and unveil some value that is perhaps not so obvious at first sight.

Multi-cloud management

Multi-cloud management per se covers a very broad spectrum of meanings. Some multi-cloud managers focus on brokerage, and therefore primarily on getting you the best deal out there. Although this is a good example of how to provide "multi-cloud" value, I'm still wondering how they can actually find a way to compare apples with oranges. In fact, cloud infrastructure service offerings are so different and heterogeneous that being simply a cloud broker makes it extremely difficult to succeed, deliver real value and differentiate. So, point number one: Tapp isn't a cloud brokerage technology platform.

Other multi-cloud managers deliver value by adding a management layer on top of existing cloud infrastructures. This management layer may be focused on specific verticals like scaling Internet applications (e.g. Rightscale) or providing enterprise governance (e.g. Enstratius, now Dell Multicloud Manager). By choosing a vertical, they can address specific requirements, cut off the unnecessary stuff from the general purpose cloud provider and enhance the user experience of very specific use cases. That’s indeed a fair point but not yet what Tapp is all about.

So why, when using Tapp, won't developers "go straight to the source"? Well, first of all, let's make clear that developers are already at the source. In fact, to use any multi-cloud manager you need an AWS account or a Rackspace account (or any other supported provider account). You need to configure your API keys in order to enable communication with the cloud provider of your choice. So if someone is using your multi-cloud manager, it means they prefer it over the management layer provided by "the source".

The cloud provider lock-in

One of the reasons behind Amazon’s success is the large portfolio of services they rolled out. They’re all services that can be put together by end users to build applications, letting developers focus just on their core business logic, without worrying too much about queuing, notifying, load balancing, scaling or monitoring. However, whenever you use one of the tools like ELB, Route53, CloudWatch or DynamoDB you’re locking yourself into Amazon. The more you use multi-tenant proprietary services that exist only on a specific provider, you won’t be able to easily migrate your application away.

You may claim to be "happy" to be locked into a vendor who's actually solving your problems so well, but there are a lot of good reasons ("Why Cloud Lock-in is a Bad Idea") to avoid vendor lock-in as a principle. Many times, this is one of the first requirements of those enterprises that everyone is trying to attract to the cloud.

Deploying the complete application toolkit

Imagine there were a way to replicate those services on another cloud provider by building them from the ground up on top of some virtual servers. Imagine this could be done by a management layer, on demand, on your cloud infrastructure of choice. Imagine you could consume and control those services always using the same API. That would enable your application to be deployed in a consistent manner across multiple clouds, relying exclusively on the ability to spin up virtual servers, which every cloud infrastructure provider offers.
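Here is a rough sketch, purely my own illustration and not Tapp's actual API, of what such a provider-agnostic service interface could look like: the caller asks for a load balancer and gets back the same kind of handle regardless of which cloud the underlying virtual servers run on. All names and endpoints are made up.

```python
# Rough illustration of the idea (not Tapp's actual API): the management layer
# builds each service out of plain virtual servers on the chosen provider and
# exposes one consistent interface to the caller.

class LoadBalancerService:
    """Same interface everywhere; only the provisioning backend differs."""

    def __init__(self, provider):
        self.provider = provider  # "aws", "rackspace", "joyent", ...

    def deploy(self, backends):
        # In a real system this would spin up vanilla VMs on self.provider
        # and configure load-balancing software (e.g. HAProxy) on them.
        print(f"Provisioning load balancer on {self.provider} for {backends}")
        return f"https://lb.{self.provider}.example.com"  # made-up endpoint

# The calling code is identical no matter which cloud is underneath:
for provider in ("aws", "rackspace", "joyent"):
    lb = LoadBalancerService(provider)
    endpoint = lb.deploy(backends=["10.0.0.11", "10.0.0.12"])
    print("Load balancer endpoint:", endpoint)
```

That consistent interface, independent of the underlying provider, is the core of the idea.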

This is what Tapp is about. And the advantages of doing that are not trivial; they include:

1. Independence, consistency and compatibility

This is the obvious one. For instance, a user can click a button to deploy an application on Rackspace and another button to deploy a DNS manager and a load balancer. These two would provide an API that is directly integrated into the control panel and therefore consumable as-a-service. Now, the exact same thing can also be done on Amazon, Azure, Joyent or any other supported provider, obtaining the exact same result. Cloud providers suddenly become compatible.

2. Extra geographical reach

Let’s say you like Joyent but you want to deploy your application closer to a part of your user base that lives where Joyent doesn’t have a data center. But look, Amazon has one there and, despite you don’t like its pricing, you’re ready to make an exception to gain some latency advantages to serve your user base. If your application is using some of the Joyent proprietary tools, it would be extremely difficult to replicate it on Amazon. Instead, if you could deploy the whole toolkit using just some EC2 instances, then it all becomes possible.

3. Software-as-a-(single)-tenant

Multi-tenancy has long been considered a key tenet of cloud computing, but I've started to believe that as long as an end user can consume an application as-a-service, it hardly matters whether it's multi-tenant or single-tenant.

If you can deploy a database in a few clicks and get your connector as a result, does it really matter whether this database is also hosting other customers or not? Actually, single-tenancy would become the preferred option[1], as users would not have to worry about isolation from other customers, noisy neighbors and the like. Tony Lucas (@tonylucas) wrote about this before on the Flexiant blog and I think he's spot on: there is a "third way", and that's what I think is going mainstream.

The Tapp’s way

The Tapp technology platform was built to provide all of that: a large set of application-centric tools, features and functions[2] that can be deployed across multiple clouds and consumed as-a-service.

Of course it’s not just about tools. It’s also about the application core, whatever it is. The Tapp technology solves also that consistency problem by pushing the application deployment and configuration into some Chef recipes, as opposed to cloud provider-specific OS images or templates3. Every time you run those recipes you get the same result, in any cloud provider. In fact, to deploy your application you’ll just need the availability of vanilla OS images, like Ubuntu 14.04 or Windows 2012 R2 that, honestly, are offered by any cloud provider.

All those end users who want to deploy applications without feeling locked into a specific provider have so far had only one way of doing it: DIY ("do-it-yourself"). They would have to maintain and operate OS images, load balancers, DNS servers, monitors, auto-scalers, etc. That's a burden that, most of the time, they're not ready to take on. They don't want to spend time deploying all those services that end up being all the same, all the time. Tapp takes that burden away from them. It deploys applications and service toolkits in an automated fashion and provides users just with the API to control them. And this API is consistent, independent of the chosen cloud provider. This is the key value that, I believe, will prevent developers from going straight to the source.

1. Multi-tenancy would be the preferred option for the service provider because it would translate into economies of scale. However, economies of scale often lead to cost optimization and end-user price reductions, so multi-tenancy can be considered an indirect advantage for end customers as well.

2. Tapp features include: application blueprinting with Chef, geo-DNS management and load balancing, network load balancing, auto-scaling based on application performance, application monitoring, object storage and FDN (file delivery network).

3. It's worth mentioning that pushing application deployment into configuration management tools like Chef or Puppet significantly affects deployment time. That's why it's strongly advised to find the optimal balance between what is built into the OS image and what is left to the configuration management tool.

Why I picked Flexiant as my next challenge

Dear all, I am really happy and proud to announce that I am joining the Flexiant team starting this week. In the last few years, Flexiant has been building a stunning Cloud Management Platform with the goal of enabling service providers to join the cloud space in a few easy steps, while still being able to highly differentiate their service.

The cloud infrastructure market landscape is anything but in its final configuration, and I have the ambition to actively contribute to how it will look in the next few years. I am joining Flexiant at a moment when the cloud industry is experiencing tremendous growth, with just a handful of players out there, still-immature technologies, vendors struggling to adapt their business models and a general misperception around cloud services. There is plenty of work to do!

But let me give you a little more insight into why I picked Flexiant and what great things I think we can do together.

A differentiated cloud service

I've enjoyed observing the recent signs that differentiation is required in the cloud infrastructure market. After a large consensus around certain technologies, with such a big (and growing) market to conquer, competition is getting tougher, as more players try to come onboard every day. Although price initially appears as the main competition driver, considering the impressive cloud services portfolio of Amazon Web Services, highly differentiated service offerings will be required for those who seriously aim to compete against the giant.

Why would I want to compete with that giant? Can it be enough for me to offer some complementary service in order to exploit the market reach of Amazon, instead of going against it? Well, we all know the consequences of a market dominated by a single player; we've seen it before (Microsoft, Oracle, etc.), and we can all concur that during those times innovation was slower than ever, with abuses of dominant positions that negatively affected the customer experience. The opportunity out there is big and I don't think we want to leave the entire market to one player again, do we? And if the goal of the cloud is to commoditize technology by offering it as-a-service, it's right there, on the service side, that there is a need and an opportunity to innovate.

Recently we had concrete proof of this need for differentiation. The acquisition of Enstratius by Dell was driven by the need for a highly differentiated cloud service that fills the gap between commodity infrastructures and enterprise requirements. I was lucky enough to have the opportunity to work with the Enstratius team and I can tell you they were winning deals whenever it was about governance and compliance, all typical enterprise requirements. But the real news there was Dell dropping its previously announced OpenStack-powered cloud service, which will now never come to life. All those players betting on OpenStack wanted to make it the industry standard for building cloud infrastructure, and now what? They suddenly remembered they have to compete with each other. And the imperative is: differentiate!

On this matter, our own Tony Lucas (@tonylucas), a European pioneer of cloud services and SVP of product at Flexiant (if you don't believe me, check out this video of Tony talking about cloud with Jeff Barr of AWS back in 2007), has written an extensive white paper where he methodically goes through why cloud federation is not the optimal model for competing in the IaaS market, with differentiation as the winning alternative. Besides suggesting that everyone in this industry read it carefully, it reminded me of the biggest failure of cloud federation we have recently witnessed: vCloud providers. The launch of VMware's hybrid cloud service is a clear demonstration that federating providers with the same technology but different cultures, goals and SLAs does not work. It can be a short-term opportunity for the "federatable" cloud software vendor, but a sure failure in the mid-to-long term. Read Tony's paper to understand exactly why.

A matching vision

For those who know me, I am a public-cloud-only believer. "Private cloud" was just a name coined by legacy vendors who didn't want to give up on their on-premises business while still having the opportunity to exploit the marketing hype and sell extra stuff to their rich customers. "Hybrid cloud" is how we are naming the period it takes to complete the journey to the public cloud.

Again, the most recent moves of the big guys confirm that public cloud is the way to go. Legacy software vendors are trying to convert themselves into service providers, mostly by acquiring companies rather than innovating from the inside (e.g. yesterday's news of IBM's multi-billion-dollar acquisition of SoftLayer). So should we foresee a public cloud market dominated by AWS and challenged only by a few other big whales? I don't think so. While AWS really "gets" the cloud, the internal cultural conversion needed within traditional vendors will be painful and won't bring anything substantial for at least the next three to five years. Their current size and their internal resistance to giving up the recurring revenue derived from on-premises business will not let them be a real challenge to AWS in the near term. Instead, small, agile, highly innovative and differentiated niche players are the ones that will eventually contribute to defining the next cloud infrastructure market landscape.

For more scientific evidence of why public clouds will take over the world, I can suggest another brilliant read by Alex Bligh (@alexbligh), the Internet rock star who was behind Nominet (the UK domain registry) and is currently CTO at Flexiant. His detailed, methodical analysis led him to a conclusion:

And [so] will be for cloud computing: it’s not the technology that matters per se, it’s the consequent effect on economics. Private cloud is in essence an attempt to use cloud’s technology without gaining any of the efficiencies. It is for service providers to educate their customers and prospects, and the audience will often be financial or strategic as opposed to technical.

Alex Bligh, CTO at Flexiant

An enthusiastic choice

Visionaries like Tony and Alex, a mature product like Flexiant Cloud Orchestrator, and the guidance and business savviness of our CEO George Knox (@GeorgeKnox) are all ingredients that will eventually lead to making a real difference in the coming months. Finding myself aligned with the company's vision and culture, I am really enthusiastic to be on board and I foresee big things ahead of us. Stay tuned and ping me if you want to know more about Flexiant!

ABOUT FLEXIANT

Flexiant is a leading international provider of cloud orchestration software for on-demand, fully automated provisioning of cloud services. Headquartered in Europe, Flexiant's cloud management software gives cloud service providers the business agility, freedom and flexibility to scale, deploy and configure cloud servers, simply and cost-effectively. Vendor agnostic and supporting multiple hypervisors, Flexiant Cloud Orchestrator is a cloud management software suite that is service provider ready, enabling cloud service provisioning through to granular metering, billing and reseller white-label capabilities. Used by over one hundred organizations worldwide, from hosting providers to large MSPs and telcos, Flexiant Cloud Orchestrator is simple to understand, simple to deploy and simple to use. Flexiant was named a 'Gartner Cool Vendor' in Cloud Management, received the Info-Tech Research Group Trendsetter Award and was called an industry double threat by 451 Group. Flexiant customers include ALVEA Services, FP7 Consortium, IS Group, ITEX, and NetGroup. Visit www.flexiant.com.