With 2019 coming to an end, it has become clear that Hybrid Cloud and Multi-Cloud have become the buzzwords of the year.
It wasn’t that long ago that we were talking about getting everyone onto the cloud, and now we are talking about bringing the cloud to their datacenters. The cloud was supposed to promise endless scaling, new technology, and innovation. Are we going backwards, or is there something more happening?
Let’s start by talking a bit about the “state” of the cloud today. An IDG report from 2018 stated that 73% of all enterprises have at least one application in the cloud. An additional 17% plan to move part of their workloads into the cloud.
Now in the same report, 44% of organizations are multi-cloud. For the purposes of level-setting, multi-cloud means having workloads on more than one public cloud provider. This kind of makes sense. Many organizations start their migration story on one vendor. As they move more of their workloads to the cloud, some continue working with vendor A, but others decide to move a workload to vendor B or C.
Now on the flip side, we have this thing called “Hybrid Cloud”. To level-set again, “Hybrid Cloud” means running workloads both in the public cloud and on-premises (private cloud). A RightScale study found that roughly 72% of enterprises use a private cloud.
This provided a unique opportunity for integrators. Many Infrastructure-as-Code providers stepped in to help organizations both manage their multiple environments and connect them. While this did fill a gap, it did not address the underlying problem: opinionated vendor lock-in.
Every vendor had its own opinionated way to execute on common standards. Something as common as a VM operates differently depending on whether we are talking about VMware, OpenShift, OpenStack, AWS, Azure, GCP, etc. This is where the problem with Multi/Hybrid Cloud begins to take hold.
Organizations usually have to hire and train staff who understand each specific vendor platform. Sure, you can find tooling to simplify it, but that same tooling usually needs a separate configuration for each opinionated platform.
Now there have been several attempts to create a standardized format over the years. Containers were supposed to help, and while the technology has taken off, we have a variety of options such as layers, buildpacks, containers, etc. Kubernetes essentially won the orchestration wars, but just like Linux, every vendor has its own “distro” of Kubernetes.
So where are we today? The strategy that the major vendors have attempted is to bring their opinionated platform to the consumer. This is why Hybrid/Multi Cloud has become a buzzword. Rather than offering tooling and/or opinions to connect clouds, they will just bring their cloud to you with some additional tooling.
Right now the major players are AWS Outposts, Azure Arc, Google Cloud Anthos, and VMware Tanzu and Cloud Foundation.
Each has its own opinion on how to bring its public cloud to the datacenter and to other clouds. We are still seeing opinionated computing, but vendors are now able to bundle the offering and deploy it anywhere.
From a strategy perspective, it makes perfect sense. After all, cloud computing is a consumption-based business. You need people on your platform using resources in order to realize revenue. If there are workloads on other clouds, private or public, that is a lost opportunity.
The obvious concern is that we go backwards in terms of “openness”. Innovative people worked to find ways to fill the gap between cloud vendors and their proprietary platforms. This brought us tools like Kubernetes, Chef, and Jenkins, just to name a few (really the tip of the iceberg). Could this move cause us to regress? Is having a black box deployed across platforms the way to go?
I think it could be, if done right. It is said that Kubernetes is the “Linux of the Cloud”. In Linux land, we saw many different distros. You could get a fully supported enterprise version from Red Hat, SUSE, or Ubuntu LTS; run a popular OSS version like CentOS or Debian; or go deep into the weeds with Gentoo or anything you can find on DistroWatch. The benefit is that, at the end of the day, Linux is Linux, and the platform was the same experience regardless of the path.
Kubernetes could easily allow this as well. Enterprises and hobbyists alike could create their own “Kubernetes distribution,” and applications living on them would deploy the same regardless. Yes, you would get an opinionated configuration, but the actual experience would largely be the same.
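As a rough sketch of what that portability looks like in practice: a plain Deployment manifest like the one below (the app name and image here are hypothetical placeholders) uses only the core Kubernetes API, so it should apply unchanged whether the cluster is OpenShift, GKE, EKS, or a homegrown distro.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 3               # run three identical pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.17 # any OCI image works the same way on any distro
          ports:
            - containerPort: 80
```

Running `kubectl apply -f deployment.yaml` behaves the same against any conformant cluster; a distro’s opinions tend to show up in defaults, security policies, and add-ons rather than in the core API.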
Some vendors are choosing this path, but others are just extending their current black box to other clouds. Which is the better approach? Well, as a fan of OSS, I would prefer Kubernetes, but I guess the market will speak.
As we go into 2020, I am excited to see what will happen in the world of Multi/Hybrid Cloud. It seems like 2019 was the “announcement year,” and 2020 and beyond will be the “practice years”. Stay tuned for all the announcements, and let me know your experience with your platform.