What Is Enterprise Serverless?

Jason Smith

May 24, 2024


What is Enterprise Serverless?! It sounds like a marketing term. I often joke that “Enterprise” is like “AI” in the sense that people just tack those words onto their tech projects to make them sound more official or important. That’s not to say that Enterprise and AI are nonsensical topics; we just have a habit of overusing the terms and applying them where they aren’t necessary.

But I personally want to call out the concept of Enterprise Serverless. The reason is that serverless has developed a reputation, and a misconception, that it is only good for simple projects or “glue” between services and can’t be used for enterprise applications.

I wholeheartedly disagree with this idea, and I am going to use this blog post to explain why. I will also provide real-world examples of enterprise-scale use of serverless technologies.

Quick Reminder of Serverless

I know I do this a lot, but I find it important to restate what serverless technology is. When I say “serverless,” some people may immediately think I am referring to Functions-as-a-Service, or FaaS.

Serverless is more than FaaS. Serverless is a computing concept where infrastructure is abstracted away from developers, allowing them to focus on code, not deployment or maintenance. Serverless doesn’t mean the code is just floating in some random ethereal form in a warehouse somewhere. The code and data are still on servers; it’s just that the server infrastructure is abstracted away.
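To make that concrete, here is a minimal sketch of what the FaaS flavor of serverless looks like from the developer’s side, written in Go in the style of Google’s Functions Framework (the import path and registration call are from my recollection of that library, so treat them as illustrative rather than authoritative). The point is that the developer writes a handler and nothing else; provisioning, scaling, and patching are the platform’s problem.

    package function

    import (
        "fmt"
        "net/http"

        // Functions Framework for Go, which registers the handler with the FaaS runtime.
        "github.com/GoogleCloudPlatform/functions-framework-go/functions"
    )

    func init() {
        // Register an HTTP-triggered function named "Hello".
        // Note what is missing: no server setup, no Dockerfile, no cluster configuration.
        functions.HTTP("Hello", hello)
    }

    func hello(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello from a serverless function!")
    }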

So when we look at Enterprise Serverless, we will be looking at more than just FaaS. Sure, FaaS will be a part of it, but we will look at all forms of serverless.

Enterprise Serverless is the idea of building applications, services, and platforms using serverless technologies. These applications need to be able to reliably scale to ensure a good experience for your customers. You also want it to be reasonably simple for your developers to utilize.

Google, Possibly the Largest Enterprise Serverless Company.

Now, a reminder: I am a Google employee, so this may come across as biased. That said, I will be as factual as possible.

In a previous post I talk a bit about the history of serverless and mention Google’s use of Borg. To recap, Borg is Google’s internal container orchestration platform, not unlike Kubernetes. In fact, Kubernetes was heavily inspired by Borg. Borg was created for a few reasons, and a very important one was to simplify the developer experience. Developers shouldn’t have to worry about infrastructure. They should be able to just deploy the binary and have it work. Borg helped make this happen. Almost everything at Google is a container deployed on Borg. Search? Borg. Gmail? Borg. Google Docs? Borg. YouTube? Borg.

With Alphabet now a $2T company, it is fair to say that Google is the largest company using serverless technology at an enterprise scale.

It’s Not All About Google Though

Google is A company that uses serverless at enterprise scale, but it’s not the only one. Netflix has a platform called “Cosmos” that leverages a microservices architecture built around asynchronous workflows and functions.

Coca-Cola is going serverless on AWS.

T-Mobile joined the Serverless world not too long ago.

L’Oreal has leveraged serverless compute in their ETL pipeline.

This isn’t to say that these companies have no legacy systems or never manage infrastructure. It just means they have moved major workloads to serverless platforms. According to Datadog’s “State of Serverless” report, 70% of AWS customers, 60% of Google Cloud customers, and 49% of Azure customers are using serverless technologies at some level. It’s safe to say that the majority of companies are serverless in some way.

Serverless Containers at Scale, the Next Evolution.

I still fully believe that containers are the best compute unit today. My main reason is that they allow you to bundle your runtime together with your binary. Functions currently cannot do that. Containers give you more flexibility and portability for workloads than VMs or functions.
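To illustrate the contrast with the function above, here is a minimal sketch of the kind of service you would package into a container image for a serverless container platform. The only platform-facing assumption I am making is the common convention, used by Cloud Run and Knative among others, that the platform tells the container which port to listen on via the PORT environment variable. Everything else, including the Go runtime itself, ships inside the container image.

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Serverless container platforms such as Cloud Run and Knative
        // inject the listening port via the PORT environment variable.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080" // sensible default for local runs
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Hello from a serverless container!")
        })

        // The binary and its runtime are bundled into the container image,
        // so the same image runs unchanged on any platform that can run containers.
        log.Printf("listening on :%s", port)
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }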

Now, that could very much change in the future. I recently wrote about how WASM is a potential competitor in my “The Cloud is Serverless” Newsletter. While FaaS is still widely used, I think containers will need to be the next evolution of serverless.

Two major OSS contributions have helped accelerate serverless containers: Knative and KEDA, created by Google and Microsoft respectively and now in the hands of the CNCF. Currently, Knative APIs are used in Google Cloud Run and, as far as I can tell, Azure Container Apps uses KEDA.

Outside of their creators, many companies rely on these technologies. Knative maintains a list of adopters, and KEDA does as well.

There are, of course, other OSS projects, such as OpenWhisk, Fission, and OpenFaaS, that are contributing to serverless containers in their own ways. I fully expect a few more to emerge, which shows the level of interest in this kind of technology.

These platforms make it easy to scale containers in an automated manner so that developers don’t have to worry about infrastructure. If you are running your own Kubernetes clusters, you are still responsible for maintaining them, but you can divide duties more clearly: developers are empowered to deploy applications on the platform, and platform engineers ensure that everything is running as expected behind the scenes.
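One practical consequence of handing scaling to the platform is that instances of your container are started and stopped automatically, sometimes all the way down to zero. Here is a hedged sketch of what that means for the application code: handle the termination signal the platform typically sends (SIGTERM) and drain in-flight requests before exiting.

    package main

    import (
        "context"
        "fmt"
        "log"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        })
        srv := &http.Server{Addr: ":" + port, Handler: mux}

        // When the autoscaler removes this instance, it typically sends SIGTERM first.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, os.Interrupt)
        defer stop()

        go func() {
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatal(err)
            }
        }()

        <-ctx.Done() // wait for the termination signal

        // Give in-flight requests a short window to finish before exiting.
        shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := srv.Shutdown(shutdownCtx); err != nil {
            log.Printf("shutdown: %v", err)
        }
    }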

As more enterprises adopt these technologies, we will see serverless grow exponentially.

So What Is Enterprise Serverless?

I have talked a lot, so let’s get to the meat here. Enterprise Serverless is the ability to build applications on serverless compute runtimes (as well as other serverless services) that run production workloads. That’s a very basic definition, but it works. A business should feel comfortable using serverless compute to run large-scale production workloads. It should go beyond MVPs, experiments, and tests.

THAT is what Enterprise Serverless is. There is a misconception that serverless simply cannot scale the way VMs or even Kubernetes can. I reject that notion; serverless platforms can scale that way. It just depends on what you use to build the platform. This is why I think containers are a great fit: by bundling your runtime with your binary, you have more flexibility in how you deploy your applications. Kubernetes with an abstraction layer on top of it (such as Knative or KEDA) can do just that.

FaaS had a great run as the primary serverless compute runtime, but its opinionated runtimes limit what developers can do and can limit what the platform can do.

I think it’s time to look past the idea of serverless being limited to MVPs and experiments and start using it for large-scale enterprise applications.

Cover Image Credit to Philipp Birmes on Pexels