A Brief History Of Serverless

Jason Smith

March 20, 2024

Cover Image Credit to Bich Tran on Pexels

Serverless computing is becoming increasingly popular as organizations continue to look for ways to reduce costs, increase developer productivity, and improve time-to-market. But what is serverless computing, and how did it come to be? Today I will be exploring a condensed history of serverless computing.

Defining Serverless Computing

Before we talk about the history of serverless computing, let’s first define it.

Wikipedia defines serverless computing as

“a cloud computing execution model in which the cloud provider allocates machine resources on demand, taking care of the servers on behalf of their customers. “Serverless” is a misnomer in the sense that servers are still used by cloud service providers to execute code for developers. However, developers of serverless applications are not concerned with capacity planning, configuration, management, maintenance, fault tolerance, or scaling of containers, VMs, or physical servers.”

I agree with the general definition, but here is my one-sentence definition:

Serverless computing is a way to run code without worrying about infrastructure.

Your developers simply write code and rely on a third-party provider to take care of all the infrastructure details. The third-party often provides an abstraction layer that obfuscates the underlying deployment mechanisms.

It is important to define “serverless” because we don’t want to simply talk about a suite of products but rather how this idea of server obfuscation came to be.

Serverless Computing is Older than You Think

Serverless computing is not a new concept. In fact, its history (and really cloud computing’s history) can be traced back to the 1960s.

Computing was VERY different back then. Home computers wouldn’t come around for another decade, and it’d be even longer before the idea became mainstream. In 1963, the Defense Advanced Research Projects Agency (DARPA) funded MIT’s Project MAC, which developed one of the earliest time-sharing systems.

Time-sharing was a system that allowed many users to share the resources of a central computer (a mainframe). You would use a terminal to access the system and submit workloads to it. A strong argument can be made that this is the first example of serverless computing based on our aforementioned definition. As an end user, you didn’t own the servers and therefore weren’t responsible for their management; you simply used them.

While computing certainly pre-dates the 1960s, one could argue that time-sharing was among the first examples of developers simply using compute resources without worrying about their management.

Internet and the Cloud

You may wonder why I am talking about the Cloud while going over the history of serverless computing. Realistically, you can’t separate the two. I fully believe that the end state of cloud computing will be serverless computing in some form. It only makes sense to include Cloud Computing in this discussion.

Cloud Computing was first mentioned around 1994 by General Magic. Andy Hertzfeld, a co-founder of General Magic, was speaking about Telescript when he said the following:

“The beauty of Telescript is that now, instead of just having a device to program, we now have the entire Cloud out there, where a single program can go and travel to many different sources of information and create sort of a virtual service.”

At this point in time, the internet was still relatively new and managed web hosting was still a young industry, so it would be a while before we’d see Cloud Computing.

The Cloud as we know it didn’t really start to take form until the 2000s with Amazon. The first iteration, called Amazon.com Web Services, launched in 2002. The main idea was to create an “eCommerce-as-a-service” platform that allowed developers to bring Amazon’s e-commerce tools to their own websites.

Amazon was a bit surprised at how well developers received this service. Concurrently, Amazon was looking for ways to improve developer productivity in its own datacenters, an effort that fundamentally changed how its developers and operators worked. In 2003, Andy Jassy took the lead in finding ways to improve these services.

In 2006, Amazon launched EC2 and S3, which became the foundation of the first major cloud platform, AWS. Amazon decided to essentially provide its users with storage and virtual machines to operate. It had excess servers in its datacenters and saw this as an opportunity to make some extra money.

This was the start of Infrastructure-as-a-Service, better known as IaaS. In this industry sector, a cloud provider effectively leases out server time to customers. In the earliest days, it was limited to virtual machines and storage.

Platform-as-a-Service (PaaS) aka the Serverless Beta Release

AWS made the decision to focus on IaaS. Given that moment in history, it made perfect sense. VMware had essentially revolutionized how we used and thought about virtual machines, and they were becoming more mainstream in datacenters.

However, other companies had other thoughts. Fun fact: did you know that Canon could have been a major cloud player? Its subsidiary Fotango created Zimki, which is often cited as the first Platform-as-a-Service, or PaaS, platform.

Now what is PaaS? To put it simply, it tried to help developers focus more on building apps and less on maintaining servers. After all, at the end of the day, managing a VM is essentially just managing a server and everything that comes with that (minus actual hardware management). PaaS allowed developers to focus on shipping applications without worrying about the underlying infrastructure.

Now, Zimki eventually collapsed; however, others, such as Heroku, entered the space. There were other competitors out there, but let’s home in on two in particular.

In 2008, Google launched App Engine. This product predates the formal existence of Google Cloud and can be considered Google Cloud’s first offering.

Google had a very different philosophy about how developers would want to use the cloud. Google believed that developers didn’t care about infrastructure and just wanted a place to host their code and have it scale reliably. This idea didn’t just come from the aether; it reflected what Google was already doing with its own engineers.

Internally, Google used a platform called Borg, which is still in use to this day and later served as the basis for Kubernetes. Borg is a container-based platform whose goal was to allow developers to focus on code, not infrastructure; Google has an entire infrastructure team to manage the datacenters. Borg came out circa 2004, predating the advent of modern OCI containers by about a decade.

Interestingly enough, modern containers were created by another PaaS provider of the time, dotCloud. dotCloud doesn’t exist anymore. Well, it doesn’t exist under that name. You may know it as Docker.

Docker introduced its new container technology at PyCon 2013. At the time, Docker containers were just a wrapper around Linux Containers (LXC), but they fundamentally changed the landscape of computing (more on this later).

Anyhoo, PaaS can be seen as a precursor to modern serverless: it focused on code, not infrastructure. PaaS platforms often utilized a concept called “workers”, typically virtual machines or something similar that would execute the code. You would usually assign individual workers a certain size based on memory and CPU.

It did have shortcomings compared to our contemporary understanding of serverless. Serverless, by definition, should be pay-per-use. You shouldn’t be required to pay for idle workers, which was a common issue with PaaS platforms.

Serverless 1.0, Enter Functions

As mentioned before, one of the issues with PaaS was that you paid for idle workers. PaaS also struggled to be truly event-driven. With some tweaking and tuning you could get that, but it wasn’t always simple.

Storytime! In March 2015, my buddy Mark and I were at South by Southwest (SXSW). We both lived in Austin, so every March we would head downtown to find free events to check out. We saw that Amazon was holding an event where they were going to make a special announcement, and Amazon’s CTO, Werner Vogels, was said to be present. You also didn’t need a badge to get in, so we both RSVP’d to see what was up.

That day, we both first learned about Lambda. This was the world’s first public Functions-as-a-Service platform, better known as FaaS. They told us that this was the next evolution in Cloud Computing. With Lambda, you could now host snippets of code on AWS. There were no more idle workers, and you could auto-scale with minimal additional configuration. These snippets were also event-driven by nature. This was a fully serverless platform.
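To make the FaaS model concrete, here is a minimal sketch of what such a function looks like. It follows the Lambda-style Python handler convention (a `handler(event, context)` entry point); the object-upload trigger and the event fields shown are illustrative assumptions, since the event shape depends on whatever source you wire up.

```python
import json

# A minimal FaaS-style function. The platform invokes handler() in
# response to an event; you never provision a server or a worker, and
# you pay nothing while it sits idle.
def handler(event, context):
    # 'event' carries the trigger's payload. Here we assume an
    # S3-style object-upload notification (illustrative).
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    # Whatever you return is handed back to the invoker.
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```

Deploying it amounts to uploading the snippet and declaring its trigger; the platform handles scaling from zero up to however many concurrent events arrive.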

The FaaS model gained a lot of popularity, which resulted in many competitors. There were OSS providers like OpenFaaS and Fission. There were, of course, commercial versions too, like Azure Functions and Google Cloud Functions.

Despite this major leap in cloud computing, there was a problem with FaaS: FaaS runtimes tend to be very opinionated. What do I mean by that? Well, when you deploy an application to a FaaS platform, you are pushing code snippets that are then executed on the runtime framework of the FaaS platform. Let’s say the platform only supports a specific version of Python or Node.js, or that it won’t support specific libraries.

As a developer, you have three options.

  1. Rewrite your code so that it can execute properly in the given FaaS runtime.
  2. Wait for the FaaS platform to support your language or library.
  3. Use a different platform (another FaaS, PaaS, IaaS, etc.)

Also, many people began to see FaaS as “glue” to connect applications and services, but not as the primary platform. You would write your actual application to run on VMs or Kubernetes, but you would connect different cloud resources together with a cloud function. For example, a function might transform the data coming off a Kafka topic before sending it on to another service, as sketched below.
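As a rough sketch of that glue pattern, here is what such a transformer might look like, assuming a Lambda-style handler wired to a Kafka event source. The event shape below (records grouped by topic-partition, with base64-encoded values) follows the Kafka event source convention on AWS, and `send_to_service` is a hypothetical stand-in for the downstream call.

```python
import base64
import json

def send_to_service(payload):
    # Hypothetical downstream call (an HTTP POST, a queue write, etc.).
    # Replace with your real integration.
    print("forwarding:", payload)

# Glue function: fires on new Kafka records, reshapes them, and
# forwards the result to another service.
def handler(event, context):
    for topic_partition, records in event.get("records", {}).items():
        for record in records:
            # Record values arrive base64-encoded on this event source.
            message = json.loads(base64.b64decode(record["value"]))

            # The actual "transformation" step; fields are illustrative.
            transformed = {
                "id": message.get("id"),
                "amount_cents": int(float(message.get("amount", 0)) * 100),
                "source_topic": record.get("topic", topic_partition),
            }
            send_to_service(transformed)
```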

Time for Serverless 2.0

While FaaS definitely has its use cases, for serverless to truly evolve, we needed to address the “runtime issue”.

In 2018, Google announced an OSS project called Knative. Knative was meant to run on top of Kubernetes and streamline the deployment of applications on the platform.

It had two main components:

  • Knative Serving, which would simplify the deployment of an application to Kubernetes. It would handle revisions, scaling, and routing automatically, with little configuration required from the developer.
  • Knative Eventing, which would allow developers to dynamically bind event sources to event sinks.

There was also Knative Build, but long story short, it spun out and became what we now know as Tekton.

Knative was created to help operators bring a serverless platform to Kubernetes. While you still had to manage a Kubernetes cluster and everything that came with it, it made it easier for developers to deploy applications on Kubernetes without becoming Kubernetes experts.

Eventually Knative became a part of the CNCF.

In 2019, Google announced Cloud Run. This was, in essence, managed Knative. Now, Cloud Run doesn’t run on Kubernetes, but it is Knative Serving API compliant. This means you can take a standard Knative YAML manifest and use it to deploy your containers to Cloud Run with no issue.
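To make “Knative Serving API compliant” concrete, here is the shape of a minimal Knative Service manifest, built as a Python dict and dumped to YAML for illustration (the service name and image are placeholders, and PyYAML is assumed for the dump):

```python
import yaml  # PyYAML, assumed installed

# The minimal shape of a Knative Service manifest. Because Cloud Run
# implements the Knative Serving API, this same structure deploys to a
# Knative-enabled Kubernetes cluster or to Cloud Run.
service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},  # placeholder name
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        # Placeholder image; any container listening on $PORT works.
                        "image": "gcr.io/my-project/hello:latest",
                        "env": [{"name": "TARGET", "value": "world"}],
                    }
                ]
            }
        }
    },
}

print(yaml.safe_dump(service, sort_keys=False))
```

On a Knative cluster you would `kubectl apply` the resulting YAML; Cloud Run accepts the same manifest (for example, via `gcloud run services replace`).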

This revolutionized serverless: you could now bring your own runtime. Containers made this possible. Remember, with containers, you are virtualizing an operating system. You are literally bundling your runtime with your code in the container image.

This model was so successful that other providers converged on it with offerings such as AWS Fargate and Azure Container Instances.

I would define Serverless 2.0 as a container-based platform with no infrastructure management, automatic scaling, and event-driven execution. By using containers as the basic compute unit of the cloud, we were able to simplify application development. Serverless containers went a step further and simplified the deployment of these containerized applications. I would argue that this is still just the beginning.

For example, with Buildpacks, developers don’t even need to write a Dockerfile anymore. When integrated into a GitOps pipeline, serverless development becomes a breeze.

There has also been a lot of expansion in building serverless platforms on top of Kubernetes. I expect that as time progresses, we will see even more automation around serverless containers that will simplify deployment.

We are still very much in the early days of this paradigm shift. I would be lying to you if I told you that every enterprise application today is serverless. We still have a ways to go, but given the shift toward containers, I anticipate that we will see an acceleration in the adoption of serverless containers in the coming years.

What does Serverless 3.0 look like?

Now that we have caught up with Serverless 2.0, what does the future hold for serverless? What is Serverless 3.0?

I have to admit, I am not a fortune teller. I cannot tell you where serverless will go. It is very possible that serverless containers are simply a stepping stone to some other form of serverless platform.

What I can tell you is that as serverless containers become more popular and replace FaaS as a primary platform, we will see more and more adoption of serverless. If you subscribe to my newsletter, The Cloud is Serverless, I will continue to keep you updated on the latest trends in serverless.