
The Serverless state of the art in 2024

· 8 minute read
Idriss Neumann

During the last decade, you have probably heard about serverless architectures or Function as a Service (or FaaS) many times. But you might also have heard the word "serverless" applied to other cloud services such as Database as a Service (or DBaaS) or Container as a Service (or CaaS).

What do those things have in common to be called "serverless"? At the beginning this word implied two conditions, which I'll recall at the start of this blogpost. Then I'll focus on FaaS and explain why I think it has evolved over the last couple of years.

The first condition is that you aren't supposed to know anything about the infrastructure that hosts the service you're using.

  • For a DBaaS, you just get an endpoint to connect your apps to and don't have to worry about cluster sizing, scaling, hardware capabilities...
  • For a CaaS, you just have to tell a simple API which container image and tag to deploy and don't have to worry about the clustering of your container orchestrators. The CaaS might be built on top of Kubernetes (or K8S) with knative, and the K8S API extended with knative's CRDs (Custom Resource Definitions) can be considered some sort of serverless API as long as you don't have to worry about the K8S cluster running behind it
  • For a FaaS, you just have to implement a function in a supported programming language and don't have to worry about how this function will be built as a microservice1, exposed as a webservice and triggered by multiple events2 (a minimal handler sketch follows this list)
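
To illustrate that last point, here is a minimal sketch of what a FaaS handler typically looks like, using the AWS Lambda Python signature as an example; the business logic is the only thing you write, everything else (packaging, exposure, triggering) is handled by the platform.

```python
# Minimal AWS Lambda-style handler: the platform builds, deploys,
# exposes and triggers it; you only write the business logic.
import json

def handler(event, context):
    # "event" carries the trigger payload (HTTP call, queue message, cron tick...)
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```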

The second condition is the "pay as you go" kind of billing on public cloud: you aren't supposed to pay for dedicated clusters but only for the network, compute3 and storage used during the runtime of your code or transactions.

For example, with a serverless database you should get billed only for the data you ingest or fetch and the queries you run, not for an entire running cluster. Same with a CaaS or a FaaS: you should only get billed for the runtime of your containers or for the compute and network used during a function call.
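
As a rough illustration of that billing model, here is a small back-of-the-envelope computation for a FaaS, using the common "per request + per GB-second" structure; the unit prices below are purely illustrative assumptions, not any provider's actual pricing.

```python
# Illustrative "pay as you go" FaaS bill, with made-up unit prices.
invocations = 2_000_000                  # calls per month
avg_duration_s = 0.3                     # average execution time per call
memory_gb = 0.5                          # memory allocated to the function

price_per_request = 0.20 / 1_000_000     # assumed price per request
price_per_gb_second = 0.0000167          # assumed price per GB-second

gb_seconds = invocations * avg_duration_s * memory_gb
monthly_cost = invocations * price_per_request + gb_seconds * price_per_gb_second
print(f"{gb_seconds:.0f} GB-s, estimated bill: ${monthly_cost:.2f}")
```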

Here are some more well-known examples of serverless offerings you might have heard about from the big cloud players:

  • AWS Lambda, the very well-known FaaS engine of Amazon, which in my opinion has kind of set the standard for the FaaS developer experience
  • GCP Cloud Run, a CaaS built on top of K8S and knative
  • GCP Cloud Functions, the FaaS engine of GCP, built on top of Cloud Run4
  • Azure Functions, the FaaS engine of Microsoft Azure

Moreover, the GCP approach of building everything on top of K8S with knative leads the way for other cloud providers to offer similar experiences. It's the case for Scaleway, which also provides a CaaS and a FaaS built on top of knative.
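
To make this knative-based CaaS idea more concrete, here is a minimal sketch of deploying a container as a knative Service through the K8S API, using the official Python kubernetes client; the image and namespace are placeholder values and the snippet assumes knative serving is already installed on the cluster.

```python
# Deploy a container as a knative Service: the knative CRD acts as the
# "serverless API", the underlying K8S cluster stays invisible.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},
    "spec": {
        "template": {
            "spec": {
                # placeholder image: any HTTP server listening on $PORT works
                "containers": [{"image": "ghcr.io/knative/helloworld-go:latest"}]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=service,
)
```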

That being said, I think the key feature of serverless, and especially of Function as a Service, isn't the "pay as you go" billing but rather the abstraction layer over the infrastructure, allowing developers to ship their code more quickly and focus only on the business logic. That's why there are also FaaS engines you can install on premises, such as OpenFaaS or our own cwcloud FaaS engine.

That's also something the industry has been looking for for decades, with tons of tools you might have encountered:

  • BPM (Business Process Management)
  • ETL (Extract Transform Load)
  • CI/CD (Continuous Integration / Continuous Deployment) pipelines orchestrators
  • Workflow engines such as Airflow, Temporal, Cadence, Apache NiFi...
  • API backend frameworks: Spring, Laravel, FastAPI... which lower the complexity of exposing your code as an API or microservices
  • Nocode / Low code
  • etc

Those tools are different and meet different needs for different populations of IT workers, for example:

  • developers who want to focus only on the business logic and not on how to expose this business logic as a service
  • data scientists who need ETL or data pipelines
  • electronics engineers and IoT makers who need to push notifications from their sensors, trigger some processing on their devices and enjoy doing it with a lowcode editor5
  • product owners technical enough to use BPM, nocode or lowcode to express their needs
  • system administrators who need to collect and transform logs for observability purposes or schedule some tasks
  • SREs (Site Reliability Engineers) who need to set up CI/CD pipelines

However, they do have something in common: all those tools generate functions (sometimes called "workflows", "jobs", "pipelines" or whatever) that require some compute capabilities and an orchestrator to trigger and launch them. Moreover, those tools are designed to hide as many technical aspects as possible and make IT workers focus only on the business aspects. Sounds like the promise of serverless, doesn't it?

Because most of those tools still bring their own compute orchestrator, maintenance can get very expensive. Lots of companies which recruit multiple kinds of IT workers for their different needs find themselves installing all those solutions in their infrastructures, which requires dozens of SREs to handle this heavy maintenance. I used to work with scale-ups asking me to install all the tools mentioned in this blogpost on K8S. That means installing dozens of job orchestrators on top of a job orchestrator (because K8S is also a job and pipeline orchestrator). This is ironic, isn't it?

ironic-meme
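
To back the claim that K8S is already a job orchestrator by itself, here is a minimal sketch of a native CronJob created through the official Python kubernetes client; the name, image and schedule are placeholders, and no extra engine is needed to run scheduled tasks on the cluster.

```python
# K8S as a job orchestrator: a native CronJob needs no extra engine.
from kubernetes import client, config

config.load_kube_config()

cronjob = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="nightly-cleanup"),
    spec=client.V1CronJobSpec(
        schedule="0 3 * * *",  # every night at 3am
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[
                            client.V1Container(
                                name="cleanup",
                                image="alpine:3.19",  # placeholder task image
                                command=["sh", "-c", "echo cleaning up"],
                            )
                        ],
                    )
                )
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_cron_job(namespace="default", body=cronjob)
```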

There are modern tools, mainly in the CI/CD area, which are designed to work on top of K8S in a gitops and serverless way. By that I mean re-using the K8S capabilities to orchestrate ephemeral tasks or even applications. It's the case of knative of course, but also Tekton or Argo Workflows, which are pretty similar tools allowing us to define serverless pipelines or workflows without having to install runners or a particular runtime, unlike most of the other CI/CD tools.
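
As an example of what such a serverless pipeline looks like, here is a minimal sketch of an Argo Workflows "hello world" submitted through the K8S API with the Python kubernetes client; the image and names are placeholders and the snippet assumes the Argo Workflows controller is already installed. Tekton follows the same CRD-based approach with its own resources.

```python
# A one-step Argo Workflow: an ephemeral pod is scheduled by the
# controller, runs to completion and frees its resources, no runner needed.
from kubernetes import client, config

config.load_kube_config()

workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "hello-"},
    "spec": {
        "entrypoint": "hello",
        "templates": [
            {
                "name": "hello",
                "container": {
                    "image": "alpine:3.19",  # placeholder step image
                    "command": ["echo", "hello from a serverless pipeline"],
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="argo",  # assumed namespace watched by the controller
    plural="workflows",
    body=workflow,
)
```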

However, most of the other kinds of tools I mentioned earlier require installing their own orchestrator engine and reserving a lot of resources in advance in order to be able to trigger their tasks, and that isn't serverless friendly. It's the case for Talend, Airflow, Cadence, gitlab or github runners, etc. We still have to work with those tools because they haven't been completely replaced by FaaS engines, even if we can notice that some cloud providers are trying to provide multiple services built on top of them6.

That's why, with CWCloud, we decided to implement a single FaaS engine which aims to bring several "dev XP" (developer experiences) for those different populations of IT workers and which is agnostic of the infrastructure running it7.

It's only the beginning but we already provide:

  • A code editor supporting the following programming languages: Python, Go, JavaScript and even Bash
  • A lowcode editor supporting Blockly, which is suitable for IoT makers, lowcode developers and product owners

faas-lowcode-editor

  • An API and a CLI to be able to templatize function creation

faas-cli

The created functions can then be exposed as:

  • HTTPS endpoints, like a RESTful API
  • Async workers which can be triggered by different kinds of events: schedulers, cron expressions, etc.

Finally, you can choose to invoke the function and wait for the result in the HTTP response in a blocking way (we discourage it but sometimes you've got no choice), or set async callbacks (see the sketch after this list). We support the following callbacks:

  • HTTP webhook
  • MQTT or WSS (websockets) queues which are very suitable for IoT makers as well
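
As a rough sketch of the async pattern, here is what calling a function and receiving the result through an HTTP webhook callback could look like from the client side; the invocation URL, payload shape and callback field names are purely hypothetical and don't reflect the actual cwcloud API, the point is only the flow: trigger the function, return immediately, receive the result later.

```python
# Hypothetical async invocation with an HTTP webhook callback.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/callback", methods=["POST"])
def callback():
    # The FaaS engine posts the function's result here once it has run.
    print("function result:", request.get_json())
    return "", 204

def invoke_async():
    # Hypothetical endpoint and payload: fire the function without blocking.
    try:
        requests.post(
            "https://faas.example.com/v1/functions/hello/invoke",  # placeholder URL
            json={
                "args": {"name": "world"},
                "callback_url": "https://my-app.example.com/callback",
            },
            timeout=5,
        )
    except requests.RequestException as exc:
        print("invocation failed:", exc)

if __name__ == "__main__":
    invoke_async()
    app.run(port=8080)  # wait for the webhook callback
```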

This video tutorial might give you an idea of the current dev XP8:

faas-tutorial-player

To conclude, I believe that all those tools are the very definition of the "framework" concept for all these IT worker populations, in the sense that they allow them to focus on their business logic. Frameworks have always allowed companies to produce more and faster, involving more people and reusing more resources, which also had the effect of increasing the quality of IT systems. That's why I strongly believe that FaaS is the new generation of modern frameworks.


  1. It can be an OCI image, a WASM binary...
  2. HTTP calls on a webhook, messages on queues with a message bus or broker system such as Kafka or NATS, cron/scheduler events, etc...
  3. RAM, CPU, etc...
  4. Yeah cloud services are often built on top of cloud services. For example a FaaS is often built on top of a CaaS which is built on top of an IaaS (Infrastructure as a Service)
  5. We can observe that lots of IoT companies which build their devices on top of chips like the ESP32 provide a lowcode editor based on Blockly, such as M5Stack which is very popular in China
  6. That's mainly the strategy of AWS, which re-uses Lambda for other services such as Glue ETL for data scientists, but also offers something for IoT makers who want to trigger jobs with MQTT events, and multiple other examples...
  7. It can run on a Raspberry Pi just as it can hyperscale on Kubernetes clusters using knative or keda or any other CaaS infrastructure. I plan to deep dive into the architecture of our FaaS, but that'll be for another blogpost ;-p
  8. It's in French but there are English subtitles :p