Idriss Neumann · 8 min read

Since the rise of Kubernetes (or K8S) and the OCI (Open Container Initiative), which standardizes containerization on Linux, we read more and more often that using docker1 as a runtime on production infrastructures is becoming a poor choice.

In this blogpost, I'll focus on answering the criticisms that come from people in the containerization world, who are mainly convinced that K8S is the only viable way to deploy to production. There are also criticisms from people who are opposed to the principle of containerization itself; I'll probably answer those another day.

In my previous blogpost Kubernetes or not, that's the question, I already detailed how K8S and its ecosystem lower the deployment complexity, taking care of many things by design (autoscaling, reconciliation loops, easier observability...) and being the most standard IaaS (Infrastructure as a Service) API specification available everywhere (on premises, on almost every cloud provider...). It sounds like the perfect fit to set up a real deployment platform that makes deployments easy and seamless everywhere. By adding some tools like teleport or knative, it can even become a real PaaS (Platform as a Service), and the SRE (Site Reliability Engineers) operating those clusters can be seen as Platform Engineers.

So I ain't trying to convince anybody to avoid going with K8S, especially when it's a matter of building a new modern platform at company scale or providing a multitenant service. I'm pretty convinced myself it's probably the best choice nowadays. That's also why we are providing a K8S version of our DaaS2 (Deployment as a Service) solution.

That being said, if we take a few steps back, we can see multiple advantages to using docker, and especially docker compose, in 2024.

On one hand, a lot of businesses, regardless of their size, already have applications running on virtual servers or compute engines. It might be a first step to containerize their applications and switch from a process orchestrator such as systemd or pm2 to an OCI runtime like docker or containerd. Once all apps are containerized, the lift and shift to other infrastructures such as K8S, but also CaaS (Container as a Service) offerings like ECS on AWS or Cloudrun on GCP, will be easier. It's basically a "Divide and Conquer" strategy. In my experience, telling those people from the beginning not to use a container runtime on their existing machines might discourage them from starting a migration despite the benefits.

On the other hand, the compose syntax can also be seen as another standard API specification, in my opinion, like the K8S one. It just doesn't handle as many things as K8S. However, it might be sufficient for a lot of customers and it's by far better known by most developers.
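
Just to illustrate what I mean by an API specification, here's a minimal sketch of a compose file describing a small stack (the service names, images and ports are placeholders, not a real application):

```yaml
# compose.yaml - minimal sketch of the compose specification used as a deployment API
# (service names, images and ports are placeholders)
services:
  api:
    image: registry.example.com/my-api:1.2.3
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://db:5432/app
    restart: unless-stopped
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Most developers can read and write this kind of file without ever having touched a K8S manifest.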

A few years ago, during a DevoxxFR event, I heard someone say:

Docker was designed by developers to let them deploy their apps to production; K8S is the answer of sysadmins trying to take back control of production

It was completely true. Now the new generation of sysadmins who want to keep control are called SRE. It's not quite the same mindset as Platform Engineers, who want to give control to the feature teams. So maybe the Platform Engineers should provide an API standard which is easier, and for me compose is a really good candidate.

Moreover this idea isn't new. That's why kompose has existed for several years, and now Docker, Inc. (the company) is working on an experimental compose bridge project3. Docker, Inc. has also been working for years to enrich the compose specification, taking care of a lot of production requirements such as healthchecks. So in my opinion this specification is far from being a local tool for developers only.
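
Just to give an idea of those production-oriented features, here's a minimal sketch of a healthcheck in a compose file (the image, endpoint and timings are only illustrative):

```yaml
# Sketch: a production-style healthcheck in the compose specification
# (image, endpoint and timings are illustrative placeholders)
services:
  api:
    image: registry.example.com/my-api:1.2.3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

A file like this can then be translated into K8S manifests with kompose (for instance `kompose convert -f compose.yaml`), and compose bridge aims at making that mapping customizable.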

Polyglotism in deployment APIs is clearly a success factor for a PaaS (built on top of K8S or not), in my opinion: the more deployment APIs it provides that people already know, the more it meets everyone's needs. In exactly the same way, the more programming languages and developer experiences a FaaS (Function as a Service) provides, the more it meets everyone's needs.

That being said, you might say:

Okay using the compose specification on K8S is fine. But you were talking about using the docker engine on virtual machines. And this still ain't bringing our expected PaaS, CaaS or FaaS platform, unlike K8S.

It sounds true, because using docker on virtual machines requires configuring and securing the virtual machine with advanced system administration knowledge: configuring the firewall rules (using iptables, ufw, firewalld, whatever), configuring a reverse proxy/load balancer in front of docker, configuring the system users and their privileges, enforcing the SSH connection policies... Of course the docker runtime can take care of everything about the resiliency of a single process, like systemd, but all the rest remains.

Indeed, it appears that if you want to stay "modern" (auditable, gitops, using some Infrastructure as Code, being able to roll back a change with a git revert, etc.), you'll have to use terraform/opentofu/pulumi/whatever to provision the infrastructure, you'll have to set up ansible4 to configure the virtual machines... and that's too much work compared to using a managed K8S cluster with helm charts and gitops tools like ArgoCD or FluxCD.

However this work can be optimized with a DaaS platform such as cwcloud, exactly the same way you are mutualizing your helm charts and using umbrella charts to install a tenant of your application and its dependencies. We're providing a tool where you can templatize your "environments" (or deployments) using a pretty easy GUI or CLI. Here's an example for a templatized Wordpress installation:

cwcloud-env-wordpress-1

cwcloud-env-wordpress-2

Once you've defined your set of ansible roles, the injected variables and the documentation template, it only takes one API or CLI call (or even a single click in the GUI) to instantiate a virtual machine and perform the complete installation, with a git repository containing all the ansible configuration which will trigger update pipelines in case of change (in a modern gitops approach).

From a developer's perspective, they just have to provide templates of their compose files inside an ansible role and re-use the other roles already developed and maintained by your platform engineering team. It starts to look like the way the platform engineers building their platform on top of K8S are working, right?
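
To give a rough idea, here's a hypothetical sketch of such an ansible role (the role name, paths and module choices are mine, not taken from cwcloud):

```yaml
# roles/my-app/tasks/main.yml - hypothetical sketch of an ansible role that
# renders a developer-provided compose template and (re)deploys it
- name: Render the compose file from the developer's template
  ansible.builtin.template:
    src: compose.yaml.j2
    dest: /opt/my-app/compose.yaml
    mode: "0644"

- name: Deploy (or update) the stack with docker compose
  ansible.builtin.command:
    cmd: docker compose -f /opt/my-app/compose.yaml up -d
```

In a real role you'd probably prefer the docker compose modules from the community.docker collection for better idempotency, but the idea stays the same.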

Okay that's really promising but it still ain't as seamless as a CaaS where the developer can also access the pods... like we're doing with teleport on top of K8S or a CaaS based on knative such as Cloudrun.

There's an underrated quick win to achieve this very easily: portainer. All it takes to get a modern platform with a nice GUI to manage all the containers on your virtual machines is a lightweight agent running on them.

portainer-containers.png

portainer-shell.png

That's why we're proposing it with cwcloud to some of our customers. You can watch this demo to understand how easily you can transform your infrastructure built on top of virtual machines and docker into a real CaaS platform using this combo5:

portainer_agent_demo

Portainer also works with K8S, which makes the lift and shift approach a lot easier.

To conclude, we like working with everyone and answering their needs, and we also like K8S very much (I already said it multiple times). Some of our customers are using K8S, some of them are perfectly fine with compute engines and a docker runtime. For example, we have customers with multitenant applications who want to bill their own users for their cloud usage. It's more convenient this way because each customer pays for its own compute instances instead of doing complicated FinOps with shared K8S clusters6. We also have customers who require a segregation of the data and network of their different tenants.

So yes, it's still fine in 2024 to work with docker in production, you just have to find a way to align with the state of the art and modern cloud and DevOps practices :)


  1. In this blogpost, I refer only to the docker engine, which is opensource, and not Docker Desktop, which isn't and manages many other things to help developers (Linux virtualization using QEMU to help handle microprocessor architecture interoperability...).
  2. You can check out this tutorial to understand how DaaS works with cwcloud and what the difference is between IaaS, PaaS and DaaS.
  3. This is pretty promising and, unlike kompose, you can develop your own mapping rules to convert your compose files into K8S manifests with the shape you want (it's kind of using helm to read the compose file as a values file, if you want my opinion, with many helpers that make it easier). It was presented by Guillaume Lours and Nicolas De Loof from Docker, Inc. at the last DevoxxFR 🇫🇷.
  4. I only mention ansible because I consider it won the battle over puppet, chef, salt... a long time ago for most of the remaining infrastructures based on virtual machines ;)
  5. Since this demo, which is two years old, our design and portainer's design have improved a lot, but it still gives an idea of how easy it is to get a real CaaS platform on top of our DaaS.
  6. Yes, we could use K8S with some tools like kubecost instead. However, it's easier for them to see their customers' names associated with the compute directly in the final cloud bills.