
The future is not Docker

Written by Kyle Galbraith
Published on 11 January 2024
It sounds strange to say, but after working on Depot for the past 18 months, I'm convinced that it's true. The future is not Docker, but containers are.

What would make me feel that way?

We started building Depot back in January 2022 to solve our own pain of living with slow Docker image builds in generic CI providers like GitHub Actions, Circle, etc. We were annoyed by having to save/load layer cache over slow networks, the lack of larger managed runners, and having to rely on slow emulation for multi-platform or Arm image builds.

We grew annoyed enough to go build the solution we always wanted, so we launched Depot in July 2022.

Depot is a remote container build service that can build Docker images up to 20x faster. Every builder in Depot comes with 16 CPUs, 32GiB of memory, fully managed persistent cache up to 500GB, and native CPUs for both Intel & Arm.

What we’ve unearthed is that not only is there an impedance mismatch between CI providers and a build tool like Docker, but there is also an incredible amount of complexity and inefficiency in Docker containers & BuildKit.

The future is not Docker, but containers are.

Flash back for a moment to the dotCloud days and remember that Docker, the technology, was spun out as a pivot focused on the potential of containers. It was built internally and went through many variations before becoming Docker, the company we know today.

It went through many highs and lows that are well documented across the web. Docker struggled to commercialize via the classic top-down approach and fought an increasingly losing battle against Kubernetes. That fight ended with Docker selling Swarm off to Mirantis and pivoting into what they now run around calling product-led growth (PLG).

It’s now rumored that Docker is turning north of $100M in ARR, to the amazement of the HN community.

So we must be out of our minds to think they’re vulnerable.

But if you look at how Docker is making money today, it becomes clear they have all their chips on one spot at the moment. That spot? Docker Desktop.

It isn’t any new product, service, or feature that Docker built that’s making them money. It’s their lawyers, and thus licensing, that are bringing in the money. Not PLG.

Still with me? Good.

Docker Desktop isn’t going to just magically disappear. But if you’re looking closely, you can see it’s not terribly complicated to replicate or replace. OrbStack, built by a single developer attending Stanford, is giving them a run for their money.

Desktop, at its core, is really an installer that handles the plumbing between your OS and a Linux VM where the Docker engine runs. It also bundles assorted tools they like to tout: Kubernetes, Compose, BuildKit, scanning, etc. Most of those tools can be used without needing to “live” in Desktop, but they’d like you to believe otherwise.

Docker wants to centralize everything into their Desktop product because that’s where they make money.

Docker is too complicated and riddled with inefficiencies

Like I said, the initial direction in building Depot wasn’t really about Docker; it was about CI providers not giving us the right tools to make image builds fast. At the time we faced these problems almost daily:

  • CI providers not offering larger runners. Most support them today, but at the time, we were stuck running our own runners to get more CPUs or memory.
  • No disks. Want to know what makes a Docker build really fast? A persistent disk. Instead of giving you real disks, CI providers ask you to save and load caches over networks. Networks are slow, unreliable, and often negate the performance benefits of caching.
  • Emulation is the modern-day equivalent of watching paint dry. Ever needed to build a container for both Intel and Arm in GitHub Actions? You’ve undoubtedly met QEMU emulation. It’s slow, mind-bogglingly slow; it can push even the most basic build past an hour. (One way around it is cross-compilation, sketched after this list.)
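
On the emulation point, a common way around QEMU is to cross-compile inside a multi-stage Dockerfile so every build step runs on the host's native CPU. A minimal sketch, assuming a Go app at the repo root (buildx populates TARGETOS and TARGETARCH automatically when they're declared as build args):

    # syntax=docker/dockerfile:1
    # Build on the host's native architecture; only target the other platform
    # at compile time, so no instruction runs under QEMU.
    FROM --platform=$BUILDPLATFORM golang:1.22 AS build
    # buildx fills these in for each platform in --platform=linux/amd64,linux/arm64
    ARG TARGETOS TARGETARCH
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

    FROM gcr.io/distroless/static
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]

This only works when your toolchain can cross-compile; for stacks that can't, you're back to emulation or native builders.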

But at that time we hadn’t really thought much about Docker the technology or company. It wasn’t until we went and solved the problem for ourselves that we unearthed mountains of complexity and inefficiencies.

The first version of Depot put BuildKit on cloud VMs with EBS volumes and orchestrated native builders for Intel & Arm. How hard or complex could that be? It turns out, freakishly complex.

Why? I feel pretty confident in saying that nobody ever actually thought about what it would mean to run BuildKit in a cloud environment as part of a PaaS. It’s a massive monolith of code with paths for all kinds of logic, some of which has never been used, and it just keeps growing.

BuildKit is doing far too much, and we believe there are far better ways to assemble containers today that don’t rely on all of that complexity.

But what about Docker?

After working with Docker, BuildKit, and containers for several years, I can’t help but feel like it’s all far too complicated and inefficient. Fundamentally, we’ve gone from representing our source code and its OS dependencies in an AMI to a Dockerfile. If you add a dependency to your package.json that needs an OS dependency, you have to update your Dockerfile.
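
To make that coupling concrete, here's a minimal sketch. Suppose your Node app adds the canvas package, which compiles against system graphics libraries (the exact Debian package names vary by version and are illustrative here). A one-line package.json change forces a matching Dockerfile change:

    FROM node:20-slim
    WORKDIR /app
    COPY package*.json ./
    # Without this RUN step, npm ci fails the moment "canvas" appears in
    # package.json, because its native addon needs OS-level headers to build.
    RUN apt-get update && apt-get install -y \
        build-essential libcairo2-dev libpango1.0-dev \
        && rm -rf /var/lib/apt/lists/*
    RUN npm ci
    COPY . .
    CMD ["node", "index.js"]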

It’s simple to create a Dockerfile that containerizes your app. It’s even easier to create one that has terrible build performance, is chock-full of CVEs, is north of 20GB in size, or trips whatever other footgun Docker leaves lying around.
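
To see how easy those footguns are to hit, here's an illustrative Dockerfile that builds without complaint while tripping several of them at once:

    # Full Debian base image: roughly a gigabyte before your app is added,
    # with a correspondingly wide CVE surface.
    FROM node:20
    WORKDIR /app
    # Copying the whole tree first busts the layer cache on any file change...
    COPY . .
    # ...so both steps below rerun on every build. The apt lists are also left
    # behind in the layer, so the image only ever grows.
    RUN apt-get update && apt-get install -y imagemagick
    RUN npm install
    # And the container runs as root by default.
    CMD ["node", "index.js"]

Nothing in the tooling stops you; every one of these mistakes builds and ships just fine.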

It’s all just too damn complex and inefficient.

We believe there must be a better way.
