What I'd really love is a middle ground between k8s and Docker Swarm that gives operators and developers what they need while still providing an escape hatch to k8s when required. k8s is immensely powerful but often feels like overkill for teams that just need simple orchestration, predictable deployments, and basic resiliency. On the other hand, Swarm is easy to use but doesn't offer the extensibility, ecosystem, or long-term viability that many organizations now expect. It feels like there's a missing layer in between: something lightweight enough to operate without a dedicated platform team, but structured enough to support best practices such as declarative config, GitOps workflows, and repeatable environments.
As I write this, I'm realizing that part of the issue is the increasing complexity of our services. Every team wants a clean, Unix-like architecture made up of small components that each do one job really well. Philosophically that sounds great, but in practice it leads to a huge amount of integration work. Each "small tool" comes with its own configuration, lifecycle, upgrade path, and operational concerns. When you stack enough of those together, the end result is a system that is actually more complex than the monoliths we moved away from. A simple deployment quickly becomes a tower of YAML, sidecars, controllers, and operators. So even when we're just trying to run a few services reliably, the cumulative complexity of the ecosystem pushes us toward heavyweight solutions like k8s, even if the problem doesn't truly require it.
> What I'd really love is a middle ground between k8s and Docker Swarm
Maybe this is what you mean: https://docs.podman.io/en/latest/markdown/podman-kube.1.html
> that gives operators and developers what they need while still providing an escape hatch to k8s when required.
Here you go, linked from the first page: https://docs.podman.io/en/latest/markdown/podman-kube-genera...
Podman also has an option to play your containers on CRI-O, which is a minimal but K8s-compliant runtime.
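To make the round trip concrete, here's a minimal sketch (the container name `web` and the nginx image are just placeholders): generate a K8s-style manifest from a running container, then tear it down and recreate it declaratively.

```shell
# Run a container the usual way ("web" is an example name)
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# Generate a Kubernetes-compatible YAML manifest from it
podman kube generate web > web.yaml

# Remove the imperative container and recreate it from the manifest
podman rm -f web
podman kube play web.yaml
```

The same `web.yaml` should, possibly with minor tweaks, also be applyable to a real cluster with `kubectl apply`, which is the escape hatch to k8s the parent comment is asking about.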
I manage my podman containers the way the article describes using NixOS. I have a tmpfs root that gets blown away on every reboot. Deploys happen automatically when I push a commit.
But I don't see this as a replacement for k8s as a platform for generic applications, more for deploying a specific set of containers to a fleet of servers with less overhead and complexity.
OP asked for something consistent that sits between K8s and Swarm. Ansible is just a mistake that people refuse to stop using.
So is Helm! Helm is just a mistake that people refuse to stop using.
Compared to Docker Swarm and/or k8s manifests (I guess even Helm, if you're not the one developing charts), Ansible is a complete mess. You're better off managing things with Puppet or Salt, as those give you an actual declarative mechanism (i.e. desired state, like K8s manifests).
We thought this, too, when choosing Salt over Ansible, but that was a complete disaster.
Ansible is definitely designed to operate at a lower abstraction level, but modules that behave like desired state declarations actually work very well. And creating your own modules turned out to be at least an order of magnitude easier than in Salt.
We do use Ansible to manage containers via podman-systemd, but slightly hampered by Ubuntu not shipping with podman 5. It's... fine?
Our mixed Windows, Linux VM and Linux bare metal deployment scenario is likely fairly niche, but Ansible is really the only tenable solution.
In my experience, it only works decently well when special care is taken while writing the playbooks.
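As a sketch of the podman-systemd approach mentioned above (host group, file names, and unit name are assumptions, not a drop-in playbook): Ansible doesn't need to manage containers directly, it just ships a quadlet file into place and reloads systemd.

```yaml
# Hypothetical playbook: deploy a quadlet and (re)start the generated unit.
- name: Deploy app via podman-systemd (quadlet)
  hosts: podman_hosts          # assumed inventory group
  become: true
  tasks:
    - name: Install quadlet unit file
      ansible.builtin.copy:
        src: files/myapp.container            # assumed local quadlet file
        dest: /etc/containers/systemd/myapp.container
        mode: "0644"
      notify: restart myapp

  handlers:
    - name: restart myapp
      ansible.builtin.systemd_service:
        name: myapp.service    # quadlet generates this unit from myapp.container
        state: restarted
        daemon_reload: true
```

This keeps Ansible at the "copy a text file" abstraction level, where its idempotent modules behave well, and leaves the actual desired-state reconciliation to systemd and podman.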
There are many ways to do that. Start with a simple repo and spin up a VM instance from the cloud provider of your choice. Then integrate the commands from this article into a cloud-init configuration. Hope you get the idea.
Very easily. At the end of the day, quadlets (from which systemd generates plain services) are just text files. You can use something like cloud-init to define all these quadlets and enable them in a single YAML file, giving you a completely unattended install. I do something similar to cloud-init using Flatcar Linux.
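A minimal cloud-init sketch of that idea (image, port, and unit name are placeholders): write a quadlet into /etc/containers/systemd/ on first boot and start the generated service.

```yaml
#cloud-config
write_files:
  - path: /etc/containers/systemd/web.container
    permissions: "0644"
    content: |
      [Unit]
      Description=Example web container (placeholder image)

      [Container]
      Image=docker.io/library/nginx:alpine
      PublishPort=8080:80

      [Install]
      WantedBy=multi-user.target

runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, start, web.service]
```

One file, one boot, no hands on the keyboard; adding more services is just more `write_files` entries.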