Yup. Having migrated from a puppet + custom scripts environment and a terraform + custom scripts environment, I find k8s a breath of fresh air.
I get that it's not for everyone, and I wouldn't recommend it to everyone. But once you start getting a pretty diverse ecosystem of services, k8s solves a lot of problems while being pretty cheap.
Storage is a mess, though, and something that really needs to be addressed. I typically recommend that people who want persistence not use k8s.
> Storage is a mess, though, and something that really needs to be addressed. I typically recommend that people who want persistence not use k8s.
I have come to wonder if this is actually an AWS problem and not a Kubernetes problem. I mention this because the CSI controllers seem to behave sanely, but they are only as good as the requests being fulfilled by the IaaS control plane. I secretly suspect that EBS just wasn't designed for such a hot-swap world.
Now, I posit this because I haven't had to run clusters in Azure or GCP to know whether my theory has legs.
I guess the counter-experiment would be to forgo the AWS storage layer and try Ceph or Longhorn, but no company I've ever worked at wants to blaze trails on that front, so they just build up institutional tribal knowledge about treating PVCs with kid gloves.
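For what it's worth, the switch itself is mostly just pointing PVCs at a different StorageClass; here's a minimal sketch using the Python kubernetes client, assuming Longhorn is installed and registers its usual "longhorn" StorageClass (the class name, PVC name, and namespace are my assumptions, not anything from a real setup):

    # Create a PVC backed by Longhorn rather than the default EBS-backed class.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running inside the cluster

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "data-not-on-ebs"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "longhorn",  # or e.g. "rook-ceph-block" for Ceph via Rook
            "resources": {"requests": {"storage": "10Gi"}},
        },
    }

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )

The manifest is the easy part; the trail-blazing is operating the storage backend itself, which is exactly what nobody wants to sign up for.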
Honestly, this just feels like Kubernetes solving the easy problems and ignoring the hard bits. You notice the pattern a lot after enough time spent watching new software being built.
> Yup. Having migrated from a puppet + custom scripts environment and a terraform + custom scripts environment, I find k8s a breath of fresh air.
Having experience with both the former and the latter (in big tech), and then having gone on to write my own controllers and deal with fabric and overlay networking problems, I'm not sure I agree.
In 2025 engineers need to deal with persistence, they need storage, they need high-performance networking, they need HVM isolation and they need GPUs. When a philosophy starts to get in the way of solving real problems and your business falls behind, that philosophy will be left on the side of the road. IMHO it's destined to go the same way as OpenStack when someone builds a simpler, cleaner abstraction, and it will take the egos of a lot of people with it when it does.
My life experience so far is that "simpler and cleaner" tends to be mostly achieved by ignoring the harder bits of actually dealing with the real world.
I used Kubernetes' (lack of) support for storage as an example elsewhere; it's the same sort of thing, where you can look really clever and elegant if you just ignore the 10% of the problem space that's actually hard.
The problem is that k8s is both an orchestration system and a service provider.
Grid/batch/tractor/cube are all much simpler to run at scale. Moreover, they can support complex dependencies (though mapping storage is harder).
But k8s fucks around with DNS and networking, and disables swap.
Making a simple deployment is fairly straightforward.
But if you want _any_ kind of CI/CD you need Flux, and for any kind of config management you need Helm.
K8s has swap now. I am managing a fleet of nodes with 12TB of NVMe swap each. Each container gets a swap limit of (memory limit / node memory) * (total swap). There's no way to specify swap demand in the pod spec yet, so it needs to be managed “by hand” with taints or some other correlation.
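To make that arithmetic concrete, here's a tiny sketch of the formula as described above (the function and the 8 GiB / 256 GiB example numbers are mine; only the 12TB of swap comes from this comment):

    # Per-container swap limit, per the formula above:
    #   swap limit = (container memory limit / node memory) * total node swap
    def container_swap_limit(memory_limit_bytes: int,
                             node_memory_bytes: int,
                             node_swap_bytes: int) -> int:
        return int(memory_limit_bytes / node_memory_bytes * node_swap_bytes)

    # Illustrative: an 8 GiB container on a 256 GiB node with 12 TiB of swap
    GiB, TiB = 1024 ** 3, 1024 ** 4
    print(container_swap_limit(8 * GiB, 256 * GiB, 12 * TiB) // GiB)  # -> 384 (GiB)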