I don't get the etcd hate. You can run single-node etcd in simple setups. You can't easily replace it, because much of the Kubernetes API is a thin wrapper around etcd primitives like watch, which are essential to writing controllers and don't map cleanly onto most other databases, certainly not SQLite or frictionless hosted databases like DynamoDB.
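To show what I mean by "thin wrapper": here's a minimal sketch, in Go with the official etcd clientv3 package, of the watch primitive the API server's own Watch verb is built on. The endpoint is a placeholder, and /registry/pods/ is just where a stock kube-apiserver happens to store pod objects.

```go
// A minimal sketch of the etcd watch primitive that the Kubernetes API
// server's Watch verb sits on top of. The endpoint is a placeholder;
// /registry/pods/ is the default storage prefix for pods.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Stream every change under the prefix: the same kind of event feed
	// that a controller's informers ultimately depend on.
	for resp := range cli.Watch(context.Background(), "/registry/pods/", clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			fmt.Printf("%s %s\n", ev.Type, ev.Kv.Key)
		}
	}
}
```

Try expressing that kind of resumable, prefix-scoped change feed on DynamoDB or SQLite and you'll see why swapping the store out isn't trivial.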
What actually makes Kubernetes hard to set up by yourself is a) CNIs, particularly if you intend to avoid cloud-provider-specific CNIs, support all the networking (and security!) features, and still get high performance; and b) all the cluster PKI, with certificates for every component, which Kubernetes made an absolute requirement because, well, production-grade security.
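To give a feel for (b), here's a rough sketch of the kind of certificate chain a tool like kubeadm generates for you: a cluster CA plus a signed serving cert per component. The names, SANs, and validity periods below are illustrative, not an exact reproduction of what kubeadm does, and error handling is elided for brevity.

```go
// A sketch of cluster PKI bootstrapping: generate a CA, then sign a
// serving cert for one component. Illustrative values throughout;
// errors elided for brevity.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

func main() {
	// Cluster CA: every component trusts this, directly or indirectly.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "kubernetes-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert for the API server. The SANs must cover every name
	// clients dial, which is where most hand-rolled setups go wrong.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "kube-apiserver"},
		DNSNames:     []string{"kubernetes", "kubernetes.default"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	_, _ = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	// Now repeat for the kubelets, controller-manager, scheduler,
	// etcd peers, front-proxy... hence the pain.
}
```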
So if you think you're going to make an "easier" Kubernetes, I mean, you're throwing away all the lessons learned and the reasons we got here in the first place. CNI is hardly the naive approach to the problem.
Complaining about YAML and Helm is dumb. Kubernetes doesn't force you to use either; the API server expects JSON in the end anyway. Use whatever you like.
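To make that concrete: a manifest is just a serialized API object, and you can produce the JSON in whatever language you like. A sketch in Go using the official typed client libraries (the names and image here are made up):

```go
// A sketch of the "no YAML required" point: build a Deployment as a
// typed Go value and emit the JSON the API server actually consumes.
// Names and image are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(2)
	dep := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "web", Image: "nginx:1.27"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out)) // pipe to `kubectl apply -f -` if you like
}
```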
I'm going out on a limb to say you've only ever used hosted Kubernetes, then. A sibling comment mentioned their need for vanity tooling to babysit etcd, and my experience was similar.
If you are running single-node etcd, that would also explain why you don't get it: you've been very, very, very, very lucky never to have that one node fail, and you've never had to resolve the very real problem of ending up with just two etcd nodes running.
No, for years I ran three-node etcd clusters that Kops set up. Kops deployed an etcd-operator to babysit them and take care of backups and the like. It was set-and-forget. We had controlled disruptions all the time as part of Kubernetes control-plane updates, with no issues.
And you know... etcd supports five-node clusters, precisely to support people who are paranoid about extended single-node failure.
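The arithmetic behind that is the standard Raft quorum rule: a cluster of n members needs floor(n/2)+1 of them alive to commit, so three nodes survive one failure and five survive two. A trivial sketch:

```go
// Quorum arithmetic for an n-member Raft cluster like etcd:
// quorum = floor(n/2)+1, tolerated failures = n - quorum.
package main

import "fmt"

func main() {
	for _, n := range []int{1, 3, 5} {
		quorum := n/2 + 1
		fmt.Printf("members=%d quorum=%d tolerated failures=%d\n", n, quorum, n-quorum)
	}
}
```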