* Documentation treats it as a second-class citizen, if that (loadbalancer & volume docs are heavily biased towards cloud providers)
* Many cloud-provided Kubernetes offerings always use the exact same VM images to back the nodes, so they don't really have to care about whatever config your bare-metal cluster has or needs.
RancherOS/k3s can be really quite nice for getting bare-metal clusters up & going fast. They don't always feel the most complete though, mostly lacking in documentation around failure modes. Even RancherOS has a bias towards cloud clusters, but at least it's quite easy to get a simple k3s cluster going. I'd personally recommend going that way: RancherOS if you're managing multiple clusters, plain k3s if you're running just one. It even comes with a pretty decent LoadBalancer & volume provisioner out of the box. If you need better management of volumes, Longhorn or MinIO isn't bad.
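For reference, the happy path really is about this short (the install command & token path are straight from the k3s docs; the server IP is a placeholder):

    # single-node k3s server; ships with servicelb + the local-path volume provisioner
    curl -sfL https://get.k3s.io | sh -
    sudo k3s kubectl get nodes

    # to join a second bare-metal box: grab the token from the server...
    sudo cat /var/lib/rancher/k3s/server/node-token
    # ...then on the new node:
    curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -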
microk8s/KinD are for dev environments only, and I wouldn't recommend them for any bare-metal cluster. 'Fun' to screw around with though.
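If you do want something to screw around with, kind is a couple of commands (the cluster name here is arbitrary):

    # kind runs each 'node' as a docker container; fine for dev, not for bare metal
    kind create cluster --name scratch
    kubectl cluster-info --context kind-scratch
    kind delete cluster --name scratch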
Edit: I had a lot of really obnoxious DNS problems, mostly down to the docker daemon & how the host's resolver config interacts with k8s/k3s. Super annoying when everything works in docker containers run by hand but not in k8s. Once you get your bare-metal system configured correctly it'll be fine. It's also very confusing how many different network options there are, and their claims are dubious at best.
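If you hit the same thing: a common culprit is a loopback stub resolver (e.g. systemd-resolved's 127.0.0.53) leaking into what docker & the kubelet hand to pods. The standard first checks look roughly like this (pod name is arbitrary):

    # test cluster DNS from inside a throwaway pod
    kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never \
      -- nslookup kubernetes.default.svc.cluster.local

    # see what resolv.conf the pod actually got
    kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never \
      -- cat /etc/resolv.conf

    # and check CoreDNS itself (k3s runs it in kube-system)
    kubectl -n kube-system logs -l k8s-app=kube-dns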
To expand on the network subsystems: canal/calico/flannel, ipvs-based vs iptables-based, etc. We did a bunch of low-latency (sub-ms) perf testing of ipvs vs iptables. The docs say ipvs should be both faster (throughput) and lower latency, but our measurements didn't show that for either small or large numbers of pods. This was on a small cluster, so that could be skewing the results.
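If anyone wants to reproduce, first confirm which mode you're actually benchmarking (the /proxyMode endpoint on port 10249 is a kube-proxy default):

    # kube-proxy reports its active mode on its metrics port
    curl http://localhost:10249/proxyMode   # prints "iptables" or "ipvs"

    # ipvs mode: the virtual service table should be populated
    sudo ipvsadm -Ln

    # iptables mode: the service rules live in KUBE-SVC-* chains
    sudo iptables-save | grep -c KUBE-SVC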
Never mind that it's a rather huge PITA to switch between them all. Rancher/K3S makes it a bit easier, but still annoying.
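On k3s at least the switch itself is one install flag (--kube-proxy-arg is a real k3s option; the annoying part is flushing the stale rules from the old mode yourself):

    # run kube-proxy in ipvs mode; needs the ip_vs kernel modules loaded
    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--kube-proxy-arg=proxy-mode=ipvs" sh -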
> loadbalancers, volumes are heavily biased towards cloud providers
Can you even run a "loadbalancer" if all you have is a single machine with a single IP behind a router you don't control? I got stuck on that the last time I tried running my own kubes.
Not necessarily a router you don't control, but MetalLB does provide some nice LoadBalancer constructs for a bare-metal deployment. Putting VyOS in front of it is magical!
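For the curious, the MetalLB side is pretty small. A sketch of layer-2 mode using its current CRD-based config (pool name & address range are placeholders; pick addresses your LAN isn't handing out via DHCP):

    # reserve a slice of the LAN for Service type=LoadBalancer IPs,
    # then announce them via ARP on the local segment
    kubectl apply -f - <<'EOF'
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: lan-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.240-192.168.1.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: lan-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - lan-pool
    EOF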