Kubernetes on bare metal is actually pretty easy. Kubernetes on a hosted solution that doesn't offer a managed version is prone to error. On bare metal you can usually make some guarantees about bandwidth and storage speed. Trying to roll out a cluster on a service that can't give you those guarantees is truly a nightmare.
I would also say that if you are going to be administering clusters at your company, you should at least set up a cluster from scratch (it doesn't have to be bare metal) and learn how the Kubernetes control plane works by breaking it in various ways.
In my experience most people don't like black magic; they want something they understand on some level. A fully managed k8s cluster is black magic. Once you have set up a vanilla cluster, you get a much better feel for how the control plane components work together to get things done.
I have tried several times over the past few years to install Kubernetes on bare metal, and it has never worked.
I don't mean installing it on VMs on a laptop, I mean on a real linux cluster of 8 to 32 nodes, with real networks and real switches.
Managing bare metal machines is a cakewalk compared to getting Kubernetes running in-house, at least in my experience.
Obviously the cloud providers do it, so it's possible. But IMO it is something you do only if you have a full-time admin team available to set it up and manage it. It's not by any stretch of the imagination something you install and forget about.
> Kubernetes on bare metal is actually pretty easy.
I would not call it easy at all. The last time I tried, about a year ago, you still needed a special load balancer (https://metallb.universe.tf) to get it going. Has this changed?
That's just not true, especially if you compare it to the LoadBalancer you get on a cloud platform which usually involves zero clicks. I'm not saying it's impossible but it's definitely not "easy".
Hint: you'd better know what all of these are in your environment:
For a basic configuration featuring one BGP router and one IP address range, you need 4 pieces of information:
The router IP address that MetalLB should connect to,
The router’s AS number,
The AS number MetalLB should use,
An IP address range expressed as a CIDR prefix.
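Those four pieces of information map directly onto MetalLB's legacy ConfigMap format. A minimal sketch of a BGP configuration, with every address and AS number a placeholder you'd replace for your own network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1   # router IP MetalLB connects to (placeholder)
      peer-asn: 64501          # the router's AS number (placeholder)
      my-asn: 64500            # the AS number MetalLB uses (placeholder)
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24        # CIDR range MetalLB hands out (placeholder)
```

Newer MetalLB releases replace this ConfigMap with CRDs (`IPAddressPool`, `BGPPeer`, `BGPAdvertisement`), but the same four inputs are still what you need to know.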
But then "When announcing in layer2 mode, one node in your cluster will attract traffic for the service IP."
This bottlenecking seems undesirable. At the very least, if you have one "main" traffic-heavy service, whichever node ends up servicing that IP address could see elevated CPU usage from processing all of the network traffic via kube-proxy.
The obvious solution would be to allocate, say, two or more IP addresses for the service, with DNS round robin set up across them. Then, as long as those IPs are being handled by different nodes, you are not bottlenecking nearly as badly. Perhaps I am missing it, but I don't see a feature that forces those IP addresses to be claimed by different nodes. (If such a feature were strict, you would want more data-plane nodes than IPs, so that one node going down doesn't leave part of the round-robin DNS unclaimed by any node.)
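The two-Service workaround described above can be sketched like this: two `LoadBalancer` Services selecting the same pods, each pinning a different external IP from the MetalLB pool (all names, selectors, and IPs here are placeholders). Note that nothing in this sketch forces the two IPs onto different nodes; in layer2 mode each IP is still announced by whichever node MetalLB elects:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-a            # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.10.10   # pinned from the MetalLB pool (placeholder)
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-b            # second service, same pods, different external IP
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.10.11   # placeholder
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

A DNS record with A entries for both IPs then gives you the round robin; losing a node degrades one entry rather than the whole service.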
I wouldn't use it in prod when there are other alternatives from cloud providers. But to say it is difficult to configure for a bare metal dev cluster is not true. The instructions are pretty clear.
I don't disagree; I think it is easy to install on a bare metal cluster, although I think using HAProxy is just as easy and probably a better solution. I was just pointing out that it has been in beta for a very long time.
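For comparison, the HAProxy alternative mentioned here is roughly this small: run HAProxy on a dedicated edge box and spread traffic across the cluster's NodePorts. A minimal sketch, where every node address and port is a placeholder for your environment:

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s_http
    bind *:80                        # external entry point
    default_backend k8s_nodes

backend k8s_nodes
    balance roundrobin
    # NodePort of the target Service on each worker (placeholders)
    server node1 10.0.0.11:30080 check
    server node2 10.0.0.12:30080 check
    server node3 10.0.0.13:30080 check
```

The `check` keyword makes HAProxy health-check each node and stop sending traffic to ones that go down, which is the main thing MetalLB's layer2 mode gives you via failover.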