It's really not unless you are doing a half-assed job of setting up bare boxes.
Let's see, to get something even half-reasonable on a bare box you need at minimum the following (a couple of the pieces are sketched below):
process monitor: For this you can use systemd, supervisord, etc.
logging: rsyslog or similar.
http reverse proxy: nginx or haproxy.
deployment mechanism: probably scp if you are going this ghetto, but git pull + build and/or pull from S3 are all common at this level, plus a bunch of bash to properly restart things.
backups: depends on persistence; Litestream for SQLite these days, pg_dump/whatever for an RDBMS, WAL shipping if you are running the DB on your own boxes.
access control: some mechanism to manage either multiple users + homes or a shared user account with a shared authorized_keys.
security: lock down sshd, the reverse proxy, etc., and stay on top of patching.
So at bare minimum there are a ton of concepts you need to know; you are essentially regressing back to the days of hardcore sysadmins... just without all the hardcore sysadmins to do the job correctly (because they are now all busy running k8s setups).
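To make that concrete, here is a rough sketch of just the process-monitor and reverse-proxy pieces, assuming a hypothetical myapp binary listening on 127.0.0.1:8080 (unit name, paths, flags and hostname are all made up):

  # hypothetical unit file; adjust user, paths and flags to your app
  sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
  [Unit]
  Description=myapp (example)
  After=network-online.target

  [Service]
  User=myapp
  ExecStart=/opt/myapp/bin/myapp --listen 127.0.0.1:8080
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target
  EOF
  sudo systemctl daemon-reload && sudo systemctl enable --now myapp

  # minimal nginx reverse-proxy stanza in front of it
  sudo tee /etc/nginx/conf.d/myapp.conf >/dev/null <<'EOF'
  server {
      listen 80;
      server_name myapp.example.com;
      location / { proxy_pass http://127.0.0.1:8080; }
  }
  EOF
  sudo nginx -t && sudo systemctl reload nginx

And that still leaves logging, backups, deploys, users and sshd to sort out.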
Meanwhile, to deploy a similarly simple app to k8s (assuming you are buying managed k8s) you need to know the following things (minimal sketch below):
container + registry: i.e. Dockerfile + docker build + docker push gcr.io/mything.
deployment: describes how to run your docker image, what args, how many resources, whether it needs volumes, etc. You can go ghetto and not use service accounts/workload identity and still be in better shape than bare boxes.
service: how your service exposes itself, ports, etc.
ingress: how to route traffic from the internet into your service, usually by HTTP host, paths, etc.
Generally this is equivalent but with far fewer moving parts.
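Concretely, the deployment and service for such an app are something like this (image name, ports and replica count are made up; the ingress is a similarly small chunk of yaml):

  # hedged sketch: a deployment running the image, and a service in front of it
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: mything
  spec:
    replicas: 2
    selector:
      matchLabels: { app: mything }
    template:
      metadata:
        labels: { app: mything }
      spec:
        containers:
        - name: mything
          image: gcr.io/myproject/mything:v1
          ports:
          - containerPort: 8080
          resources:
            requests: { cpu: 100m, memory: 128Mi }
  EOF
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: mything
  spec:
    selector: { app: mything }
    ports:
    - port: 80
      targetPort: 8080
  EOF

That is the whole deployable unit; a redeploy is pushing a new image tag and re-applying.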
There is no SSH server; patching is now mostly just a matter of pressing the upgrade button your provider gives you. You now only need to know docker, kubectl and some yaml instead of systemd unit files, sshd_config, nginx.conf, bash, rsync/tarsnap/etc.
K8s's reputation for being hard to run is deserved.
K8s's reputation for being hard to use is grossly undeserved.
I mean, container+registry is possibly more complex to start and maintain than just creating a service file with systemd. Deployment can be as easy as "systemctl restart service", and it seems to me that configuring all of those resources in Kubernetes is far more difficult than just setting up a simple service on a bare box. Not to mention that you could use Docker too.
And by the looks of it, ingress doesn't seem trivial: not only do you need to understand how NGINX works as a reverse proxy, you also need to understand how Kubernetes interacts with NGINX. You also ignored backups for Kubernetes; if you're just doing full-disk backups, you can do that with regular boxes too, especially if they're just VMs.
> There is no SSH server; patching is now mostly just a matter of pressing the upgrade button your provider gives you.
No SSH server might be an advantage for you, but for me it means it's far more difficult to know what's happening when Kubernetes doesn't do what I want it to do.
> You now only need to know docker, kubectl and some yaml instead of systemd unit files, sshd_config, nginx.conf, bash, rsync/tarsnap/etc.
But I still need to know how to set up all the applications I use, which is the most important part. And understanding systemd service files, sshd_config, nginx.conf and rsync is far, far easier than understanding Kubernetes. Kubernetes manages those things, so you actually need to understand both the underlying concepts and the abstractions Kubernetes puts on top of them.
You are also comparing self-hosted with managed, and then only mentioning the disadvantages of self-hosted. Of course with a managed setup you don't need to worry about SSH, but you do have to worry about dealing with the managed service itself. If you were to set up self-hosted Kubernetes you'd still need to learn about SSH, or about account management. That's not a property of Kubernetes but of self-hosted versus managed.
You don't run your own registry (except in very rare circumstances where that makes sense). You author a Dockerfile (or better yet use a tool like jib that creates containers automatically without even a Docker daemon), then you push to a hosted registry.
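As a hedged sketch (the Go build and registry path are made up; substitute your own language and project):

  # hypothetical Dockerfile for a static Go binary
  cat > Dockerfile <<'EOF'
  FROM golang:1.22 AS build
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /out/mything ./cmd/mything

  FROM gcr.io/distroless/static
  COPY --from=build /out/mything /mything
  ENTRYPOINT ["/mything"]
  EOF

  docker build -t gcr.io/myproject/mything:v1 .
  docker push gcr.io/myproject/mything:v1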
Ingress is trivial to use. Internally it's less trivial, but you don't need to peek inside unless you manage to break it, which, generally speaking, you won't; even if you think whatever you are doing is very special, it probably handles it already. That is the benefit of literal thousands of teams using the exact same abstraction layer.
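For reference, a complete ingress is roughly this much yaml, assuming an nginx ingress controller is installed and a service named mything already exists (the hostname is made up):

  kubectl apply -f - <<'EOF'
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: mything
  spec:
    ingressClassName: nginx
    rules:
    - host: mything.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: mything
              port:
                number: 80
  EOF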
You don't have SSH (this is objectively good); instead you can get exec on a container if you need it (assuming the container contains a shell). You can now control, in a fine-grained and integrated fashion, exactly who is allowed to exec into a container, and you get k8s audit events for free (and if you are using a hosted system like GKE, it automatically flows into said provider's audit system).
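Something like this, to make it concrete (namespace, group and resource names are made up):

  # shell into a running container (assuming the image ships a shell)
  kubectl exec -it deploy/mything -- /bin/sh

  # who may do that is plain RBAC: a Role granting only pods/exec, bound to a group
  kubectl apply -f - <<'EOF'
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: pod-exec
    namespace: prod
  rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  EOF
  kubectl apply -f - <<'EOF'
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: oncall-can-exec
    namespace: prod
  subjects:
  - kind: Group
    name: oncall@example.com
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-exec
    apiGroup: rbac.authorization.k8s.io
  EOF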
The point is you can buy managed k8s; there isn't an equivalent for old-school sysadmin work that doesn't amount to outsourcing to a body shop.
Given that you haven't managed to identify any of k8s's real downsides, here they are:
It's a bunch of yaml. Yeah, I don't like that either, but there are tools like Tanka that make it suck less.
It's complex under the hood. When your process runs, it's in network, fs and pid namespaces, with cgroups managing resources. There is probably some sort of overlay network or some other means of granting each container an IP and making it routable (BGP, OSPF, etc.).
The interaction between ingress, service and pods goes through an indirection layer called Endpoints that most people don't even realize exists.
Scheduling is complex: there are tunables for affinity and anti-affinity, and it also needs to interact with cluster autoscaling. Autoscaling itself is a complex topic where things like hot capacity etc. aren't exactly sorted out yet. (The affinity and QoS knobs are sketched after this list.)
QoS and pod priority aren't well understood by most. When will a pod be preempted? What is the difference between eviction and preemption? Most people won't be able to tell you.
This means that when things go wrong, there are a lot of things that could be wrong. Thankfully, if you are paying someone else for your cluster, a) it's their problem and b) it's probably happening to -all- of their customers, so they have a lot of incentive to fix it for you.
Multi-cluster is a mess, with no reasonable federation options in sight. Overlay networking makes managing the network architecture more difficult when multiple clusters (or external services outside the cluster) are involved.
The ecosystem has some poor-quality solutions, namely Helm, Kustomize, Pulumi, etc. Hopefully these will die out some day and make way for better ones.
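To make the affinity and QoS points concrete, a hedged sketch of the knobs involved (the priority class, labels and numbers are all made up; requests equal to limits is what puts a pod in the Guaranteed QoS class):

  kubectl apply -f - <<'EOF'
  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: business-critical
  value: 100000
  EOF
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: mything
  spec:
    replicas: 2
    selector:
      matchLabels: { app: mything }
    template:
      metadata:
        labels: { app: mything }
      spec:
        priorityClassName: business-critical
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels: { app: mything }
              topologyKey: kubernetes.io/hostname   # one replica per node
        containers:
        - name: mything
          image: gcr.io/myproject/mything:v1
          resources:                  # requests == limits -> Guaranteed QoS
            requests: { cpu: 500m, memory: 256Mi }
            limits: { cpu: 500m, memory: 256Mi }
  EOF

Higher-priority pods can preempt lower-priority ones when the scheduler can't otherwise place them; eviction is a separate, node-local mechanism triggered by resource pressure.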
Yet for all of these downsides it's still clearly a ton better than managing your own boxes manually.
Especially because you are -exceedingly- unlikely to encounter any of the problems above at any scale where self-hosting would have been tenable.
I think the best way to think about k8s is as the modern distributed kernel. Just like with Linux, you aren't expected to understand every layer of it, merely the interface (i.e. the resource API for k8s; syscalls/ioctl/dev/proc/sysfs for Linux). The fact that everyone is using it is what grants it the stability necessary to obviate the need for that internal knowledge.
Just want to note that we have a mix of ECS and stuff still running on bare VMs using tools like systemd, bash script deploys, etc., and I 100% agree with you. Once someone understands container orchestration platform concepts, deploying to something like k8s or ECS is dead simple.
> You don't run your own registry (except in very rare circumstances where that makes sense). You author a Dockerfile (or better yet use a tool like jib that creates containers automatically without even a Docker daemon), then you push to a hosted registry.
Dockerfiles aren't trivial. I've helped migrate some services to Docker and it's not "just put it in Docker and that's it", especially because most applications aren't perfectly isolated.
> Ingress is trivial to use. Internally it's less trivial, but you don't need to peek inside unless you manage to break it, which, generally speaking, you won't; even if you think whatever you are doing is very special, it probably handles it already. That is the benefit of literal thousands of teams using the exact same abstraction layer.
Literal thousands of teams also use NGINX, and that still doesn't mean there aren't configuration errors, issues and other things to debug. Not to mention that a lot of applications can run without requiring a reverse proxy.
> You don't have SSH (this is objectively good); instead you can get exec on a container if you need it (assuming the container contains a shell). You can now control, in a fine-grained and integrated fashion, exactly who is allowed to exec into a container, and you get k8s audit events for free (and if you are using a hosted system like GKE, it automatically flows into said provider's audit system).
And that's cool if you need that, but "ssh user@machine" looks far easier.
> The point is you can buy managed k8s; there isn't an equivalent for old-school sysadmin work that doesn't amount to outsourcing to a body shop.
Look, the other day I had to set up a simple machine to monitor some hosts in a network. I use Ansible to automate it, but ultimately it boils down to "sudo apt install grafana prometheus; scp provisioning-dashboards; scp prometheus-host-config; scp unattended-upgrades-config" plus some hardening configs. I don't need redundancy; uptime is good enough with that. If Prometheus can't connect to the hosts, I can test from the machine, capture traffic or whatever, and know that there isn't anything in the middle. If Grafana doesn't respond, I don't need to worry about whether the ingress controller is working well or if I missed some configuration.
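For what it's worth, the whole thing is roughly this (IPs are illustrative, node_exporter on the target hosts is assumed, and in reality Ansible templates the files):

  # hypothetical provisioning of the monitoring box (Debian-ish)
  sudo apt install -y grafana prometheus unattended-upgrades

  # point Prometheus at the hosts to scrape
  sudo tee /etc/prometheus/prometheus.yml >/dev/null <<'EOF'
  global:
    scrape_interval: 30s
  scrape_configs:
    - job_name: lab-hosts
      static_configs:
        - targets: ['10.0.0.11:9100', '10.0.0.12:9100']
  EOF

  sudo systemctl restart prometheus grafana-server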
Could I run that with Kubernetes? Well, managed k8s is already out of the window because the service is internal to an enterprise network. And self-hosted Kubernetes? I still need to do the same base Linux box configuration, plus set up k8s, plus set up the infra and networking for k8s, plus the tools I actually want to use.
And that's the point. I am not trying to identify any k8s downsides; what I said is that setting up k8s looks far more confusing than setting up a standard Linux box, and from what you say I am even more convinced than before that it is. I still need to understand and run the programs I want (which is most of the cost of deployment), but I also need to understand Kubernetes and, if I am using a managed platform, I need to understand how the managed platform works, configure it, and link it with my system... In other words, not only do I need the general knowledge for application deployment but also its specific implementation in Kubernetes.