
I don’t get these complaints AT ALL. I don’t use kubernetes, simply because I am running apps in managed environments, and have been using docker-compose with vscode remote to emulate those environments. But being able to define your resources and how they are linked via a schema makes sense even from a dev perspective. Isn’t that all that kubernetes is doing at its most basic? Sounds like that saves time to me over manually setting everything up for every project you work on.


From what I've seen with Kubernetes, the problem is that in order to define the resources and links with a schema, it needs several abstractions and extra tooling. So you not only need to define the schema (which isn't that trivial) but also understand what Kubernetes does, how to work with it, and how to solve problems. It's a tool that adds complexity and difficulty; if you're not using the advantages it provides (mainly multi-node management and scaling capabilities), you're just handicapping yourself.


This is a common misconception. k8s isn't about scale, multi-node or even reliability/resiliency. We had solutions for all of that before it came along.

It's about having a standard API for deployment artifacts.

The k8s manifests are trivial (if verbose); the complexity comes from running the underlying layer, which, when you are small, you simply outsource to AWS or GCP.
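
To give a feel for "trivial if verbose": a simple stateless app's entire Deployment boils down to something like the sketch below. Everything named here (app name, image path, port) is a placeholder I'm making up for illustration, not anyone's real setup.

    # A minimal Deployment, applied to a cluster you already have credentials for.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: gcr.io/my-project/my-app:1.0.0
            ports:
            - containerPort: 8080
    EOF

Verbose, sure, but it has the same shape for every app, which is the point of a standard API.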

There is some k8s know-how that is table stakes for a good experience, namely knowing what extra stuff you need on top of a base k8s cluster, e.g. cert-manager and external-dns.

Overall it's a lot less required knowledge than it takes to manipulate lower level primitives like GCP/AWS directly or be capable of setting up standalone boxes.

Before anyone starts with "but serverless!", you need to consider the spaghetti you end up with if you go down that route: API Gateway, 100x Lambdas and thousands upon thousands of lines of Terraform boilerplate does not a happy infra team make.


> Overall it's a lot less required knowledge than it takes to manipulate lower level primitives like GCP/AWS directly or be capable of setting up standalone boxes.

I do not agree with this. I have tried to get started with Kubernetes and it's far more confusing to set up than just setting up something on a standalone box, mainly because the set of knowledge I need to set up anything on Kubernetes looks like a superset of what I need to set up the same thing on bare Linux.


It's really not, unless you are doing a half-assed job of setting up bare boxes.

Let's see: to get something even half-reasonable on a bare box you need, at a minimum:

process monitor: For this you can use systemd, supervisord, etc.

logging: rsyslog or similar.

http reverse proxy: nginx or haproxy

deployment mechanism: probably scp if you are going this ghetto, but git pull + build and/or pull from S3 are all common at this level, plus a bunch of bash to properly restart things.

backups: depends on persistence; Litestream for SQLite these days, pg_dump/whatever for an RDBMS, WAL shipping if you are running the DB on your own boxes.

access control: some mechanism to manage either multiple users + homes or shared user account with shared authorized_keys.

security: lock down sshd, reverse proxy, etc. stay on top of patching.

So at a bare minimum there is a ton of concepts you need to know; you are essentially regressing back into the days of hardcore sysadmins... just without all the hardcore sysadmins to do the job correctly (because they are now all busy running k8s setups). A rough sketch of a few of those pieces is below.
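
Just the process monitor, deployment and backup items, assuming a single self-contained binary; every unit name, path, user and host here is invented for illustration:

    # process monitor: a systemd unit (run as root; all names/paths are placeholders)
    cat > /etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=My app
    After=network.target

    [Service]
    User=myapp
    ExecStart=/opt/myapp/bin/myapp --port 8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable --now myapp

    # deployment mechanism: the scp-and-restart flavour mentioned above
    scp ./build/myapp deploy@myhost:/opt/myapp/bin/myapp.new
    ssh deploy@myhost 'sudo mv /opt/myapp/bin/myapp.new /opt/myapp/bin/myapp && sudo systemctl restart myapp'

    # backups: the pg_dump flavour, as a nightly cron entry
    echo '0 3 * * * postgres pg_dump mydb | gzip > /var/backups/mydb.sql.gz' > /etc/cron.d/mydb-backup

And that is before logging, the reverse proxy, access control and patching.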

Meanwhile, to deploy a similarly simple app to k8s (assuming you are buying managed k8s) you need to know the following things:

container + registry: i.e. a Dockerfile + docker build + docker push gcr.io/mything.

deployment: Describes how to run your docker image, what args, how much resources, does it need volumes, etc. You can go ghetto and not use service accounts/workload-identity and still be in better shape than bare boxes.

service: how your service exposes itself, ports etc.

ingress: how to route traffic from internet into your service, usually HTTP host, paths etc.

Generally this is equivalent but with far fewer moving parts; a sketch of the container, service and ingress pieces is below.
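
A hedged sketch of those pieces, reusing the Deployment sketched further up the thread; the registry path, hostname and ports are placeholders:

    # container + registry: build and push (project, name and tag are invented)
    docker build -t gcr.io/my-project/my-app:1.0.0 .
    docker push gcr.io/my-project/my-app:1.0.0

    # service + ingress: route traffic from the internet to the Deployment above
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
      - host: myapp.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
    EOF

That, plus the Deployment, is the whole inventory for a simple stateless app.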

There is no ssh server, patching is now mostly just a matter of pressing the upgrade button your provider gives you. You now only need to know docker, kubectl and some yaml instead of systemd control files, sshd_config, nginx.conf, bash, rsync/tarsnap/etc.

K8s' reputation for being hard to run is deserved.

Its reputation for being hard to use is grossly undeserved.


I mean, container + registry is possibly more complex to start and maintain than just creating a service file with systemd. Deployment can be as easy as "systemctl restart service", and it seems to me that configuring all of those resources in Kubernetes is far more difficult than just setting up a simple service on a bare box. Not to mention that you could use Docker there too.

And by the looks of it, ingress doesn't seem trivial: not only do you need to understand how NGINX works as a reverse proxy, but also how Kubernetes interacts with NGINX. You also ignored backups for Kubernetes; if you're just doing full-disk backups you can do that with regular boxes too, especially if they're just VMs.

> There is no ssh server, patching is now mostly just a matter of pressing the upgrade button your provider gives you.

No SSH server might be an advantage for you, but for me it means it's far more difficult to know what's happening when Kubernetes doesn't do what I want it to do.

> You now only need to know docker, kubectl and some yaml instead of systemd control files, sshd_config, nginx.conf, bash, rsync/tarsnap/etc.

But I still need to know how to set up all applications I use, which is the most important part. And understanding systemd service files, sshd_config, nginx.conf and rsync is far, far easier than understanding Kubernetes. Kubernetes manages those things so you actually need to understand both the underlying concepts and also the abstractions Kubernetes is making.

You are also comparing self-hosted with managed, and then only mentioning the disadvantages of self-hosted. Of course with a managed setup you don't need to worry about SSH, but you do have to worry about dealing with the managed service itself. If you were to set up self-hosted Kubernetes you'd still need to learn about SSH, or about account management. That's not a property of Kubernetes but of self-hosted vs. managed.


You don't run your own registry (except in very rare circumstances where that makes sense). You author a Dockerfile (or, better yet, use a tool like Jib that creates containers automatically without even a Docker daemon), then you push to a hosted registry.

Ingress is trivial to use. Internally it's less trivial, but you don't need to peek inside unless you manage to break it, which, generally speaking, you won't; even if you think whatever you are doing is very special, it probably handles it already. That is the benefit of literally thousands of teams using the exact same abstraction layer.

You don't have SSH (this is objectively good); instead you can get exec on a container if you need it (and that container contains a shell). You can now control, in a fine-grained and integrated fashion, exactly who is allowed to exec into a container, and you get k8s audit events for free (and if you are using a hosted system like GKE, it automatically flows into that provider's audit system).
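
As a sketch of what "who is allowed to exec" means in practice (the role, namespace and workload names are made up, and you would still need a RoleBinding to attach the role to actual users or groups):

    # RBAC: whoever is bound to this role may exec into pods in the my-app namespace
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-exec
      namespace: my-app
    rules:
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create"]
    EOF

    # the SSH replacement itself: shell into a running container
    kubectl exec -it -n my-app deploy/my-app -- /bin/sh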

The point is you can buy managed k8s, there isn't an equivalent for old school sysadmin that doesn't amount to outsourcing to a body shop.

Given that you haven't managed to identify any of k8s' real downsides, here they are:

It's a bunch of yaml. Yeah I don't like that either but there are tools like Tanka that make that suck less.

It's complex under the hood. When your process runs, it's in network, fs and pid namespaces, and you have cgroups managing resources. There is probably some sort of overlay network or some other means of granting each container an IP and making that routable (BGP, OSPF, etc). Interaction between ingress, service and pods goes through an indirection layer called endpoints that most people don't even realize exists. Scheduling is complex: there are tunables for affinity and anti-affinity, and it also needs to interact with cluster autoscaling. Autoscaling itself is a complex topic where things like hot capacity aren't exactly sorted out yet. QoS, i.e. pod priority, isn't well understood by most. When will a pod be pre-empted? What is the difference between eviction and pre-emption? Most people won't be able to tell you. This means when things go wrong there are a lot of things that could be wrong; thankfully, if you are paying someone else for your cluster, a) it's their problem and b) it's probably happening to -all- of their customers, so they have a lot of incentive to fix it for you.

Multi-cluster is a mess, no reasonable federation options in sight. Overlay networking makes managing network architecture when involving multiple clusters more difficult (or external services outside of said cluster).

Ecosystem has some poor quality solutions. Namely helm, kustomize, Pulumi, etc. Hopefully these will die out some day and make way for better solutions.

Yet for all of these downsides it's still clearly a ton better than managing your own boxes manually, especially because you are -exceedingly- unlikely to encounter any of the problems above at any scale where self-hosting would have been tenable.

I think the best way to think about k8s is that it's the modern distributed kernel. Just like with Linux, you aren't expected to understand every layer of it, merely the interface (i.e. the resource API for k8s; syscalls/ioctl/dev/proc/sysfs for Linux). The fact that everyone is using it is what grants it the stability necessary to obviate the need for that internal knowledge.


Just want to note that we have a mix of ECS and stuff still running on bare VMs using tools like systemd, bash script deploys, etc., and I 100% agree with you. Once someone understands container orchestration platform concepts, deploying to something like k8s or ECS is dead simple.


> You don't run your own registry (except in very rare circumstances where that makes sense). You author a Dockerfile (or, better yet, use a tool like Jib that creates containers automatically without even a Docker daemon), then you push to a hosted registry.

Dockerfiles aren't trivial. I've helped migrate some services to Docker and it's not "just put it in Docker and that's it", especially because most applications aren't perfectly isolated.

> Ingress is trivial to use. Internally it's less trivial, but you don't need to peek inside unless you manage to break it, which, generally speaking, you won't; even if you think whatever you are doing is very special, it probably handles it already. That is the benefit of literally thousands of teams using the exact same abstraction layer.

Literally thousands of teams also use nginx, and that still doesn't mean there aren't configuration errors, issues and other things to debug. Not to mention that a lot of applications can run without requiring a reverse proxy at all.

> You don't have SSH (this is objectively good); instead you can get exec on a container if you need it (and that container contains a shell). You can now control, in a fine-grained and integrated fashion, exactly who is allowed to exec into a container, and you get k8s audit events for free (and if you are using a hosted system like GKE, it automatically flows into that provider's audit system).

And that's cool if you need that, but "ssh user@machine" looks far easier.

> The point is you can buy managed k8s, there isn't an equivalent for old school sysadmin that doesn't amount to outsourcing to a body shop.

Look, the other day I had to set up a simple machine to monitor some hosts in a network. I have Ansible to automate it, but ultimately it boils down to "sudo apt install grafana prometheus; scp provisioning-dashboards; scp prometheus-host-config; scp unattended-upgrades-config" plus some hardening configs. I don't need redundancy; uptime is good enough with that. If Prometheus can't connect to the hosts, I can test from the machine, capture traffic or whatever, and know that there isn't anything in the middle. If Grafana doesn't respond I don't need to worry about whether the ingress controller is working well or if I missed some configuration.

Could I run that with Kubernetes? Well, managed k8s is already out of the window because the service is internal to an enterprise network. And self-hosted Kubernetes? I still need to do the same base Linux box configuration, plus set up k8s, plus set up the infra and networking within k8s, plus the tools I actually want to use.

And that's the point. I am not trying to identify any k8s downsides; what I said is that setting up k8s looks far more confusing than setting up a standard Linux box, and from the things you say I am even more convinced than before that it indeed is. I still need to understand and run the programs I want (which is most of the cost of deployment), but I also need to understand Kubernetes and, if I am using a managed platform, I need to understand how the managed platform works, configure it, link it with my system... In other words, not only do I need the general knowledge for application deployment but also the specific implementation in Kubernetes.


It would be great if otherwise smart people also learned to recognize when what’s trivial to them is not trivial to the masses. K8s is not trivial, not to me at least. I’m not some super duper engineer, but I do alright? That’s all I can say.


Have you made a good faith attempt to use k8s? Or are you just regurgitating how hard it is to use based on what you hear on the Internet?

My experience is that even mediocre engineers are capable of understanding k8s concepts with minimal assistance, assuming things are either sufficiently standard (i.e. google-able) or well documented with any company-specific practices.

K8s == hard to run, k8s != hard to use.


I have made a good faith attempt at using it, yes. Is spending an entire week on it good faith?

I did get everything running, but I also saw so many settings and features I had to take on faith as being handled without understanding them. I just chose not to bother further because this seemed like a minefield of future problems as we onboarded other engineers. The number of times I’ve seen production deployments go down due to k8s misconfiguration on other teams has only validated my concerns.


So what you are saying is everything worked? That sounds like k8s did its job.

You aren't meant to fully understand it in a week, any more than you are expected to fully understand all of sysadmin in a week.

Just because you don't know what every single directive in an nginx config does doesn't mean you can't use it effectively and learn what they mean when the time comes.

k8s isn't much different. You don't need to know what a liveness probe is the first time you use it; you can learn about it as you go (most likely when you run into a badly behaved program that needs forced restarts when it hangs).
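
When that day comes it's a few extra lines of yaml on the container spec. A hedged sketch, patching the kind of Deployment discussed upthread; the endpoint, port and timings are all invented:

    # Strategic-merge patch adding a liveness probe to an existing Deployment.
    kubectl patch deployment my-app --type=strategic -p '
    spec:
      template:
        spec:
          containers:
          - name: my-app
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 15'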

Of course, if you are running it yourself that is entirely different; you really do need to know how it works to do that, but you should be using hosted k8s unless you have hardcore systems folk and actually need that.


You’re right that nginx is also complicated, but it seems like we are talking about different alternatives.

The solution I went with, given my limited knowledge of some of these things, was to use Elastic Beanstalk. You write your flask application, upload the zip and that’s pretty much it. You get a UI to do all the configuration, and for the most part nothing there is hard to decipher or google. The only hiccup might be when you’re trying to connect it to RDS and to the external internet, but even that is straightforward as long as you follow the right instructions. We run apps that power entire SaaS organizations and this system seems to be more than sufficient. Why complicate further? We have other fish to fry anyway.
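
(If you prefer a terminal over the console UI, the same workflow with the EB CLI is roughly the sketch below; the app and environment names are placeholders, and the platform string varies with whatever AWS currently offers.)

    # one-time setup in the project directory
    eb init my-app --platform python-3.11 --region us-east-1
    eb create my-app-prod

    # every deploy afterwards packages the project and uploads it
    eb deploy my-app-prod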


If your apps are simple enough to use PaaS then all power to you. :)


I am not an SDE, but I did try one weekend to set up a self-managed k8s cluster at my previous company. I did have some previous knowledge of using k8s in GCP, but when trying to set it up on a completely new cluster outside of GCP, I ran into some issues where I felt I was out of my depth. I think I unknowingly exposed the whole cluster to the public internet.

On the other hand, with Docker Swarm I built a cluster in less than a day. So yeah, setting up k8s isn't as trivial as you are making it out to be.
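
For reference, the Swarm version was roughly this (a hedged sketch; the manager hostname, token and stack name are placeholders, and docker-compose.yml is whatever compose file you already have):

    # on the manager node
    docker swarm init

    # on each worker: join with the token that 'docker swarm init' printed
    docker swarm join --token "$WORKER_TOKEN" manager-host:2377

    # deploy a stack from an existing compose file
    docker stack deploy -c docker-compose.yml mystack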


Friends don’t let friends run their own k8s instances. You should use cloud managed deployment and rely on their security services.

For comparison, you can just as easily expose/hide your VM as your managed k8s cluster.



