
I really don't understand all these complaints about how Kubernetes is so complex, how it's an investment, etc. I am a single developer who uses Kubernetes for two separate projects, and in both cases it has been a breeze. Each service gets a YAML file (with the Deployment and Service together), then add an Ingress and a ConfigMap. That's all. It's good practice to still have a managed DB, so the choice of Kubernetes vs. something running on EC2 doesn't change anything here.
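
For concreteness, here is a minimal sketch of that per-service YAML file, with a hypothetical app name, image, and port (the Deployment and Service sit in one file separated by ---); the Ingress and ConfigMap follow the same pattern:

    # Hypothetical single-file manifest: one Deployment plus one Service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0.0   # placeholder image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080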

Setting up a managed Kubernetes cluster for a single containerized application is currently no more complicated than setting up AWS Lambda.

What you get out of it for free is amazing though. The main one for me is simplicity: each deployment is a single command, which can be (but doesn't have to be) triggered by CI. Compare this to the previous situation of running "docker-compose up" on multiple hosts. Then, if what you just deployed is broken, Kubernetes will tell you and will not route traffic to the new pods. Nothing else comes close to this. Zero-downtime deployments are a nice bonus. Scaling is simple too: just add or remove a node and you're set.
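
A sketch of the mechanism behind "will not route traffic to the new pods" (the health endpoint and probe timings below are assumptions, not something from this thread): a readinessProbe on the container means that during a rolling update a new pod only joins the Service once the check passes, so a broken build stalls the rollout instead of serving errors.

    # Hypothetical readiness probe on the container from the manifest above.
    containers:
      - name: myapp
        image: registry.example.com/myapp:1.0.1   # placeholder new version
        ports:
          - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz        # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10

The single deploy command is then something like "kubectl apply -f myapp.yaml", and CI can run "kubectl rollout status deployment/myapp" to fail the pipeline if the new pods never become Ready.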

Oh, and finally, you can take your setup to a different provider, and only need some tweaks on the Ingress.
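
As an illustration of how small those tweaks tend to be (this Ingress is hypothetical): the rules and backend stay the same across providers, and what usually changes is the ingressClassName and any controller-specific annotations.

    # Hypothetical Ingress: swap ingressClassName (and controller-specific
    # annotations) when moving between providers; the rules stay the same.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      ingressClassName: nginx      # the provider/controller-specific knob
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80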



I agree; I consider Kubernetes to be a simplification. I have two apps running at my company. The first is a forest of PHP files and crontabs strewn about a handful of servers. There are weird name resolution rules, shared libs, etc. Despite my best efforts, it's defied organization and simplification for 2.5 years.

The second is a nice, clean EKS app. Developers build containers, and I bind them with configuration and drop 'em like they're hot, right where they belong. The builds are simple. The deployments are simple. Most importantly, there are clear expectations for both operations/scaling and development. This lets both groups move quickly with little need for coordination.


It's really complicated when something goes wrong. That is my only criticism, particularly in the various CNI layers out there. You really have to know exactly how everything works to get through those days, and that is beyond the average person who can create a Docker container and push it into the cluster, which is the usual success metric.


Networking is complex, unfortunately, and cloud networking has a legacy of trying to support things that never should be supported (stretched L2s, 10.0.0.0/8 everywhere, etc.).

Things get much simpler if you limit CNI complexity by going towards tooling that is at least conceptually simpler and matches the original, pre-CNI design of k8s, IMHO.


99% of people aren't going to use a different CNI plugin to what their managed distribution ships with. Same goes for peeking under the covers of storage plugins, kubelet config, etc.

You pay AWS/GCP for that these days and just use the API.


It was AWS who had trouble fixing our CNI issues…


If AWS broke your CNI and that caused you problems, it is just as possible for AWS to break your networking in general.

I have used managed K8s for 4 years and literally never had any problems with CNI. My clusters run with no problems.


Yes, we had no problems for 2 years. Then we had problems.


In what cases would I have to worry about that?


If it breaks. If you have an outage because of weird K8s networking issues (and I've seen them), you'll suddenly care very much.


Yes, same. We had one where we had to recreate a whole EKS cluster because AWS couldn't fix it either.

I don’t sleep these days.



