
> 3. Prefer fewer larger clusters with namespaces (and node pools if needed) to lots of tiny clusters.

This is interesting - the last time I worked with Microsoft engineers from Azure, they said exactly the opposite.

One workload = One cluster.

"There are too many shared resources in Kubernetes that can leak collateral damage from one workload to another."



Azure might also require special precautions. Honestly, I've seen Azure have a lot of networking issues, for example. But this is based on scuttlebutt and limited personal experience.

I've found that on GKE, certain workloads benefit from a dedicated node pool. This gets them their own CPU, RAM, and volume I/O. Yes, I could imagine that there are shared Kubernetes control plane resources that might be affected, but I haven't seen that with any of our workloads. It might get more complicated if you have lots of in-cluster networking.
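For illustration, pinning a workload to its own pool on GKE is just a node selector on the label GKE stamps on every node; a minimal sketch, where the pool name, image, and resource numbers are all made up:

  # Hypothetical Deployment pinned to a dedicated GKE node pool.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: heavy-worker
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: heavy-worker
    template:
      metadata:
        labels:
          app: heavy-worker
      spec:
        nodeSelector:
          cloud.google.com/gke-nodepool: heavy-pool   # label GKE applies to nodes in the pool
        containers:
        - name: worker
          image: example.com/heavy-worker:latest      # placeholder image
          resources:
            requests:
              cpu: "4"
              memory: 8Gi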

But none of this is my area of expertise. I just think that Kubernetes can mostly be pretty pleasant in practice for companies that have outgrown PaaS offerings like Heroku and Render.com.


Not really. You can run fairly large clusters; you just need differently sized node pools. For example, we run Apache NiFi in AKS, and it's a complete memory and CPU hog. We have a 16 CPU / 64 GB RAM node pool for that workload, which we target with a node selector. Microservices use a different node pool, and system services run on the default node pool.
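Roughly what that looks like on AKS, assuming a hypothetical pool named "nifipool" that has also been tainted so other pods stay off it (the label and taint keys are the standard ones; the names and sizes are placeholders):

  # Hypothetical NiFi pod pinned to the dedicated 16 CPU / 64 GB node pool.
  apiVersion: v1
  kind: Pod
  metadata:
    name: nifi-0
  spec:
    nodeSelector:
      agentpool: nifipool          # label AKS puts on nodes in the pool
    tolerations:
    - key: workload                # assumes the pool was tainted workload=nifi:NoSchedule
      operator: Equal
      value: nifi
      effect: NoSchedule
    containers:
    - name: nifi
      image: apache/nifi:latest
      resources:
        requests:
          cpu: "12"
          memory: 48Gi
        limits:
          cpu: "14"
          memory: 56Gi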

If you're running Azure Functions with KEDA, set up a node pool for that with a lower CPU/memory footprint.
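The KEDA side is just a ScaledObject pointing at the Deployment that runs the functions host, with that Deployment pinned to the smaller pool the same way as above; a rough sketch where the deployment, queue, and auth names are placeholders:

  # Hypothetical ScaledObject scaling a functions-host Deployment on queue depth.
  apiVersion: keda.sh/v1alpha1
  kind: ScaledObject
  metadata:
    name: orders-fn-scaler
  spec:
    scaleTargetRef:
      name: orders-fn              # Deployment running the Azure Functions host
    minReplicaCount: 0
    maxReplicaCount: 20
    triggers:
    - type: azure-queue            # scale on Azure Storage queue length
      metadata:
        queueName: orders
        queueLength: "5"
        accountName: examplestore  # placeholder storage account
      authenticationRef:
        name: orders-fn-auth       # assumes a TriggerAuthentication exists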


It's really quite frustrating to find that out from experience. Things like operators, anything Cluster*, named resources, etc.

If you're doing relatively simple things with the cluster, then you can do namespaces. The more custom shit you do, the better off you are with true isolation.
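CRDs are the classic example of why namespaces aren't enough here: even when the custom resources themselves are namespaced, the CustomResourceDefinition is cluster-scoped, so two tenants can't install different versions of the same operator side by side. A minimal sketch with a made-up "widgets.example.com" CRD:

  # The CRD itself is a cluster-scoped object shared by every namespace.
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com      # hypothetical CRD
  spec:
    group: example.com
    names:
      kind: Widget
      plural: widgets
      singular: widget
    scope: Namespaced              # Widget objects live in namespaces...
    versions:                      # ...but this definition is shared cluster-wide
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true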


What is their definition of a workload? Do they want a cluster per microservice? Per application? Per customer?


We had different sets of microservices, each handling a specific part of the system.

One part was responsible for data transformation and the other was responsible for user modifications.

It's hard to say where the cutting point is; it depends on the system architecture.


Wouldn't a namespace make more sense than a whole cluster?



