
In a way it's redundant to have the state twice: once in Kubernetes itself and once in the Terraform state. This can lead to problems when resources are modified through mutating webhooks or similar. Then you need to mark your properties as "computed fields" or something like that. So I am not a fan of managing applications through TF. Managing clusters might be fine, though.
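For anyone who hasn't hit this: the Kubernetes provider's kubernetes_manifest resource takes a computed_fields argument for exactly this case. A minimal sketch, assuming a Deployment whose labels and an injected env var get rewritten after admission (the field paths here are illustrative):

    resource "kubernetes_manifest" "app" {
      manifest = yamldecode(file("${path.module}/deployment.yaml"))

      # Don't diff fields that a mutating webhook (or the cluster itself)
      # rewrites after admission; without this, every plan shows spurious drift.
      computed_fields = [
        "metadata.labels",
        "metadata.annotations",
        "spec.template.spec.containers[0].env", # hypothetical injected agent config
      ]
    }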


That's fair, and I think it's why more people don't go the Terraform route. Our setup is pretty simple, which helps. We treat entire clusters more like namespaces: a cluster runs a single main application and its support services, as a form of "application-level availability zone". We still get bin packing within each cluster, just not the MAXIMUM BIN PACKING we'd get running all the applications in one big cluster, and there's some extra EKS cost since each cluster has its own control-plane fee.
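A rough sketch of the shape, with a hypothetical shared module stamped out once per application (module names and inputs are illustrative, not our actual config):

    # One EKS cluster per application, each built from the same module.
    module "payments_cluster" {
      source       = "./modules/app-cluster" # hypothetical shared module
      cluster_name = "payments"
      # the main application and its support services deploy into this cluster
    }

    module "search_cluster" {
      source       = "./modules/app-cluster"
      cluster_name = "search"
    }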

We do sometimes run into the mutating webhook issue, for example when running third-party JVM workloads we tell the Datadog operator to inject JMX agents into applications via a mutating webhook. For those cases we manage the application using the Helm provider pass-through I mentioned, so what Terraform stores in its state, diffs, and manages on change is the input manifests passed through Helm. For those resources it never inspects the Kubernetes API objects directly: it just triggers a new Helm release if the inputs change on the Terraform side, or deletes the release if they're removed.
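A minimal sketch of that pass-through pattern with the Helm provider; the release name, chart path, and values file are placeholders:

    resource "helm_release" "app" {
      name      = "my-app"                       # placeholder release name
      namespace = "my-app"
      chart     = "${path.module}/charts/my-app" # local chart wrapping the raw manifests

      # Terraform's state and diffs cover only these inputs; the rendered
      # objects in the cluster (including webhook mutations) are never read back.
      values = [file("${path.module}/values/my-app.yaml")]
    }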

This is not a beautiful solution, but it works well in practice with minimal fuss when we hit those Kubernetes provider annoyances.


Exactly our experience, hence we moved to plain K8s objects six years ago and never looked back.



