
I think it's important to judge the support model in context of what kubernetes is...

As an API-based cloud-management platform, Kubernetes has a robust, stable core topped with multiple layers of abstractions that build on, and use, one another. There are relatively few dead ends in the project's history, and those have all tended to be superseded by much nicer, much smarter, much easier abstractions.

An operating system going out of support after 9 months is a total show-stopper, naturally. But Kubernetes runs on top of those 'stable' layers. Its stability is in its primitives and the actual API contracts. The primitives being introduced today don't magically impact my production cluster, and because Kubernetes is built like a Russian nesting doll of APIs, there is very little incentive for the project to make any changes, much less hasty changes, much less poorly thought-out changes, to the core APIs. Unlike many other dependencies, Kubernetes changes impact deployment and management, but rarely impact application-level concerns. If it worked in old Kubernetes, it's generally gonna work in new Kubernetes.
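To make that concrete: a minimal Deployment manifest like the sketch below (the names and image are just placeholders) targets apps/v1, which went GA back in Kubernetes 1.9, and applies unchanged on clusters many releases apart:

```yaml
# Minimal Deployment against the stable apps/v1 API.
# "web" and the nginx image are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Nothing in that manifest cares which Kubernetes version is running underneath, which is exactly the kind of contract stability I mean.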

I feel absolutely no support-pressure to upgrade my on-prem installations when my deployments are cross-compatible with the updated versions on my cloud providers.

I feel a lot of developer-giddiness pressure to upgrade my on-prem installations because my devs and I want the cool new things they're baking into the platform...

From my experience with k8s, the issues are all related to running a Linux cluster that uses Docker and iptables -- a pretty unavoidable pain when running Docker containers on a Linux cluster, IMO. Support in this context broadly means a stable API and a stable core, and Kubernetes has had both for a while now.



