
I would say k8s 2.0 needs:

1. gRPC/proto3-based APIs, so that controlling k8s clusters is easy from any programming language, not just (practically speaking) Golang as is the case today. This could also make k8s controllers easier to write and manage, even though it admittedly might complicate things on the API-server side when it comes to CRDs.

2. PostgreSQL, or a pluggable storage backend, by default instead of etcd.

3. A clear identity-based, L7-aware, ABAC-based access control interface that could be implemented by CNIs, for example.

4. userns applied by default.

5. An easier pluggable per-pod CRI system where microVM-based and container-based runtimes can easily co-exist based on the workload type.


All the APIs, including CRDs, already have a well-described, public, introspectable OpenAPI schema you can use to generate clients. I use the TypeScript client generated and maintained by the Kubernetes organization. I don't see what advantage adding a binary serialization wire format brings. I think gRPC makes sense when there are savings to be had from latency, multiplexing, streams, etc., but for a control plane like Kubernetes that doesn't seem necessary to me.
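To sketch what that introspection looks like: a client generator really just walks the `definitions` of the OpenAPI document the API server serves at /openapi/v2 (or /openapi/v3). The fragment below is a hand-written excerpt in that shape, not the real schema:

```python
import json

# Hand-written excerpt in the shape of a Kubernetes OpenAPI document
# (illustrative only; a real client fetches this from the API server).
OPENAPI_FRAGMENT = json.loads("""
{
  "definitions": {
    "io.k8s.api.core.v1.Pod": {
      "properties": {
        "apiVersion": {"type": "string"},
        "kind": {"type": "string"},
        "spec": {"$ref": "#/definitions/io.k8s.api.core.v1.PodSpec"}
      }
    }
  }
}
""")

def field_types(doc: dict, name: str) -> dict:
    """Map each property of a named definition to its type or $ref target."""
    props = doc["definitions"][name]["properties"]
    return {f: p.get("type", p.get("$ref")) for f, p in props.items()}

print(field_types(OPENAPI_FRAGMENT, "io.k8s.api.core.v1.Pod"))
```

A generator in any language does essentially this walk and emits typed structs/classes from it, which is why no binary IDL is strictly needed on the client side.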


I haven't used CRDs myself for a few years now (probably since 2021), but I still remember that developing CRDs was an ugly and hairy experience, to say the least, partly due to the flaws of Golang itself (e.g. no traits like in Rust, no macros, no enums...). With protobuf you can easily compile your definitions to any language with clear enum and oneof implementations, use the standard protobuf libraries to do deepCopy, merge, etc. for you, and even add basic validations in the protobuf definitions themselves. gRPC/protobuf would basically allow you to develop k8s controllers very easily in any language.
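As a hypothetical sketch of that point (the message, field, and package names below are invented for illustration, not anything Kubernetes ships), a CRD-like resource expressed in proto3 gets real enums and tagged unions for free:

```proto
syntax = "proto3";

package example.v1;

// Hypothetical CRD-style resource; all names invented for illustration.
enum Phase {
  PHASE_UNSPECIFIED = 0;
  PENDING = 1;
  RUNNING = 2;
  FAILED = 3;
}

message BackupSpec {
  string schedule = 1;

  // oneof is a genuine tagged union, which Go CRD types can only
  // approximate with nilable pointer fields and doc comments.
  oneof target {
    string volume_name = 2;
    string bucket_url = 3;
  }
}

message BackupStatus {
  Phase phase = 1;
}
```

protoc then emits these as native enums and sum types in Rust, TypeScript, Python, and so on, instead of every client re-deriving them from Go conventions.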


CRDs are not tied to Golang in any way whatsoever; <https://www.redhat.com/en/blog/writing-custom-controller-pyt...> and <https://metacontroller.github.io/metacontroller/guide/create...> are two concrete counter-examples, with the latter being the most "microservices" extreme. You can almost certainly implement them in bash if you're trying to make the front page of HN.
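The reason this works from any language is that a controller is, at bottom, an HTTP long-poll against a predictable URL. A minimal sketch of that URL construction (the group, version, and plural below are made-up example values, and a real controller would add auth and reconnect logic):

```python
from typing import Optional

def watch_url(api_server: str, group: str, version: str, plural: str,
              namespace: Optional[str] = None,
              resource_version: Optional[str] = None) -> str:
    """Build the endpoint a controller long-polls to watch a custom resource."""
    base = f"{api_server}/apis/{group}/{version}"
    if namespace:
        base += f"/namespaces/{namespace}"
    url = f"{base}/{plural}?watch=true"
    if resource_version:
        url += f"&resourceVersion={resource_version}"
    return url

# Watching a hypothetical "backups" CRD in the example.com group, cluster-wide:
print(watch_url("https://10.0.0.1:6443", "example.com", "v1", "backups"))
```

Any language with an HTTP client and a JSON parser can consume the resulting event stream, which is all the Python and metacontroller approaches above really do.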


I never said that CRDs are tied to Golang. I said that the experience of compiling CRDs to Golang types (via controller-gen back then, or whatever is used these days) was simply ugly, partly due to the flaws of the language itself. What I mean is that gRPC could standardize the process of compiling both k8s's own resource definitions and CRDs, making it much easier to develop k8s controllers in any language. However, this would probably complicate the logic of the API server, which would have to understand and decode binary protobuf-serialized resource representations instead of the current text-based JSON ones.


> have a well described public & introspectable OpenAPI schema you can use to generate clients.

Last time I tried loading the OpenAPI schema in the Swagger UI on my work laptop (this was ~3-4 years ago, on an 8th-gen Core i7 with 16 GB RAM), it hung my browser and eventually crashed the tab.


Mine's 2.3 MB and did fine in whatever came up first for "openapi viewer" in Google. I did this tonight to check the API type for a vendor's CRD.


Loading it in what? I just slurped the 1.8 MB openapi.json for v1.31 into Mockoon and it fired right up instantly.


I think the HTTP API with OpenAPI schema is part of what's so great about Kubernetes and also a reason for its success.


1. The built-in types are already protos. IMO gRPC wouldn't be a good fit - it would actually make the system harder to use.

2. This can already be achieved today via kine [0].

3. Couldn't you build this today via a regular CNI? Cilium NetworkPolicies and others basically do this already.

4 and 5 probably don't require a 2.0 - they can easily be added within the existing API via a KEP (cri-o already configures userns based on annotations).
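For what it's worth, user namespaces already have an opt-in pod-spec field upstream (behind the UserNamespacesSupport feature gate on older releases), so flipping the default is plausibly just a policy change rather than a 2.0. The names below other than hostUsers are placeholders:

```yaml
# Opting a single pod into a user namespace today. Requires a runtime
# with userns support; on older clusters, the UserNamespacesSupport gate.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo            # example name
spec:
  hostUsers: false             # false = run the pod in its own user namespace
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
```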

[0] - https://github.com/k3s-io/kine


Apart from 1 and 3, probably everything else can be added today if the people in charge have the will to do it - and that's assuming I'm right that these points are important enough to standardize. However, the big enterprise-tier money in Kubernetes is made by dumbing down the official k8s interfaces, especially those related to access control (e.g. k8s's own NetworkPolicy compared to Istio's access-control-related resources).



