Kubernetes itself contains so many layers of abstraction. There are pods, which is the core new idea, and it's great. But now there are deployments, and replica sets, and namespaces... and it makes me wish we could just use Docker Swarm.
Even Terraform seems to live on just a single layer and was relatively straightforward to learn.
Yes, I am in the middle of learning K8s so I know exactly how steep the curve is.
The core idea isn’t pods. The core idea is reconciliation loops: you have some desired state - a picture of how you’d like a resource to look - and little controller loops that run forever, comparing that picture to the world and updating the world to match.
Much of the complexity then comes from the enormous number of resource types - including all the custom ones. But the basic idea is really pretty small.
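To make that concrete, here's the whole control-loop shape as shell pseudocode - the helper commands are made up, but every controller is some variation of this:

    # hypothetical helpers standing in for API-server reads and real-world writes
    while true; do
      desired=$(get-desired my-resource)    # what the spec says it should look like
      observed=$(get-observed my-resource)  # what the world actually looks like
      if [ "$desired" != "$observed" ]; then
        make-it-so "$desired"               # converge the world toward the spec
      fi
      sleep 10
    done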
I find Terraform much more confusing - there’s a spec, and the real world... and then an opaque blob of something I don’t understand that Terraform sticks in S3 or on your file system, and then... presumably something similar to a one-shot reconciler that wires it all together each time you plan and apply?
Someone saying "This is complex but I think I have the core idea" and someone responding "That's not the core idea at all" is hilarious and sad. BUT ironically, what you just laid out about TF is exactly the same - you just manually trigger the loop (via CI/CD) instead of having a thing wait for new configs to be loaded. The state file you're referencing is just a cache of the current state, and TF reconciles the old and new state.
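In other words, the trigger is just two commands, and the reconcile runs once and exits instead of looping (the commands are real; the CI/CD wrapper around them is whatever you make it):

    terraform plan -out=tfplan   # diff desired (.tf files) against cached state + refreshed real world
    terraform apply tfplan       # execute that diff, then snapshot the new state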
I've always had the conceptual model that Terraform executes something that resembles a merge using a three-way diff.
There’s the state file (the base commit: what the system looked like the last time Terraform successfully executed), the current system (the main branch, which might have changed since you “branched off”), and the Terraform files (your branch).
Running terraform then merges your branch into main.
Now that I’m writing this down, I realize I never really checked whether this is accurate - tf apply works regardless, of course.
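You can even make the analogy literal with diff3(1), which does exactly this kind of three-way merge relative to a common base. The file names here are made up - and of course you can't really dump the live world into a .tf file - but the merge semantics are the same idea:

    # merge my desired config and the drifted real world, relative to the last-applied state
    diff3 -m desired.tf state-at-last-apply.tf actual-world.tf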
and then the rest of the owl is working out the merge conflicts :-D
I don't know how to have a cute git analogy for "but first, git deletes your production database, and then recreates it, because some attribute changed that made the provider angry"
> a one-shot reconciler that wires that all together each time you plan and apply?
You skipped the { while true; do tofu plan; tofu apply; echo "well shit"; patch; done; } part since the providers do fuck-all about actually, no kidding, saying whether the plan could succeed
To me the core of k8s is pod scheduling on nodes, networking ingress (e.g. a NodePort Service), networking between pods (everything addressable directly), and colocated containers inside pods.
Declarative reconciliation is (very) nice but not irreplaceable (and actually not mandatory, e.g. kubectl run xyz)
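For concreteness, the imperative path is a single real command (nginx here just as an example image) - no manifest involved, though it still just materializes a Pod object:

    kubectl run xyz --image=nginx   # create a pod imperatively
    kubectl get pod xyz             # ...which now exists as an object in the API server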
After you’ve run kubectl run, and it’s created the pod resource for you, what are you imagining will happen without the reconciliation system?
You can invent a new resource type that spawns raw processes if you like, and then use k8s without pods or nodes, but if you take away the reconciliation system then k8s is just an idle etcd instance
Since they already had the reconciliation system - because they decided the main use case was declarative - it makes sense that they used it to implement kubectl run. But they could have done it differently.
Imagine if pods couldn't reach each other and you had to specify all networks and networking rules.
Or imagine that once you created a container you had to manually schedule it on a node - and when the node or pod crashed, you had to manually schedule it somewhere else.
The abstractions are fine and make a lot of sense the deeper you get into k8s.
Custom Resources (CRs), for example. CRs are an amazing design concept. The fact that everything is a CR - with some exceptions, like everything in the v1 API - and that their controllers are always Pods running somewhere is incredible. Debugging cluster irrops is so much easier once you internalize this.
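The debugging pattern that falls out of this is roughly (the commands are real kubectl; the names are made up):

    kubectl get crd                                  # what custom types does this cluster know about?
    kubectl get pods -A | grep some-operator         # find the controller that owns the misbehaving CR
    kubectl logs -n operators deploy/some-operator   # read its reconcile logs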
However, this exemplifies my desire for better UX. It would be great if "kubectl logs" accepted a CR kind, and the API server would automatically find its associated controller and work with the controller-manager to pump out its logs. Shit, I should make that PR, actually.
Even better would be a (much improved) built-in UI that UIops people can use to do something like this. This will become extremely important once the ex-VMware VI admins start descending onto Kubernetes clusters, which will absolutely happen given that k8s is probably the best vCenter alternative that exists (as non-obvious as that seems right now - though I work at Red Hat and am focused on OpenShift, so take that with two grains of salt).