
Yeah, they are basically DIY-ing their own "cloud" in a way, which is what Kubernetes was designed for.

It's indeed a lot of maintenance to run things this way. You're no longer just operating your own code; you're also operating (as you mentioned) CI/CD, secret management, logging, analytics, storage, databases, cron tasks, message brokers, etc. You're doing everything.

On the other hand (if you're not doing anything super esoteric or super cloud-specific), migrating Kubernetes-based deployments between clouds has always been super easy for me. I'm currently managing a k3s cluster that's running a nodepool on AWS and a nodepool on Azure.



I’m a little confused by the first paragraph of this comment. Kubernetes wasn’t designed to be an end-to-end solution for everything needed to support a full production distributed stack. It manages a lot of tasks, to be sure, but it doesn’t orchestrate everything that you mentioned in the second paragraph.


> Kubernetes wasn’t designed to be an end-to-end solution for everything needed to support a full production distributed stack.

I'll admit I know very little about the history of Kubernetes before ~2017, BUT 2017-present Kubernetes is absolutely designed to be, and capable of being, your end-to-end solution for everything.

Take the random list I made, matched against the famously meme-worthy CNCF landscape page:

- CI/CD [github, gitlab, circleci]: https://landscape.cncf.io/guide#app-definition-and-developme...

- secret management [IAM, SecretsManager, KeyVault]: https://landscape.cncf.io/guide#provisioning--key-management

- logging & analytics [CloudWatch, AppInsights, Splunk, Tableau, PowerBI]: https://landscape.cncf.io/guide#observability-and-analysis--...

- storage [S3, disks, NFS/SMB shares]: https://landscape.cncf.io/guide#runtime--cloud-native-storag...

- databases: https://landscape.cncf.io/guide#app-definition-and-developme...

- cron tasks: [built into Kubernetes as CronJobs; see the sketch after this list]

- message brokers: https://landscape.cncf.io/guide#app-definition-and-developme...
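
To illustrate the "cron is built in" point: this is a minimal CronJob sketch. The name, image, and schedule are hypothetical placeholders, not anything from the thread.

```yaml
# Minimal CronJob sketch; name, image, and schedule are hypothetical.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # hypothetical name
spec:
  schedule: "0 3 * * *"           # standard cron syntax: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: registry.example.com/report:latest  # hypothetical image
              args: ["--run-nightly"]
```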

The idea is to wrap cloud provider resources in CRDs. So instead of creating an AWS ELB or an Azure SLB, you create a Kubernetes Service of type LoadBalancer. Kubernetes is then extensible enough for each cloud provider to swap in what "Service of type LoadBalancer" means for them.
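
Concretely, the exact same manifest works on both clouds; each provider's controller turns it into its native load balancer behind the scenes (app name and ports here are placeholders):

```yaml
# One manifest, many clouds: the provider's controller provisions the real LB.
apiVersion: v1
kind: Service
metadata:
  name: web                  # hypothetical service name
spec:
  type: LoadBalancer         # AWS provisions an ELB, Azure an SLB, etc.
  selector:
    app: web                 # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
```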

For higher-abstraction services (SaaS-like ones mentioned above), the idea is similar. Instead of creating an S3 bucket or an Azure Storage Account, you provision CubeFS on your cluster (so now you have your own S3-compatible service), then you create a CubeFS bucket.

You can replace all the services listed above with free and open-source (foundation-governed) alternatives. As long as you can satisfy the requirements of CubeFS, you can have your own S3 service.
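
As a sketch of what "create a bucket through the cluster" can look like, here's a Rook-style ObjectBucketClaim. This is an illustration of the CRD pattern, not CubeFS's actual API; CubeFS's own resources may differ, and the names below are made up.

```yaml
# Claim a bucket from an in-cluster object store via a CRD (Rook-style).
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: app-data-claim              # hypothetical claim name
spec:
  generateBucketName: app-data      # prefix for the generated bucket name
  storageClassName: my-object-store # hypothetical; points at the in-cluster S3 service
```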

Of course, you're now maintaining the equivalent of GitHub, CircleCI, S3, ....

Kubernetes gives you a unified way of deploying all these things regardless of the cloud provider. Your target is Kubernetes, not AWS, Microsoft or Google.

The main benefit (to me) is that with Kubernetes you get to choose where YOU want to draw the line between lock-in and value. We all have different judgements, after all.

Do you see no value in running and managing Kafka? Maybe SQS is simple enough and cheap enough that you just use it. Replacing it later with a compatible endpoint is cheap.

Are you terrified of building your entire event-based application on top of SQS and Lambda? How about Kafka and ArkFlow?

Now you're obviously trading one risk for another. You shed the risk of vendor lock-in with AWS, but at the same time, just because ArkFlow is free and open source doesn't mean it'll be as well maintained in 8 years as AWS Lambda is gonna be. Maybe, maybe not. You might have to migrate to another service.


> Of course, you're now maintaining the equivalent of GitHub, CircleCI, S3, ....

On this we agree. That's a nontrivial amount of undifferentiated heavy lifting, and none is a core feature of K8S. You are absolutely right that you can use K8S CRDs to make K8S the control plane and reduce the number of idioms you have to think about, but the dirty details are in the data plane.


Yeah, but you significantly increase your chances of getting the data plane working if you are always using the same control plane. The control plane sets up an S3 bucket for you; that bucket could come from AWS, CubeFS, or Backblaze, and you don't care. S3 is a simple protocol, but the same goes for more complex ones.
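
To make the "you don't care where the bucket comes from" point concrete: if the application takes its S3 endpoint from configuration instead of hardcoding AWS, swapping the data plane is a one-line change. A hypothetical Deployment fragment (all names, images, and the in-cluster endpoint are assumptions for illustration):

```yaml
# Hypothetical app wiring: the S3 endpoint is just an environment variable,
# so the data plane behind it can be AWS, CubeFS, or Backblaze.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uploader                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uploader
  template:
    metadata:
      labels:
        app: uploader
    spec:
      containers:
        - name: uploader
          image: registry.example.com/uploader:latest   # hypothetical image
          env:
            - name: S3_ENDPOINT_URL
              # value: https://s3.us-east-1.amazonaws.com        # AWS
              # value: https://s3.us-west-004.backblazeb2.com    # Backblaze B2
              value: http://object-store.storage.svc:9000        # in-cluster (hypothetical service/port)
```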

> and none is a core feature of K8S

The core feature of k8s is "container orchestration," which is extremely broad: you can run whatever you can orchestrate as containers, which is everything. The other core feature is extensibility and abstraction. So to me, CRDs are as core to Kubernetes as anything else, really. They are such a simple concept that custom vs. built-in is sometimes only a matter of availability and quality.
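
As a sketch of how simple the concept is: declaring a brand-new resource type is one short manifest, after which it behaves like any built-in kind. The group, kind, and field below are made up for illustration.

```yaml
# Minimal CRD sketch; after applying this, `kubectl get buckets` works
# like any built-in resource. Group, kind, and fields are hypothetical.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: buckets.example.io        # must be <plural>.<group>
spec:
  group: example.io
  scope: Namespaced
  names:
    plural: buckets
    singular: bucket
    kind: Bucket
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                sizeLimit:
                  type: string    # hypothetical field
```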

> That's a nontrivial amount of undifferentiated heavy lifting

Yes, it is. Like I said, the benefit of Kubernetes is that it gives you the choice of where you wanna draw that line. Running and maintaining GitHub, CircleCI, and S3 is a "nontrivial amount of undifferentiated heavy lifting" to you. The equation might be different for another business or organization. There is a popular "anti-corporation, pro big-government" sentiment on the internet today, right? Would it make sense for, say, an organization like the EU to take a hard dependency on GitHub or CircleCI? Or should they contract OVH and run their own GitHub and CircleCI instances?

People always complain about vendor lock-in, closed-source services, bait-and-switch with services, etc. With Kubernetes, you get to choose what your anxieties are and manage them yourself.


> You significantly increase your chances of getting the data plane working if you are always using the same control plane.

That is 100% not true, and it's why different foundational services have (often vastly) different control planes. The Kubernetes control plane is very good for a lot of things, but not everything.

> People always complain about vendor lock-in, closed-source services, bait-and-switch with services, etc. With Kubernetes, you get to choose what your anxieties are and manage them yourself.

There is no such thing as zero switching costs (even if you are 100% on-premise). Using Kubernetes can help reduce some of them, but you can't take a mature, complicated stack running on AWS in EKS and port it to AKS or GKE (or vice versa) without a significant amount of effort.


Well, you know, we went from not knowing that Kubernetes can orchestrate everything to arguing about "k8s best practices" for portability, so there is progress.

The reality is, yes, nothing has zero switching costs. But there are plenty of best practices for how to utilize k8s for least-headache migrations. It's very doable, and I see it done all the time.



