Can anyone here provide some info on, or a comparison with, Azure Container Service, or any AWS option?
I'm going to go through all of the service offerings this weekend - from Docker Inc's Docker for Azure and Docker for AWS to the native container services on each.
Azure Container Service is simply a PaaS-ish offering of Swarm, DC/OS, or Kubernetes. It still spins up VMs that you can log into, but it handles deployment/provisioning of the product and makes some assumptions about your use case. It's a great way to push a button and have a "real" deployment of those services to evaluate, especially if your goal is a platform-agnostic target. I work for an Azure-focused cloud consultancy, and for any serious production environment we still build out a more custom deployment using a combination of Terraform, Chef, cloud-init, CoreOS, etc.
Thanks for the comment - I'm going to try it out. Want to email me your consultancy's info (email in profile)? I work with a bunch of different companies who use Azure.
> Also them starting the project along with the knowledge they have internally scaling containers helps.
FWIW, most of these are advertising gimmicks, though, and Google has a pretty different internal infrastructure for orchestrating containers that has little to do with K8s.
Google Kubernetes Engine still runs Kubernetes. I've never looked at the Borg or Omega source code and have never worked on a Google team. My understanding is that some key insights developed from Borg and Omega became part of the core concepts of Kubernetes and give it an edge over other open-source orchestration systems. These include grouping containers into pods and using label selectors.
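To make those two concepts concrete, here's a minimal sketch of a Kubernetes manifest (the names and images are hypothetical, not from this thread): two containers grouped into one pod, and a Service that finds that pod via a label selector rather than by name:

```yaml
# A pod grouping two containers that share a network namespace and volumes.
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical name
  labels:
    app: web           # the label the selector below matches on
spec:
  containers:
  - name: server
    image: nginx       # illustrative image
  - name: log-shipper
    image: fluentd     # sidecar living in the same pod
---
# A Service that targets the pod above by label selector, not by name.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # matches any pod carrying this label
  ports:
  - port: 80
```

The point of the selector is that the Service keeps working as pods come and go, as long as new pods carry the matching label.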
Yes, many of Google's technical leads working on Kubernetes and Container Engine are former members of the Borg and Omega teams, so Kubernetes and our hosted version, Container Engine, both benefit from what we learned building those other systems. (I think our 5 most-senior engineers have ~40 years of container management systems experience between them now?)
And it's not just the rather-large core team directly on GKE and k8s, nor the related products like Container Registry [1], Container Builder [2], and Container-Optimized OS [3]. GKE and k8s benefit in other ways too: Google's internal kernel team helps debug customer issues when we trace them to the kernel, and people like Kees Cook are helping with the upstream Kernel Self-Protection Project [4] that make container technology more secure. In addition to that kernel work, Google also has rather-decent security teams and they work with us to improve security in other ways too.
Finally, re: toomuchtodo's question, "Why opt for Google if you're going to use containers in Kubernetes?" Because we hope you find that Container Engine is the best place to run Kubernetes -- and benefit from the other parts of Google Cloud Platform. If you ever find GKE is not that place, and you don't derive value from the rest of GCP, then exactly as toomuchtodo puts it: "You can even move to your own datacenter at some point (relatively) easily."
This is not correct. I've worked with both borg and k8s and k8s is effectively a rewrite of borg using the same container infrastructure. There are differences, but they aren't meaningful.
I can think of a couple that seem meaningful, like the cluster state management architecture (borgmasters/checkpointing vs. everything living under consensus in etcd, roughly), which seems to have introduced real difficulty in bringing Kubernetes to parity with Borg, particularly in the scale department. Then I see a comment like yours and realize, again, that much smarter people than me put thought into it; but that one remains a perceived change that confuses me as an outsider. I'm familiar with the flaws of the borgmaster architecture, but the etcd architecture seems like an oddly drastic rejiggering to address them. I say that with a surface-level understanding of both systems, based on a very short exposure to Borg several years ago, so I'm probably completely wrong or out of date here.
Am I totally off-base, if you're able to speak to this? (Maybe it exists and I've missed it, but I'd love to see a blow-by-blow of the differences and their rationale, too, because that'd be valuable insight on how Google learns.)
I'm not sure what you're talking about with consensus and etcd. That doesn't have anything to do with the end-user experience using k8s on a product like Google Container Engine.
When I say k8s is like borg I mean: it has the same concepts of tasks, jobs, and allocs. The scheduling of those is handled by a k8s scheduler that resembles the borgmaster scheduler (a lot of hand waving here), and the containers themselves execute in an environment much like the one the borglet provides.
Many of the valuable features provided by borgmaster and borglet are provided in k8s and you configure them through similar mechanisms.
Beyond that, in how they are implemented specifically, there are a ton of differences; but for an end user who is just using k8s, not setting up and managing k8s infrastructure, it's conceptually isomorphic.
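If it helps map the vocabulary, here's a rough sketch of the correspondence as an outsider might state it (a hedged summary, not an official mapping, and the names are illustrative): a Borg job of N replicated tasks looks roughly like a Kubernetes Deployment of N pod replicas, and each pod, a co-scheduled group of containers on one machine, plays roughly the role of an alloc:

```yaml
# Rough analogue of a Borg "job" with replicated "tasks":
# a Deployment that keeps 3 identical pods running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend        # hypothetical name
spec:
  replicas: 3           # like a job's task count
  selector:
    matchLabels:
      app: frontend
  template:             # each replica is a pod (roughly an alloc)
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: server
        image: nginx    # illustrative image
```

The analogy is loose (Borg allocs can outlive the tasks scheduled into them, which pods don't model directly), but it conveys why the end-user concepts feel so familiar.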
https://cloud.google.com/container-engine/
Also them starting the project along with the knowledge they have internally scaling containers helps.