it's more like "I am a tiny replaceable cog in a huge money-printing machine. if somebody is going to collect the paycheck anyway, why shouldn't it be me?"
making the hard choice is easy when you have nothing to lose, and it's even easier when you're so rich that the losses don't matter. the folks in the middle have just enough that it would really hurt to lose it. I can't say I blame them for guarding that jealously.
phone # + code is already the most popular method outside of the US, and gaining a lot more traction within the US. Login w/ Facebook does not really affect the success/failure of an app unless you really need their data for your app to work.
Having done multiple K8s migrations, I can tell you that many of the problems around migrating to k8s have little to do with actually setting up the cluster. There's dockerizing all your apps, setting up the build->deploy pipeline for each app, and fixing all the hardcoded hacks where your apps aren't properly 12-factor (failing to take config from env vars, assuming you "always" deploy to a specific cloud, etc).
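To make the 12-factor point concrete, here's a minimal sketch of reading config from environment variables instead of hardcoding it (variable names like APP_DATABASE_URL are made up for illustration, not from this thread; in k8s these would typically be injected via a ConfigMap or Secret):

```python
import os

def load_config(env=os.environ):
    """Read app settings from environment variables, 12-factor style."""
    return {
        # required: fail fast at startup instead of deep inside a request
        "database_url": env["APP_DATABASE_URL"],
        # optional, with a sensible default
        "log_level": env.get("APP_LOG_LEVEL", "INFO"),
    }
```

The point is that nothing here assumes a specific cloud or hostname; the same image runs unchanged in any cluster because all environment-specific values come in from outside.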
The other main component of my time in a k8s engagement revolves around logging, monitoring, alerting and backups of the k8s cluster, which hopefully EKS handles for you.
All told, actually starting the k8s cluster is probably less than 10% of my time.
> All told, actually starting the k8s cluster is probably less than 10% of my time.
+1.
I've found myself (with Azure ACS) re-creating clusters quite often, since it doesn't support upgrades. This takes minutes with my deploy scripts; replicating the state of the cluster you're copying is the main bit of work.
I agree with this, but I think the responsibility for fixing all this stuff lies with the teams responsible for each app. They should be the ones dockerising apps, setting up monitoring, etc. That way you're distributing the devops work across teams, which is what you really want.
Keep going... EKS is still in preview, so you probably shouldn't rely on it in production just yet. However you could check out kops (https://github.com/kubernetes/kops), which makes provisioning a k8s cluster on AWS extremely easy. Good luck! :)
I'd highly, highly recommend using kube-aws over kops for AWS. It's far more transparent, as it uses CloudFormation templates, though it does have a higher upfront time investment. Probably two hours, as opposed to the 30 minutes or so that kops requires.
That's how we've been migrating from our existing terraform-based infra to kubernetes. It made the transition of our staging environment relatively painless.
EKS is likely to be in preview/beta for most if not all of next year. Even after it exits preview, you'd probably want someone else to kick the production tires. Given how Kube on EC2 is doable/manageable (that's what we do) I don't see a reason to stop. If Kube is the right choice for you now, it's the right choice on EC2, and once EKS becomes a thing you can migrate to, you can just do it.
That is fine! Depending on your deadlines, you can go ahead and implement a small-scale kubernetes cluster on EC2; then, when EKS is ready, you can easily migrate your workloads to it, which is essentially the benefit of using kubernetes in the first place.
Whether you're spawning your own cluster or using AKS, you still need to set up a build pipeline and get your applications into a containerizable state. And any configuration, like Dockerfiles or Helm charts, you can still use. Actually setting the cluster up isn't the big deal (with something like kops, at least).