> My favorite way to create a network between all my services hosted in different AWS accounts is to share a VPC from a network account into all my service accounts and use security groups to authorize service-to-service communication. There’s no per-byte tax, zonal architectures are easy to reason about, and security groups work just like you expect.
That's gold advice. I wish AWS RAM supported more services (like AWS EKS).
A small complaint: working with AWS SSO is a bit tedious. My current workaround is to share my ~/.aws/config with everyone so we all have the same profile names and our scripts work for everyone.
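For anyone unfamiliar with what gets shared here: a minimal sketch of appending one AWS SSO profile stanza to ~/.aws/config. Every value below (profile name, start URL, account ID, role) is a placeholder, not a real endpoint:

```shell
# Append a hypothetical SSO profile stanza to ~/.aws/config.
# All values are placeholders; substitute your org's real ones.
mkdir -p ~/.aws
cat >> ~/.aws/config <<'EOF'
[profile team-prod]
sso_start_url = https://example.awsapps.com/start
sso_region = us-east-1
sso_account_id = 111111111111
sso_role_name = Engineer
region = us-east-1
EOF
```

Once everyone has the same stanza, `aws sso login --profile team-prod` and `AWS_PROFILE=team-prod` work identically on every laptop, which is the whole point of sharing the file.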
Depending on your SSO provider, you can list all available roles using saml2aws [0] and then parse the output to generate the relevant config. It is a bit tedious, but it works.
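A rough sketch of the parsing step, assuming the role ARNs arrive one per line (roughly what `saml2aws list-roles` prints, though the exact output format varies by version). The profile-naming scheme (account ID plus role name) is my own assumption; adjust it to your conventions:

```shell
# Hedged sketch: turn role ARNs (one per line) into ~/.aws/config
# profile stanzas. Lines that aren't IAM role ARNs are skipped.
arns_to_profiles() {
  while IFS= read -r arn; do
    case "$arn" in arn:aws:iam::*) ;; *) continue ;; esac
    account=$(printf '%s' "$arn" | cut -d: -f5)  # account ID is field 5
    role=${arn##*/}                              # role name after the last /
    printf '[profile %s-%s]\nrole_arn = %s\n\n' "$account" "$role" "$arn"
  done
}

# Intended usage (not run here):
#   saml2aws list-roles | arns_to_profiles >> ~/.aws/config
```

Checking the generated file into a shared repo or wiki is one way to distribute it, though see the sibling comment about how that scales.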
Synchronizing ~/aws/config gets harder and harder as your team grows because there are both more people who need to receive changes and more people making changes.
I think the human-readable names for AWS accounts need to be part of the account, not part of the laptop. Substrate [1] does this so that you can type commands like `substrate assume-role -domain example -environment production -quality beta` [2] to get where you're going.
I have thought a great deal about whether I also want EKS clusters to officially support nodes in multiple AWS accounts. On the one hand, having the option to create that additional low-level isolation would be lovely, even, and maybe especially, if I didn't always take it. On the other hand, isolating two things from each other but then tying them to the same Kubernetes cluster upgrade schedule feels wrong.
In the end, I decided that if I care about isolating two things enough to put them in separate AWS accounts, I'm willing to spend the $75 per month that it takes to have separate EKS clusters, too. (This opinion perhaps obviously doesn't fit well with hobby/side-project budgets.)
> My current solution is to share my ~/.aws/config with everyone so we all have the same profile names and scripts can work for everyone.
If you're on Mac/Linux you could have everyone use direnv. Add a .envrc file in each git repo (or your script's subdirectory) with `export AWS_PROFILE=profilename`. Now everyone is working with the same profile names without having to pass around config files.
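Concretely, this is the entire .envrc. The profile name below is a placeholder; use whatever name your shared config defines:

```shell
# .envrc at the repo root. Run `direnv allow` once after creating or
# editing it; direnv then exports this whenever you cd into the repo.
# "my-team-dev" is a placeholder profile name.
export AWS_PROFILE=my-team-dev
```

The nice property is that the profile name lives in the repo next to the scripts that depend on it, so the pairing travels with `git clone` instead of with a copied dotfile.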