Hacker News

I love this approach, although I've yet to work anywhere that does this.

I guess the million (thousand?) dollar questions now become: where do you draw the boundaries between accounts? Presumably there are many bad ways to slice accounts up. And what happens when accounts do need to communicate?

I can imagine three major scenarios for cross account permissions:

* Cross-account IAM policies (painful in my experience)

* Adding trust policies to enable cross-account role assumption (better, but limited)

* Limiting cross-account policies to specific, easy-to-configure services, e.g. S3 (the best option I've seen, but maybe too limited)
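For the second option, a minimal sketch of a trust policy that lets principals in another account assume a role (the account ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The consuming account still needs an IAM policy allowing sts:AssumeRole on the role's ARN, so access has to be granted on both sides.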

Would love to hear the author's view.



> Would love to hear the author's view.

Instead you can hear AWS's view, which is to have one account per stage per region per service.

I can't find a source but I work for Amazon and this is what was recommended to us by ProServe (the contracting branch of AWS) when we talked with them.

I think it's idiotic though (because regions are 100% separated within an account, and it would easily triple the number of accounts to manage), and so did my team, so we stuck with one account per stage per service.

That said, cross-account permissions are really not an issue; they're very easy and straightforward to set up. You also shouldn't need them in 90% of cases if your application is properly split with the right ownership for each microservice.

For my current team we manage probably more than a thousand AWS accounts, and permissions are never an issue. Neither is anything else actually. We aggregate metrics in a single account for the stuff that needs to be aggregated, we have small CLI scripts that automate tedious steps like requesting limit increases, etc.


One example of why you might want one account per region:

You regionalize data (e.g., US data in the US, EU data in the EU) and you want to be able to show (and enforce) separation for compliance or security reasons. You might even take that further, and have multiple accounts per region to create “cells” that correspond to segments of that region.

Disclosure: I worked on Amazon's tax pipelines.


> I think it's idiotic though (because regions are 100% separated within an account, and it would easily triple the number of accounts to manage), and so did my team, so we stuck with one account per stage per service.

The benefit to one-region-per-account is for any tool that needs to do broad scanning of an account. Running something like aws-nuke is much faster if you know that resources were only ever created in one region, and you know this because you have an SCP restricting the account to that one region.

If you have an application that is intentionally multi-region though, sure, feel free to violate that principle if it simplifies management for the application team; just still ensure you have the SCP in place to restrict to only those regions which are needed.
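A sketch of such a region-restricting SCP, based on the documented aws:RequestedRegion condition key (the approved region and the list of exempted global services are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegion",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "organizations:*",
        "route53:*",
        "support:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "aws:RequestedRegion": "us-east-1" }
      }
    }
  ]
}
```

The NotAction carve-out matters because global services are served out of specific regions and would otherwise be blocked.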


There are global quotas per account that can sneak up on you if you have too many services running in the same account albeit in different regions. DynamoDB total read and write capacity comes to mind.


> Instead you can hear AWS's view, which is to have one account per stage per region per service.

This is exactly correct.

> I think it's idiotic though (because regions are 100% separated within an account

But still bound to the same service limits, no?


> But still bound to the same service limits, no?

I'm not sure what you're referring to (pricing or service quotas? I'll assume the latter), but I think it depends on the service. For some services the quota is per region, and for others it's global.

One I know is regional for sure is the limit on reserved concurrent executions of Lambda functions.


Service limits for regional services like EC2 are regional.

Service limits for global services like Organizations appear to only be manageable from us-east-1.


I'd say application boundaries at a high level. Your various environments should be completely isolated. For instance, you can split web services at the load balancer level.

Application boundaries tend to mirror org structure so that allows you to scope down team access.

To get a better idea of your architecture, you can create a dependency diagram and look for clusters of things.

As for connectivity, you could go over the internet, use VPC peering, use Transit Gateways or PrivateLink, or follow a hub-and-spoke network architecture with a "network account".

If you're using managed services, you can also use things like SNS and SQS to create shareable communication channels for other accounts to use.
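For instance, a sketch of an SQS queue policy granting another account send access (account IDs, region, and queue name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111111111111:shared-queue"
    }
  ]
}
```

Because the grant lives in a resource policy on the queue itself, the producing account doesn't need any cross-account role plumbing, just a matching IAM allow on its side.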


Author here: I have settled on account per service per environment with a couple of exceptions.

Sometimes I run multiple services in one account if they're so tightly coupled as to be useless as a group if any one is down. (This has practically come up when two services are co-designed to multiplex TCP connections to support tens of millions of clients.)

Sometimes I run a single stateless production service in two accounts and route 10% of traffic to the canary account and 90% to the other one.
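The author doesn't say how the 10/90 split is routed; one common way, as a sketch, is Route 53 weighted records (names and targets below are placeholders, not the author's setup):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "CNAME",
        "SetIdentifier": "canary",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "canary-lb.example.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "CNAME",
        "SetIdentifier": "stable",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "stable-lb.example.com" }]
      }
    }
  ]
}
```

Route 53 distributes queries in proportion to the weights, so 10 vs. 90 yields roughly a 10% canary share.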



