AWS S3 beginning to apply 2 security best practices to all new buckets by default (amazon.com)
54 points by nixcraft on April 9, 2023 | 35 comments



They also default to encrypting objects now.

https://aws.amazon.com/blogs/aws/amazon-s3-encrypts-new-obje...


I'm a Platform Engineer but I've never actually understood (or really looked into) what they mean by encryption in this context.

I checked that link and they call it "server-side encryption" -- do they mean encryption at rest? If so, it seems terrible that they would allow data to live unencrypted on their non-volatile storage. Why do they even give the customer that option?

Other than for very specific kinds of customers that want to supply their own encryption keys, why should it even be their concern? The whole point of cloud is to take the hardware away. Encryption is cheap, just do it.


Yes, it is just encryption at rest, and their API access mechanisms automatically apply the encrypt or decrypt action on your behalf. If your access is leaky, encryption doesn't buy you anything.
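
For illustration, roughly what that looks like with boto3 (bucket and key names here are placeholders): the encrypt on write and decrypt on read are invisible to the caller, which is exactly why it doesn't help if your access controls are leaky.

    import boto3

    s3 = boto3.client("s3")

    # Write: S3 encrypts the object at rest (SSE-S3 here; SSE-KMS is another option).
    s3.put_object(
        Bucket="example-bucket",   # placeholder
        Key="report.csv",
        Body=b"hello",
        ServerSideEncryption="AES256",
    )

    # Read: decryption happens server-side; any caller allowed s3:GetObject
    # simply gets plaintext back.
    obj = s3.get_object(Bucket="example-bucket", Key="report.csv")
    print(obj["Body"].read())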

I take special offense at teams who store Terraform state in S3 and claim "it's encrypted" when other users in the same AWS account can easily access the bucket's contents.

GCS is slightly more secure, but you really need client-side encryption to be safest.


May I ask what the proper way to store Terraform state is? We are currently testing out Terraform at my job and it just uses an S3 bucket with encryption turned on. Thanks


Oh we do, too. My beef with it is how easy it is for a user on the account to go and read your state due to a lax IAM or bucket policy.

My advice: check and make sure the bucket policy you use for the state has an explicit deny (resource *, principal *), and then explicitly allow only the user / role that requires access to the TF state.
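
A rough boto3 sketch of that pattern (the bucket name and role ARN are placeholders). One wrinkle: an explicit deny always wins over an allow, so an unconditional deny on principal * would also block the role you want; the deny needs a condition carving out the state role.

    import json
    import boto3

    BUCKET = "example-tf-state-bucket"                        # placeholder
    STATE_ROLE = "arn:aws:iam::111122223333:role/terraform"   # placeholder

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            # Deny everyone except the Terraform role. Careful: this blocks
            # every other principal in the account, including admin roles.
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{BUCKET}",
                    f"arn:aws:s3:::{BUCKET}/*",
                ],
                "Condition": {"ArnNotEquals": {"aws:PrincipalArn": STATE_ROLE}},
            },
            # Explicitly allow only the state role.
            {
                "Effect": "Allow",
                "Principal": {"AWS": STATE_ROLE},
                "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
                "Resource": [
                    f"arn:aws:s3:::{BUCKET}",
                    f"arn:aws:s3:::{BUCKET}/*",
                ],
            },
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))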

Things to watch out for are providers that store sensitive info in your state. For example, if you use Vault and you read a secret out of Vault with Terraform then the secret will be saved in your Terraform state which, painting with broad strokes, largely invalidates the purpose of Vault. Lots of providers do this, some are getting better about not requiring sensitive info to be saved in the state or included in the config.


I wrote a blog detailing what this change means for S3 ACLs and Block Public Access being on by default: https://www.cloudquery.io/blog/finding-enabled-s3-acls-and-d...
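
For anyone who just wants a quick local check before reading the post, a minimal boto3 sketch of spotting public ACL grants on a bucket (the bucket name is a placeholder):

    import boto3

    # ACL grants to these predefined groups expose the bucket to everyone
    # on the internet, or to any authenticated AWS account.
    PUBLIC_GROUPS = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket="example-bucket")  # placeholder
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            print("public ACL grant:", grant["Permission"])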


What is the alternative to ACLs? Or is reading from users / roles in the same project supported by default, provided the user / role has the required permissions?


Policies are the alternative to ACLs. You craft your IAM user/role policies or your bucket's resource policy to allow and deny the appropriate access. ACLs are a totally separate subsystem of S3 that should have been deprecated long ago.
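
For what it's worth, the switch that turns ACLs off outright on a bucket (and which new buckets now default to) is the object-ownership setting. A boto3 sketch, with a placeholder bucket name:

    import boto3

    s3 = boto3.client("s3")

    # "BucketOwnerEnforced" disables ACLs entirely: the bucket owner owns every
    # object and access is decided by IAM and bucket policies alone.
    s3.put_bucket_ownership_controls(
        Bucket="example-bucket",  # placeholder
        OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
    )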


Thankfully I’ve never been charged with keeping any serious PII in an S3 bucket, because the permissions have always worried me, even though I’d probably be considered an expert with IAM policies.

Thankfully with S3 it’s getting easier and easier to do the right thing. I’m glad for the S3 team moving in this direction.


What is funny is that although they have been phasing out S3 ACLs for years, they are still using them for their own products. For example, Control Tower uses S3 ACLs to secure access to S3 buckets with logs.



Given how many major data breaches have been the result of unintentional public access to organizations' data on S3, I almost think Amazon should remove all public access to buckets and objects from the entire S3 product.

Instead make all access to S3 be through credentialed access or signed URLs. If users need to expose an entire bucket to the public Internet, make them go to the effort to put a service in front of the bucket.
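
Pre-signed URLs at least are already nearly free to generate; a boto3 sketch with placeholder names:

    import boto3

    s3 = boto3.client("s3")

    # Time-limited URL backed by the caller's credentials; no public bucket needed.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-bucket", "Key": "report.pdf"},  # placeholders
        ExpiresIn=3600,  # seconds
    )
    print(url)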

Yes, this would be a huge change. But playing with the default values for S3 seems like too little, too late.


Please no. Having S3 (with or without CloudFront) able to act as your 'serverless' webserver is one of the very strong points of the service.

Before putting important info on the machines of a cloud provider, hire people that know how the service works. Yes, this costs money, but so does the 'solution' you are providing in your comment, and it can be equally open and leaky. In particular, if the current team cannot secure a bucket, they will not be able to secure a proxy in front of it.


You could still do that with Cloudfront. It's simple to configure but makes it a deliberate action and not something that could be done accidentally.


> You could still do that with Cloudfront.

You can also expect users to audit what they do with S3, which is something they should already be doing, and that does not force anyone to suddenly refactor their whole deployment workflow.

It makes absolutely no sense at all to argue that public access to S3 should be shut off and those who use it should start paying for CloudFront just because you feel that it's too hard to check if your bucket is set to grant public access.

Also, there's already AWS Config for those who feel they need guardrails to enforce a specific configuration. AWS Config even lets you put together a Lambda to switch off public access to a bucket if some sloppy fingers inadvertently turn it on.

https://aws.amazon.com/config/
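
To give a flavor of what such a remediation could look like, here is a rough Lambda handler sketch that re-applies the public access block. The exact event shape depends on how the Config rule and remediation are wired up, so the bucketName lookup below is an assumption:

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Assumes the remediation passes the offending bucket's name in the
        # event; adjust to however your Config rule delivers it.
        bucket = event["bucketName"]
        s3.put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
        return {"remediated": bucket}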

It boggles the mind how anyone could suggest with a straight face that your personal use cases should be automated away even though it screws over everyone else using it.


> It boggles the mind how anyone could suggest with a straight face that your personal use cases should be automated away even though it screws over everyone else using it.

Should the orgs that have exposed private data in S3 to the public internet have been using AWS Config and AWS CloudTrail and properly scoped IAM roles and eaten their broccoli and flossed their teeth every night? Yes, they should have. But they did not.

These problems continue to happen, even as AWS adds even more warning boxes that say "Do you really want to open this bucket/object to the public internet? Type 'SHOOT ME' to continue..."

In the grand scheme of things, I imagine that those data exfiltrations from corporate and government customers are a larger net negative for S3 than the positive of the fact that I can wire up my JAM stack static site to an S3 bucket and pay $0.0001 per month to host it on the public internet (rather than paying $0.0002 per month for S3 + CloudFront or S3 + Lambda).

In any case, I'm not the AWS general manager of S3, so there's no need to worry about my comments on this website affecting anyone's use of S3 :)


> Yes, they should have. But they did not.

That's their problem.

I fail to see how anyone could think that an intelligent answer to that issue would be "let's prevent everyone from serving static content from S3, especially those serving their sites exactly like it's advertised in the S3 how-to guides", especially considering that solutions like AWS Config were simply ignored.

> These problems continue to happen

I want to continue to serve my sites from S3.

How is that my problem? Why should anyone else's oversight or even outright incompetence stop me from using things right?

> In the grand scheme of things, I imagine that those data exfiltrations from corporate and government customers is a larger net negative for S3 (...)

Not my problem, and not S3's problem. Why are you pretending it is?

AWS is very clear in their shared responsibility model. You, as the customer, need to have your shit together. If you do not, that's your problem. Not AWS's, and especially not mine.

Read the manual. Learn from your mistakes. Don't drag everyone else down just because you failed to read the freaking intro tutorials. And own up to your mistakes.


That's quite the rebuttal to an argument I didn't make.

I was neither supporting nor opposing the idea of disabling the ability to use S3 publicly. While I agree with the new default I would not support removal of the feature.


Have you seen what you have to do to make a bucket public nowadays? It's not as if you do it by accident (there are a couple of warnings, etc.). Yes, buckets created with the API don't have public access blocked, so if someone who does not fully grasp cloud security is given access to create buckets ... Luckily this is now fixed with the latest round of changes done by AWS.


Doesn't Cloudfront add extra fees on top of it?


You pay egress fees either way but the Cloudfront fees are lower.

I believe Cloudfront also still has a free tier which cuts out egress fees on the first X GB of data.


Most of the time it doesn't. Their free tier is very generous. You get the first 2TB of traffic for free (vs 100GB of free traffic if you use only S3). Traffic above this is also cheaper than the price you pay for traffic directly from S3.


AWS already makes it a massive headache with all sorts of warning flags if you try to make an object public, and this change will make it even more obvious.

S3 is great for hosting static websites for pennies a month and zero work required to configure nginx etc. There's a huge use case for allowing public access to data in S3 buckets.
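
The setup really is minimal; a boto3 sketch of turning a bucket into a website endpoint (names are placeholders, and the objects still need a policy granting public s3:GetObject for the site to actually be served):

    import boto3

    s3 = boto3.client("s3")

    # Serve the bucket from its static-website endpoint.
    s3.put_bucket_website(
        Bucket="example-site-bucket",  # placeholder
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )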


Please don't do this. I've used S3 to host static web pages and adding an additional service would add complexity and cost. Also, I'll point out that even a couple of years ago, AWS made it rather uncomfortable for me to expose pages to the public. IIRC, each time I changed the html, I needed to also remind AWS that I wanted the file public.


Just put CloudFront in front of it. Then you get to use your own domain.


Using your own domain doesn't require CloudFront [0].

[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/websit...


It does if you want HTTPS.

> Amazon S3 does not support HTTPS access to the website. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.


You don't need CloudFront to use your own domain. You can do it with buckets alone too, but the bucket name has to match the domain.


You do if you want HTTPS...

> Amazon S3 does not support HTTPS access to the website. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.


Have you looked at R2? You don’t need to put a full service in front. You can set up any custom domain you want through Cloudflare in front of it and then manage access policies through the zone and Access. It would be nice for the default to be that when you make a bucket public, all objects are still inaccessible until you set up explicit policies (i.e. allow all access, or access only to specific objects). That may be too onerous though, as the most common use case appears to be making entire buckets public. You want it easy to follow good security policies without making it a hoop-jumping exercise. It’s a difficult problem (not sure why it took them so long to make this particular change though).

You can of course make things public via a secret managed r2.dev URL (it’s a new UUID every time you make a public bucket) for testing and comparing against access via your zone if debugging. But we discourage it slightly in the first place (if I recall correctly it’s a more hidden/de-emphasized option in the UI flow for setting up public access) as it’s really only intended for testing: it’s a managed service and we may make functional changes to its behavior at any time.

I’m not trying to crap on S3 or anything. They have a much older codebase and larger number of customers to deal with. I’m just highlighting you can recognize that public buckets are an extremely common use case and it’s possible to do better I think without adding a lot of complexity.

Disclaimer: worked on R2


> If users need to expose an entire bucket to the public Internet, make them go to the effort to put a service in front of the bucket.

This would defeat the purpose of using S3 to serve static content, which is one of the main use cases for S3.


Not needing to run a separate service is pretty much the entire reason I use S3. I've got some low-traffic web sites on there that "just work". I don't have to screw around with keeping Apache or Express or some other web server up to date or otherwise managing them, nor do I have to worry about the sites either a) falling over from an unusual traffic spike or b) massively overprovisioning them to prevent them from falling over from an unusual traffic spike.


S3 seems to be one of the most entry level AWS products. I see non-engineers using it all the time.

I think this screams, “these people need to be saved from themselves!” But it also suggests why S3 retains so many features that are more “properly” done through a suite of other cloud products.


The initial default behavior for S3 was public access. Now requiring already created buckets to have default private-read will probably - almost literally and with no hint of sarcasm - break the internet.

New buckets, sure, whatever, but don't touch the old stuff.


irc -> mastodon

pop -> pop tls

ldap -> ldaps

ftp -> scp

http -> https (ssl -> tls)



