
Glacier has additional fees depending on how fast you retrieve the data. Honestly, I'm not sure what they are because I've never had to use it, but my impression is that they could be quite high.



They can be. It's possible to set a retrieval policy on your bucket, either at the free tier or at a fixed price point, so you can control the cost of your final bill if you want to retrieve faster than the free tier allows.


At least with Glacier I get the option of retrieving faster. Nearline is going to take more than 3 DAYS to retrieve my data if I only have 1TB stored (3MB/s).
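
For reference, the arithmetic behind that figure, as a quick Python sketch; the ~3 MB/s of throughput per TB stored is the number quoted here, not an official figure:

    # Nearline retrieval time for 1 TB, assuming ~3 MB/s per TB stored
    # (the figure quoted in this thread, not an official number).
    stored_tb = 1
    throughput_mb_s = 3 * stored_tb        # MB/s, scales with data stored
    data_mb = stored_tb * 1_000_000        # 1 TB = 1,000,000 MB (decimal)

    seconds = data_mb / throughput_mb_s
    print(f"{seconds / 3600:.1f} hours = {seconds / 86400:.1f} days")
    # -> 92.6 hours = 3.9 days, i.e. "more than 3 days"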


Let's assume that with Glacier you can retrieve your 1000 GB in 4 hours instead of 3 days.

You will be surprised by a retrieval fee of 1,000 GB / 4 hours * 720 hours/month * $0.01/GB = $1,800, because Glacier bills your peak hourly retrieval rate as if you had sustained it for the entire month.

If you want to get your 1000 GB for free, you'd have to request no more than 69 MB per hour. It will take you about 20 months to get your data!

With Glacier, limit retrievals to less than 5% per month (5% / 30 days / 24 hours = ~0.007% of total data stored per hour) to avoid these ugly surprises.

Source: http://aws.amazon.com/glacier/faqs/#How_much_data_can_I_retr...
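
Put another way, here is a small sketch of the legacy fee formula described above (peak hourly retrieval rate, billed at $0.01/GB, multiplied by the ~720 hours in a month); the numbers are the ones quoted in this thread, not current AWS pricing:

    # Legacy Glacier retrieval-fee model as described in this thread.
    def glacier_retrieval_fee(total_gb, hours, price_per_gb=0.01, hours_in_month=720):
        peak_gb_per_hour = total_gb / hours
        return peak_gb_per_hour * hours_in_month * price_per_gb

    print(glacier_retrieval_fee(1000, 4))          # 1000 GB in 4 hours -> 1800.0

    # Free tier: 5% of stored data per month, prorated per hour.
    stored_gb = 1000
    free_gb_per_hour = stored_gb * 0.05 / 30 / 24
    print(f"{free_gb_per_hour * 1000:.0f} MB/hour free")      # ~69 MB/hour
    print(f"{1 / 0.05:.0f} months to retrieve it all free")   # 20 months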


For many people waiting 3 days is more attractive than paying the ~$1000 that it would cost to retrieve it in one day from Glacier.


For disaster recovery 3 days might mean the end of your business.


For disaster recovery $1000 is peanuts, but that's a company thing. The GP mentioned 'people', not companies; it's a different use case.


I wrote a backup program that uses Glacier, and the retrieval policies are nearly impossible to manage and explain. But the thing I really don't get is that you can use Amazon Import to read a hard drive into Glacier, but you can't use Amazon Export to get a drive full of data back out. You can with S3.

In a disaster situation, a company is going to want hard drives sent to them next day. As others have mentioned, this isn't a money thing, it's a time issue. But it isn't available with Glacier (probably not with Google either...)


If you're willing to pay more for faster retrieval, I guess you could store e.g. 5 TB of random data to get a fast "base speed", cutting those 3 days down to about 12 hours.
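
To make that scaling explicit, a quick sketch; the per-TB throughput is an assumption (the comment further up uses about 3 MB/s per TB, while the 12-hour figure here implies roughly 4 MB/s per TB):

    # Nearline retrieval time vs. total data stored, since throughput is
    # provisioned per TB stored. The per-TB rate is an assumed parameter.
    def retrieval_hours(real_tb, padding_tb, mb_s_per_tb):
        throughput_mb_s = (real_tb + padding_tb) * mb_s_per_tb
        return real_tb * 1_000_000 / throughput_mb_s / 3600

    print(retrieval_hours(1, 0, 3))   # ~93 hours: 1 TB alone, "more than 3 days"
    print(retrieval_hours(1, 5, 4))   # ~12 hours: 1 TB real + 5 TB padding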


Or use Nearline for archival data as well as disaster recovery. You won't need to transfer the archival data in a normal disaster recovery scenario, so for that scenario it acts as extra stored data that boosts your speed, while still being a useful use of the service in its own right.


That would be more expensive than just storing 1 TB with the normal, full-speed Cloud Storage service.
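
A rough check, using list prices from around the time of this discussion (assumed here: Nearline at about $0.01/GB/month and standard Cloud Storage at about $0.026/GB/month; both are assumptions, not current pricing):

    # Monthly storage cost: padded Nearline vs. standard Cloud Storage.
    NEARLINE_PER_GB = 0.01    # assumed ~2015 list price, $/GB/month
    STANDARD_PER_GB = 0.026   # assumed ~2015 list price, $/GB/month

    padded_nearline = 6_000 * NEARLINE_PER_GB   # 1 TB real + 5 TB padding
    standard = 1_000 * STANDARD_PER_GB          # 1 TB at full speed

    print(f"Padded Nearline: ${padded_nearline:.0f}/month")   # $60/month
    print(f"Standard:        ${standard:.0f}/month")          # $26/month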



