Hacker News

It's safe to assume this is going to be just like Glacier and S3/Google Storage, i.e., unlimited.

Also, retrieval speeds increase with your data set size. From the docs: "You should expect 4 MB/s of throughput per TB of data stored as Nearline Storage. This throughput scales linearly with increased storage consumption. For example, storing 3 TB of data would guarantee 12 MB/s of throughput, while storing 100 TB of data would provide users with 400 MB/s of throughput."
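A minimal sketch of that scaling rule, assuming the quoted 4 MB/s-per-TB figure holds exactly (the function name is my own, not a Google API):

```python
def nearline_throughput_mbps(stored_tb: float) -> float:
    """Expected Nearline read throughput in MB/s, given TB stored.

    Implements the linear rule quoted above: 4 MB/s per TB stored.
    This is an illustration of the docs excerpt, not an official API.
    """
    return 4.0 * stored_tb

# The two examples from the quoted docs:
print(nearline_throughput_mbps(3))    # 3 TB  -> 12.0 MB/s
print(nearline_throughput_mbps(100))  # 100 TB -> 400.0 MB/s
```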




I was just curious what these storage systems are capable of supporting. While I'm confident they could all store 10 TB of data (that's just barely Tier 2 of 6 for Amazon), I'm wondering if they have the back-end capacity to store 10 PB of data.





