It's safe to assume this is going to be just like Glacier and S3/Google Storage, i.e., effectively unlimited.
Also, retrieval speeds increase with your data set size. From the Nearline documentation: "You should expect 4 MB/s of throughput per TB of data stored as Nearline Storage. This throughput scales linearly with increased storage consumption. For example, storing 3 TB of data would guarantee 12 MB/s of throughput, while storing 100 TB of data would provide users with 400 MB/s of throughput."
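The linear scaling in that note is simple enough to sketch; here's a minimal Python illustration (the function name is mine, the 4 MB/s-per-TB constant is from the quoted documentation):

```python
# Nearline read throughput scales linearly: 4 MB/s per TB stored
# (per the quoted Google Cloud Storage Nearline documentation).
NEARLINE_MBPS_PER_TB = 4

def nearline_throughput_mbps(stored_tb: float) -> float:
    """Expected aggregate read throughput in MB/s for a given amount stored, in TB."""
    return NEARLINE_MBPS_PER_TB * stored_tb

# The two examples from the quoted note:
print(nearline_throughput_mbps(3))    # 3 TB   -> 12 MB/s
print(nearline_throughput_mbps(100))  # 100 TB -> 400 MB/s
```

Note this is the expected aggregate read rate, so at 10 PB the same formula would suggest on the order of 40 GB/s, assuming the linear scaling actually holds at that size.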
I was just curious what these storage systems are capable of supporting. While I'm confident they could all store 10 TB of data (that's just barely into the second of Amazon's six pricing tiers), I'm wondering if they have the back-end capacity to store 10 PB of data.