I wrote a tool to handle micro blobs specifically because we were being heavily rate limited by S3 on both writes and reads. We got to about 3k requests/s per bucket before S3 rate limiting started kicking in hard.
Granted, we also used said tool to bundle objects together in a way that required zero state to track, so we could fetch them back cheaply and efficiently as needed; it wasn't a pure S3 issue.
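For anyone curious, here's a minimal sketch of that kind of bundling, not the actual tool: if the records are fixed-size, the offset of any blob is a pure function of its index, so you can retrieve it with a ranged GET and no index to maintain. Bucket name, key layout, and blob size are all made up for illustration.

```python
# Sketch: bundle fixed-size micro blobs into one S3 object so offsets are
# computable with zero external state, then fetch individual blobs with
# ranged GETs. All names/sizes here are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"   # hypothetical bucket
BLOB_SIZE = 4096            # assumption: fixed-size records

def put_bundle(key: str, blobs: list[bytes]) -> None:
    # Pad each blob to BLOB_SIZE and concatenate; one PUT instead of N.
    body = b"".join(b.ljust(BLOB_SIZE, b"\0") for b in blobs)
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)

def get_blob(key: str, index: int) -> bytes:
    # Offset is computed from the index, so no lookup table is needed.
    start = index * BLOB_SIZE
    end = start + BLOB_SIZE - 1
    resp = s3.get_object(Bucket=BUCKET, Key=key, Range=f"bytes={start}-{end}")
    return resp["Body"].read().rstrip(b"\0")
```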
Interesting, thanks! PUT is advertised at 3,500/s per prefix, so with a combined read/write load you were at least within range of the advertised limits. I haven't approached that scale, so I didn't know; it was a real question!
Yeah, I was processing a bunch of Iceberg catalog data, and it was pretty trivial to get to this point on both PUTs and GETs with our data volume. I was doing about 400,000 requests/min (roughly 6,700/s), and of course my testing was writing to a single prefix :)
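Since the published limits apply per prefix, the usual workaround is to shard keys across hash prefixes so no single prefix stays hot. A rough sketch of that idea below; the shard count and key names are illustrative, not from the tool above.

```python
# Sketch: spread writes across hash prefixes so per-prefix S3 request limits
# apply per shard instead of to one hot prefix. Shard count is an assumption.
import hashlib

NUM_SHARDS = 16  # hypothetical; tune to your request rate

def sharded_key(logical_key: str) -> str:
    shard = int(hashlib.md5(logical_key.encode()).hexdigest(), 16) % NUM_SHARDS
    return f"{shard:02x}/{logical_key}"

# e.g. "manifest/0001.avro" -> "0b/manifest/0001.avro"
print(sharded_key("manifest/0001.avro"))
```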