
Lots of weird comments here. I’ve done this solo with a 200 TB bucket in about a day. A bare “aws s3 cp” obviously isn’t going to cut it.

They had the unusual requirement that they also needed to empty the source bucket, which makes things slightly more complex. You’d write a small Lambda that copies an object to the new bucket and then deletes it, and invoke it with an S3 Batch Operations job, as sketched below.
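A minimal sketch of that Lambda, assuming the Batch Operations Lambda invocation schema 1.0 and a placeholder destination bucket (objects over 5 GB would need a multipart copy, e.g. boto3’s managed “copy” instead of “copy_object”):

    import boto3
    from urllib.parse import unquote_plus

    s3 = boto3.client("s3")
    DEST_BUCKET = "destination-bucket"  # placeholder

    def handler(event, context):
        # Batch Operations passes one task per invocation; keys are URL-encoded.
        task = event["tasks"][0]
        src_bucket = task["s3BucketArn"].split(":")[-1]
        key = unquote_plus(task["s3Key"])
        try:
            s3.copy_object(Bucket=DEST_BUCKET, Key=key,
                           CopySource={"Bucket": src_bucket, "Key": key})
            s3.delete_object(Bucket=src_bucket, Key=key)
            result = {"taskId": task["taskId"], "resultCode": "Succeeded",
                      "resultString": ""}
        except Exception as exc:
            result = {"taskId": task["taskId"], "resultCode": "PermanentFailure",
                      "resultString": str(exc)}
        # Batch Operations records each per-object result in the job's
        # completion report, so failures are easy to retry.
        return {"invocationSchemaVersion": "1.0",
                "treatMissingKeysAs": "PermanentFailure",
                "invocationId": event["invocationId"],
                "results": [result]}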

If you can relax the empty requirement, then you’d just use a Batch Operations copy job without needing a Lambda, and set a lifecycle policy to delete the objects. You’d need a bit of manual work to copy and delete objects created since the last inventory ran, but that is fairly simple.
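Setting the lifecycle rule is a one-off call; a sketch with placeholder names (note that lifecycle expiration runs on a daily cycle, so the bucket won’t empty instantly):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="source-bucket",  # placeholder
        LifecycleConfiguration={"Rules": [{
            "ID": "expire-after-migration",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to the whole bucket
            "Expiration": {"Days": 1},  # minimum expiration is 1 day
        }]},
    )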

You can use a tool like rclone, but that’s still going to be quite slow compared to a batch operation, especially if you have a large number of small files.

Alternatively, you could set up replication between the buckets (a live replication rule only covers new writes; existing objects need S3 Batch Replication) and handle the deletion within the migration window. The DeleteObjects API accepts up to 1,000 keys per call, so emptying a bucket is pretty fast.
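A sketch of that deletion loop, with a placeholder bucket name (single-threaded here; in practice you’d fan out across prefixes):

    import boto3

    s3 = boto3.client("s3")
    SRC_BUCKET = "source-bucket"  # placeholder

    # list_objects_v2 returns up to 1,000 keys per page, which matches
    # the 1,000-key limit of delete_objects.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SRC_BUCKET):
        contents = page.get("Contents", [])
        if not contents:
            continue
        s3.delete_objects(
            Bucket=SRC_BUCKET,
            Delete={"Objects": [{"Key": o["Key"]} for o in contents],
                    "Quiet": True},
        )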

Or, lastly, you’d run “aws s3 cp” with the CLI’s concurrency turned up, on an EC2 instance in the same region.
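The relevant knobs live in ~/.aws/config; something like this (the numbers are just a starting point to tune; the defaults are 10 concurrent requests and a queue of 1,000):

    [default]
    s3 =
      max_concurrent_requests = 256
      max_queue_size = 10000

Then run “aws s3 cp s3://source-bucket s3://destination-bucket --recursive” (bucket names are placeholders). For bucket-to-bucket paths the CLI issues server-side copies, so the instance mostly just drives API calls rather than moving the bytes itself.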



