One of the biggest things that keeping audit records in your DB gives you is transactionality around your audit logs. Sending audit events to an external system quite often loses this, and the engineering effort to get that guarantee back once you actually need it costs far more than a slightly larger AWS/GCP/Azure/<insert-cloud-provider-here> bill.
We're implementing something similar to what OP describes, but we keep the "queue" in the DB so the application audit event is inserted in the same transaction as the data change it describes. A background process then uploads the events to secondary storage.
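Roughly this shape, as a minimal sketch (SQLite just to keep it self-contained; the table and column names are made up for illustration):

```python
import json
import sqlite3

# Hypothetical schema: a business table plus an audit table whose payload
# blob gets offloaded later by the background process.
conn = sqlite3.connect("app.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS invoices (id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE IF NOT EXISTS audit_events (
    id INTEGER PRIMARY KEY,
    entity_key TEXT NOT NULL,     -- e.g. "invoice:42", used for partial-key lookups
    payload BLOB,                 -- cleared once uploaded to secondary storage
    processed INTEGER DEFAULT 0   -- set to 1 after upload
);
""")

def update_invoice_with_audit(invoice_id: int, new_total: float) -> None:
    """Apply the data change and write the audit event in one transaction:
    either both commit or neither does."""
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE invoices SET total = ? WHERE id = ?",
                     (new_total, invoice_id))
        conn.execute(
            "INSERT INTO audit_events (entity_key, payload) VALUES (?, ?)",
            (f"invoice:{invoice_id}", json.dumps({"total": new_total}).encode()),
        )
```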
We won't have billions of rows though, so once uploaded to secondary storage we'll just clear the blob field and set a "processed" flag.
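Continuing the sketch above, the background step might look something like this (`upload` stands in for whatever S3/GCS/etc. client you use, so it's an assumption, not a real API):

```python
def drain_audit_queue(upload) -> None:
    """Push unprocessed payloads to secondary storage, then clear the blob
    and mark the row processed. Upload happens before the UPDATE, so a crash
    in between can only cause a re-upload, never a lost event."""
    rows = conn.execute(
        "SELECT id, entity_key, payload FROM audit_events WHERE processed = 0"
    ).fetchall()
    for row_id, entity_key, payload in rows:
        upload(f"audit/{entity_key}/{row_id}", payload)
        with conn:
            conn.execute(
                "UPDATE audit_events SET payload = NULL, processed = 1 WHERE id = ?",
                (row_id,),
            )
```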
This way we can quickly find all the relevant keys for a given order, invoice, etc. with a partial-key search in the database, and transparently fetch each event either from the DB directly or from secondary storage as needed.
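The read path then only has to branch on the processed flag; a sketch under the same assumptions as above (`download` is again a hypothetical secondary-storage client):

```python
def fetch_audit_events(key_prefix: str, download):
    """Partial-key search, then pull the payload from the DB if it's still
    there, or from secondary storage if the row has been offloaded."""
    rows = conn.execute(
        "SELECT id, entity_key, payload, processed FROM audit_events "
        "WHERE entity_key LIKE ?",
        (key_prefix + "%",),
    ).fetchall()
    for row_id, entity_key, payload, processed in rows:
        blob = download(f"audit/{entity_key}/{row_id}") if processed else payload
        yield entity_key, blob

# e.g. everything audited for invoice 42, wherever it currently lives:
# list(fetch_audit_events("invoice:42", download))
```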