Ideally you have a reliable Change Data Capture (CDC) mechanism such as a Binlog Reader. Debezium, for example, can write changes directly to a queue like Kafka, and a Kafka consumer picks up the events and writes them to your secondary datastore. A setup like that can probably handle all of your events without bucketing them, but if you want to cut down the number of messages written to the queue, you can add that logic to the Binlog Reader so it emits a burst every 5 seconds or so. During those 5 seconds it buffers the messages either in the process's memory or externally in something like Redis, keyed by record so that only the latest message for a given record is stored.
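The buffering idea can be sketched roughly like this: a small in-memory buffer keyed by record, where later events overwrite earlier ones, flushed as a burst once the interval has elapsed. This is a hypothetical illustration, not Debezium's actual API; the class and method names are made up, and a real Binlog Reader would publish the flushed burst to Kafka (or swap the dict for Redis keys to buffer externally).

```python
import time
from threading import Lock

class DedupBuffer:
    """Buffers CDC events, keeping only the latest event per record
    key, and releases them as a burst every `interval` seconds.
    (Hypothetical sketch; the caller is responsible for writing the
    flushed burst to the queue.)"""

    def __init__(self, interval=5.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock            # injectable for testing
        self.buffer = {}              # record key -> latest event
        self.last_flush = clock()
        self.lock = Lock()

    def add(self, key, event):
        # Later events for the same key overwrite earlier ones,
        # so only the newest state of each record is retained.
        with self.lock:
            self.buffer[key] = event

    def flush_if_due(self):
        # Returns the buffered burst once the interval has elapsed,
        # otherwise None.
        with self.lock:
            if self.clock() - self.last_flush < self.interval:
                return None
            burst = self.buffer
            self.buffer = {}
            self.last_flush = self.clock()
            return burst
```

With this shape, three updates to the same record inside one window collapse into a single message, which is exactly where the reduction in queue traffic comes from. Using Redis instead of the dict gives you the same semantics across process restarts, at the cost of a network hop per event.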