Yes, we have plans to support PostgreSQL as a storage backend. At the moment, queue tuning parameters aren't exposed as configuration options; everything runs off a shared queue, and we expose Prometheus metrics for it. What use cases do you have in mind for tuning the queue?
I might not have enough context, but given your architecture, is it possible for the shared queue to back up when specific destination endpoints aren't accessible while workers are processing the queue? (I've seen this happen before with a product in an adjacent space.) If not, tuning probably isn't necessary, as long as temporarily undeliverable messages can be skipped and retried later without impacting queue throughput. That said, being able to select, group, and pause processing of queued messages in an ad hoc fashion ("tuning" was likely the wrong word, my apologies) is, in my experience, a powerful feature of webhook engines.
So essentially Convoy's queue mechanism works as a delayed job queue (basically how retries in webhooks work, e.g. Stripe will resend the event to you every hour for the next 72 hours). When the endpoint is inaccessible, it backs up and schedules the retries at a future time on the queue.
The implementation under the hood is a combination of Redis zset + Redis streams. :) When it maxes out retries, the events are still stored in the DB and you can batch retry them once the endpoints become accessible.
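For anyone curious what the zset half of that looks like, here's a rough sketch of the "delayed job" idea using a Redis sorted set, where the score is the timestamp an event should next be attempted. This is not Convoy's actual code; the key name, function names, and retry interval are all made up for illustration, and it leaves out the Redis streams side that handles live delivery.

```go
package main

import (
	"context"
	"fmt"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
)

// Hypothetical key name; Convoy's real key layout isn't shown in this thread.
const retryZSet = "webhook:retries"

// scheduleRetry places an event ID on a sorted set, scored by the time it
// should next be attempted. Nothing is consumed until its score (a Unix
// timestamp) is in the past, which is what makes the queue "delayed".
func scheduleRetry(ctx context.Context, rdb *redis.Client, eventID string, delay time.Duration) error {
	return rdb.ZAdd(ctx, retryZSet, redis.Z{
		Score:  float64(time.Now().Add(delay).Unix()),
		Member: eventID,
	}).Err()
}

// popDueRetries returns event IDs whose scheduled time has passed and removes
// them from the set so a worker can attempt delivery again.
func popDueRetries(ctx context.Context, rdb *redis.Client) ([]string, error) {
	now := strconv.FormatInt(time.Now().Unix(), 10)
	ids, err := rdb.ZRangeByScore(ctx, retryZSet, &redis.ZRangeBy{Min: "-inf", Max: now}).Result()
	if err != nil || len(ids) == 0 {
		return ids, err
	}
	members := make([]interface{}, len(ids))
	for i, id := range ids {
		members[i] = id
	}
	return ids, rdb.ZRem(ctx, retryZSet, members...).Err()
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Endpoint was unreachable: try this event again in an hour.
	_ = scheduleRetry(ctx, rdb, "event-123", time.Hour)

	due, _ := popDueRetries(ctx, rdb)
	fmt.Println("due for redelivery:", due)
}
```

The nice property here is the one described above: an unreachable endpoint doesn't block the queue, because its events just sit in the sorted set with a future score while everything else keeps flowing, and once retries are exhausted the events remain in the DB for a later batch retry.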