The best thing about using PostgreSQL for a queue is that you can benefit from transactions: enqueue a job in the same transaction that writes the related data, so the job only exists if that data is committed and it's impossible for the queue entry to go missing.
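A minimal sketch of that with psycopg2; the orders/jobs tables and their columns are hypothetical, just to show the shape:

```python
# Minimal sketch of transactional enqueueing; the orders/jobs tables
# and their columns are hypothetical, not from the comment above.
import json
import psycopg2

conn = psycopg2.connect("dbname=app")
with conn:  # one transaction: commits on success, rolls back on exception
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (customer_id, total) VALUES (%s, %s) RETURNING id",
            (42, 99.95),
        )
        order_id = cur.fetchone()[0]
        # Written in the same transaction, so the job row exists
        # if and only if the order row was committed.
        cur.execute(
            "INSERT INTO jobs (kind, payload) VALUES (%s, %s)",
            ("send_receipt", json.dumps({"order_id": order_id})),
        )
```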
On a quick read this seems like another name for Change Data Capture. In general the pattern works better if you can integrate it with the database's transaction log, since then you can't accidentally forget to publish something.
CDC is one of the mechanisms you can use to implement this if the message volume is high, but the idea is to decouple your business transactions from sending out notifications and to do the latter asynchronously.
You also benefit from having all of SQL available to manage the queue.
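For example, one-off queue surgery is just SQL; a sketch, assuming a hypothetical jobs table with status/attempts/run_after columns:

```python
# Ad-hoc queue management in plain SQL; the jobs table and its
# status/attempts/run_after columns are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app")
with conn, conn.cursor() as cur:
    # Requeue everything that failed fewer than 5 times.
    cur.execute(
        """UPDATE jobs
           SET status = 'queued', run_after = now()
           WHERE status = 'failed' AND attempts < 5"""
    )
    # Check queue depth per job kind.
    cur.execute(
        "SELECT kind, count(*) FROM jobs WHERE status = 'queued' GROUP BY kind"
    )
    for kind, depth in cur.fetchall():
        print(kind, depth)
```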
I built a system ages ago that had modest queue needs: maybe 100 jobs a day. It involved syncing changes in the local database with external devices. Many changes would ultimately update the same device, and minimizing the number of updates was important.
The system used an extremely simple schema: a table with something like [job_id, device, start_after, time_started, time_finished]
When queueing a job for $device, do an upsert to either insert a new record or bump the start_after of a not-yet-started job to now+5 minutes. When looking for a job to run, ignore anything with a start_after in the future.
As edits were made, it would create a single job for each device that would run 5 minutes after the last change was made.
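A sketch of that coalescing upsert; the original schema isn't shown, so names here are illustrative, and it assumes a partial unique index on pending jobs so the upsert has a conflict target:

```python
# Coalescing upsert sketch; table/column names are illustrative since the
# original schema isn't shown. Requires the partial unique index below so
# that ON CONFLICT has a target among not-yet-started jobs.
import psycopg2

conn = psycopg2.connect("dbname=app")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS device_jobs (
            job_id        bigserial PRIMARY KEY,
            device        text NOT NULL,
            start_after   timestamptz NOT NULL,
            time_started  timestamptz,
            time_finished timestamptz
        );
        -- At most one pending job per device.
        CREATE UNIQUE INDEX IF NOT EXISTS device_jobs_pending
            ON device_jobs (device) WHERE time_started IS NULL;
    """)

def enqueue(device):
    # Insert a new job, or push back the start time of the pending one.
    with conn, conn.cursor() as cur:
        cur.execute(
            """INSERT INTO device_jobs (device, start_after)
               VALUES (%s, now() + interval '5 minutes')
               ON CONFLICT (device) WHERE time_started IS NULL
               DO UPDATE SET start_after = now() + interval '5 minutes'""",
            (device,),
        )
# A picker then simply ignores rows with start_after > now().
```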
I know a lot of queueing systems have the concept of a delayed job, but I haven't come across any that have the concept of delayed jobs plus dedup/coalescence.
I also used a similar pattern in reverse to process incoming messages that had to be partially processed in order: one simple worker that dumped the queue into a Postgres table at full speed, and N workers that used a slightly more complicated SELECT ... FOR UPDATE SKIP LOCKED query than usual to pick up the next message that could be safely processed. Again, using a single database made transactions very simple: only the message-dumping worker had to worry about processing messages exactly once (which it did by being conservative with acking and using INSERT ... ON CONFLICT DO NOTHING); every other worker could just implicitly rely on database transactions.
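Roughly like this; a generic sketch with a hypothetical messages table, leaving out the extra in-order predicate described above:

```python
# Generic sketch of both workers; the messages table
# (id bigint PRIMARY KEY, body text, processed_at timestamptz) is
# hypothetical, and the extra "safe to process in order" predicate
# described above is omitted for brevity.
import psycopg2

conn = psycopg2.connect("dbname=app")

def dump(msg_id, body):
    # Dumping worker: idempotent insert, so a conservatively re-acked
    # (re-delivered) message is harmless.
    with conn, conn.cursor() as cur:
        cur.execute(
            """INSERT INTO messages (id, body) VALUES (%s, %s)
               ON CONFLICT (id) DO NOTHING""",
            (msg_id, body),
        )

def handle(body):
    ...  # application logic; placeholder

def process_next():
    # Processing worker: SKIP LOCKED lets N workers pick distinct rows
    # without blocking each other; the row lock is held until commit.
    with conn, conn.cursor() as cur:
        cur.execute(
            """SELECT id, body FROM messages
               WHERE processed_at IS NULL
               ORDER BY id
               LIMIT 1
               FOR UPDATE SKIP LOCKED"""
        )
        row = cur.fetchone()
        if row is None:
            return False  # nothing available (empty, or all locked)
        msg_id, body = row
        handle(body)
        cur.execute(
            "UPDATE messages SET processed_at = now() WHERE id = %s",
            (msg_id,),
        )
        return True
```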
This is so important if you want to avoid incredibly gnarly race conditions. In particular for us: jobs being run even before the transaction has been fully committed to the database.
We utilise a decorator for adding jobs to external queues, such that the function that does the addition gets attached to Django's "on transaction commit" signal and thus doesn't actually run until the outer database transaction for that request has been committed.
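Something along these lines; transaction.on_commit is the real Django hook, while the decorator and enqueue function are sketched:

```python
# Sketch of the decorator; django.db.transaction.on_commit is the real
# hook, the enqueue function and its queue client are hypothetical.
import functools
from django.db import transaction

def enqueue_after_commit(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Deferred until the surrounding transaction commits; if it
        # rolls back, the callback is discarded and nothing is enqueued.
        transaction.on_commit(lambda: func(*args, **kwargs))
    return wrapper

@enqueue_after_commit
def enqueue_job(queue_name, payload):
    ...  # push to the external queue (Redis, SQS, etc.)
```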
That is the simple but unreliable way to fix the issue. If your Python process crashes or is killed between sending the commit and enqueueing the job, the job will never be enqueued.
A possible solution to this is the "transactional outbox" pattern, but that has many of the same drawbacks as using Postgres as a queue.
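The write side of the outbox is just one more row in the business transaction; a sketch with hypothetical tables:

```python
# Transactional-outbox write side (sketch; the accounts and outbox
# tables are hypothetical). The event row commits atomically with the
# business change; a separate relay publishes it later.
import json
import psycopg2

conn = psycopg2.connect("dbname=app")
with conn, conn.cursor() as cur:
    cur.execute(
        "UPDATE accounts SET balance = balance - %s WHERE id = %s",
        (10, 1),
    )
    cur.execute(
        "INSERT INTO outbox (topic, payload) VALUES (%s, %s)",
        ("account.debited", json.dumps({"account_id": 1, "amount": 10})),
    )
```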
The original messaging middleware on which RabbitMQ (and other AMQP-capable message brokers) is based is IBM's WebSphere MQ, formerly known as MQSeries. MQ can be used in a transactional fashion depending on whether messages are persistent or can be timed out, and on other QoS settings. Within CICS, MQ can also participate in distributed transactions alongside database transactions, and this would be a typical way to do transactional message processing. And X/Open distributed transactions were among the earliest open-systems/Unix standards, dating from the mid-1980s, so they have been a well-understood architecture outside mainframes as well.
That it's beneficial to use Postgres messaging (or Oracle AQ or whatever) for its transactional semantics is kind of accidental, a consequence of folks not wanting to bother with distributed transactions (dtx). Even though databases are accessed over networks, truly scalable work distribution can't be achieved using SQL, much less with SQLite. Or in other words, if you're using message queues in databases, you could just as well use tables and row locks directly.
Brandur wrote a great piece about a related pattern here: https://brandur.org/job-drain
He recommends using a transactional "staging" queue in your database which is then written out to your actual queue by a separate process.
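The drain side then looks something like this sketch; publish() stands in for the real queue client, and delivery is at-least-once, since a crash between publishing and committing republishes the batch:

```python
# Drain worker sketch for the staging table; publish() is a placeholder
# for the real queue client. At-least-once: a crash between publish()
# and commit means the batch is published again on the next pass.
import time
import psycopg2

conn = psycopg2.connect("dbname=app")

def publish(topic, payload):
    ...  # real queue client goes here

def drain_once():
    with conn, conn.cursor() as cur:
        # SKIP LOCKED lets multiple drainers run without stepping on
        # each other; locks are held until the transaction commits.
        cur.execute(
            """SELECT id, topic, payload FROM outbox
               ORDER BY id
               LIMIT 100
               FOR UPDATE SKIP LOCKED"""
        )
        rows = cur.fetchall()
        for row_id, topic, payload in rows:
            publish(topic, payload)
            cur.execute("DELETE FROM outbox WHERE id = %s", (row_id,))
        return len(rows)

while True:
    if drain_once() == 0:
        time.sleep(1)  # nothing staged; back off briefly
```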