My thinking here is that if the original devs had had a better grasp of Redis and Postgres, they never would have tried using Kafka in the first place. I can't imagine the problem ever requiring Kafka's throughput, and there would likely be several other scaling bottlenecks in the way of using Kafka to its full potential anyway.
I'm pretty sure a Redis-based queue like Bull (https://github.com/OptimalBits/bull) would have sufficed for queuing message responses directly on the server (or across multiple instances of the server), and while Kafka works fine for long-term storage of logs, our use case for the data means it would be far better stored directly in Postgres.
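For illustration, here's a minimal sketch of what that could look like with Bull. The queue name, payload shape, Redis URL, and the `sendToClient` delivery function are all assumptions of mine, not anything from the actual codebase:

```typescript
import Queue from 'bull';

// Hypothetical queue for outgoing message responses; name and Redis URL are assumptions.
const responseQueue = new Queue('message-responses', 'redis://127.0.0.1:6379');

// Hypothetical delivery function standing in for whatever actually pushes responses to clients.
async function sendToClient(chatId: string, body: string): Promise<void> {
  console.log(`delivering to ${chatId}: ${body}`);
}

// Any server instance can enqueue work...
async function enqueueResponse(chatId: string, body: string): Promise<void> {
  await responseQueue.add(
    { chatId, body },
    { attempts: 3, backoff: 5000 } // retry up to 3 times with a 5s backoff
  );
}

// ...and any instance (or a dedicated worker process) can pick it up.
responseQueue.process(async (job) => {
  const { chatId, body } = job.data;
  await sendToClient(chatId, body);
});
```

Because the queue lives in Redis, multiple server instances share it for free, which is about as much "distribution" as this problem ever needed.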
Postgres is apparently also a decent pub/sub solution, though I'm not sure if it's superior to Kafka in this case.
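If lightweight pub/sub were needed on top of that, Postgres's built-in LISTEN/NOTIFY covers a lot of ground. A rough sketch with node-postgres, where the `new_message` channel name and the payload are my own inventions for illustration:

```typescript
import { Client } from 'pg';

async function main() {
  // Subscriber: a long-lived connection that listens on a channel.
  const subscriber = new Client({ connectionString: process.env.DATABASE_URL });
  await subscriber.connect();
  await subscriber.query('LISTEN new_message');
  subscriber.on('notification', (msg) => {
    console.log('received:', msg.payload);
  });

  // Publisher: any connection can notify; pg_notify handles quoting for you.
  // Note that NOTIFY payloads are limited to ~8kB, so send IDs, not whole messages.
  const publisher = new Client({ connectionString: process.env.DATABASE_URL });
  await publisher.connect();
  await publisher.query("SELECT pg_notify('new_message', $1)", [
    JSON.stringify({ chatId: '123', body: 'hello' }),
  ]);
}

main().catch(console.error);
```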
The worst part is that the alternative architecture, a Redis queue plus Postgres for message history, is very simple, easy to maintain, benefits from the ability to normalize data, and is comfortably boring. Kafka is not that. It's a miserable beast sometimes, and it constantly presents hurdles for many of us. It's good at what it does, and people should consider it (or Pulsar) if their problem genuinely requires a high-throughput message broker. For everyone else, it's a really risky investment with little or no return over the alternatives. It's the worst decision the developers made in this application by a wide margin.