Literally did something similar just last week. I was looking for a good Redis GUI on Windows and couldn't find one that clicked (the closest I got was Another Redis Desktop Manager, but it sorted my keys lexicographically, like 1, 11, 2), so I turned to SQLite and implemented this. I also added a "sync to disk" method so I get the benefits of both RAM and persistence. That worked out great, since the data I'm getting over TCP has a sequence number, so after any error I can resume from the last sequence number in the db. I'm thinking of fully committing and moving some things out of the application language and into a SQLite extension, like decoding the raw bytes into a JSON object.
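For anyone curious, here's a minimal sketch of that pattern: an in-memory SQLite database as the KV store, a "sync to disk" step via the stdlib backup API, and resume-from-last-sequence-number on restart. The kv schema, the store.db path, and the helper names are my assumptions, not the commenter's actual code.

    import sqlite3

    # Sketch only: schema and names are assumptions, not the original code.
    # isolation_level=None puts the connection in autocommit mode, so writes
    # are immediately visible to the backup step.
    mem = sqlite3.connect(":memory:", isolation_level=None)
    mem.execute("CREATE TABLE IF NOT EXISTS kv "
                "(key TEXT PRIMARY KEY, value BLOB, seq INTEGER)")

    def put(key, value, seq):
        # Keep only the latest value and sequence number per key.
        mem.execute("INSERT OR REPLACE INTO kv (key, value, seq) VALUES (?, ?, ?)",
                    (key, value, seq))

    def sync_to_disk(path="store.db"):
        # Connection.backup() copies the whole in-memory DB to a file,
        # giving RAM-speed access plus a durable snapshot on disk.
        disk = sqlite3.connect(path)
        mem.backup(disk)
        disk.close()

    def last_seq(path="store.db"):
        # On restart, resume the TCP stream from the last persisted sequence number.
        disk = sqlite3.connect(path)
        disk.execute("CREATE TABLE IF NOT EXISTS kv "
                     "(key TEXT PRIMARY KEY, value BLOB, seq INTEGER)")
        (seq,) = disk.execute("SELECT COALESCE(MAX(seq), 0) FROM kv").fetchone()
        disk.close()
        return seq

As a side benefit, SELECT key FROM kv ORDER BY CAST(key AS INTEGER) gives numeric keys the natural ordering that the Redis GUI didn't.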
What does it give you over just holding a big object in memory? I guess partial serialization is something, since individual updates get persisted to disk... but then why not just store a JSON file per key on disk? It's not like the serialized blobs let you have indexes or particularly efficient full-text search, so why bother with the SQL statements at all?
Thanks for the help. After upgrading my instance, all the query times dropped dramatically. The thing is, I did not even upgrade the instance that much (micro -> medium).
I'm guessing some important bottleneck was being hit, but I have no idea what it was (maybe 1 GB of RAM was causing a query to spill over to disk, or something like that?).
It seems important to understand which bottleneck I actually hit, but frankly, I have no idea.
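If it helps, one way to test the spill-to-disk theory, assuming the instance runs Postgres (the comment doesn't say which database; the connection string and query below are placeholders), is to run the slow query under EXPLAIN (ANALYZE, BUFFERS) and look for external sorts or hashes:

    import psycopg2

    # Placeholder connection string and query; swap in your own.
    conn = psycopg2.connect("dbname=mydb user=me host=myinstance")
    cur = conn.cursor()
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) "
                "SELECT * FROM orders ORDER BY created_at")
    for (line,) in cur.fetchall():
        # A line like "Sort Method: external merge  Disk: 51200kB" means the
        # sort exceeded work_mem and spilled to temp files on disk; with more
        # RAM (and a larger work_mem) the same sort can stay in memory.
        print(line)

That would match the symptom: 1 GB of total RAM on a micro leaves very little for work_mem and the page cache, so a query that fits in memory on the medium could have been hitting disk on the micro.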
It seems like the modern solution is to use something like temporal.io (windmill is related). Alas, surely people have been solving this problem for ages without temporal.io.