You can hit 40,000-80,000+ writes/s with SQLite on a $10 VPS just by batching transactions (i.e. wrapping all inserts/updates in a single transaction every 100ms). This is easy to do at the application level, and you also avoid SQLITE_BUSY/lock errors.
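A minimal sketch of that batching pattern, using Python's stdlib sqlite3 with a stand-in list for the write queue (table and column names are made up for illustration):

```python
import sqlite3
import time

# Single writer: open one connection, manage transactions explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")  # WAL only takes effect on on-disk DBs
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

# Stand-in for a queue of pending writes fed by the application.
pending = [("evt-%d" % i,) for i in range(1000)]

BATCH_INTERVAL = 0.1  # commit roughly every 100 ms
deadline = time.monotonic() + BATCH_INTERVAL

conn.execute("BEGIN")
for payload in pending:
    conn.execute("INSERT INTO events (payload) VALUES (?)", payload)
    if time.monotonic() >= deadline:
        conn.execute("COMMIT")  # one fsync per batch, not one per row
        conn.execute("BEGIN")
        deadline = time.monotonic() + BATCH_INTERVAL
conn.execute("COMMIT")

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 1000
```

The throughput win comes from amortizing the per-commit fsync over hundreds of rows; in a real service the loop would drain a thread-safe queue instead of a list.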
I'd argue writes scale better with SQLite than PostgreSQL.
With a single writer (as is the case with SQLite), you don't need transactions and rollbacks, because all writes happen in sequence.
Each batch item can be a combination of reads/writes/updates that happen in sequence and can therefore give you the same semantics as a traditional transaction/rollback, e.g.:
- read -> does the account have enough funds?
- write -> transfer $100 from this user to another account
This is also much simpler to write than in other databases, as you don't have to worry about N+1 queries.
You don't need to roll back, because you've already checked the invariants and the system has a single writer.
Ah, you're doing request/response? SQLite/single writer fits much better with a CQRS-style approach, so in my experience you move away from request/response to something push-based. That said, even in a request/response model, wrapping a request in a transaction is not a great idea: the minute anything takes some time (a slow third party), that transaction sits open and causes trouble.
There are multiple simple ways of doing SQLite backups (https://sqlite.org/lang_vacuum.html#vacuuminto, https://sqlite.org/rsync.html), or just lock and copy the file.
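For instance, the VACUUM INTO approach (SQLite >= 3.27) writes a compacted, consistent snapshot to a new file while the database stays readable; a sketch with throwaway file paths:

```python
import os
import sqlite3
import tempfile

# Hypothetical paths for illustration.
src_path = os.path.join(tempfile.mkdtemp(), "app.db")
dst_path = src_path + ".backup"

conn = sqlite3.connect(src_path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# VACUUM INTO takes the destination filename as an expression,
# so it can be passed as a bound parameter.
conn.execute("VACUUM INTO ?", (dst_path,))

backup = sqlite3.connect(dst_path)
print(backup.execute("SELECT x FROM t").fetchone()[0])  # 1
```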
If you need to scale writes enough that any of this is a concern, then SQLite isn't a good fit for your use case.