That's really nice of the author and awesome to hear. Would love to see TigerStyle published in a similar format and expanded upon with more detail; it would be great for referencing with my colleagues. Would love to adopt it on projects, or at least some of the rules.
You would use TigerBeetle for everything: not only the final purchase transaction, but also the shopping cart process, inventory management, and queuing/reserving.
In other words, to count not only the money changing hands, but also the corresponding goods/services being exchanged.
These are all transactions: goods/services and the corresponding money.
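For instance, a sketch only (account IDs, ledger numbers, and codes below are hypothetical, using the official tigerbeetle-go client): a checkout can be two linked transfers, one moving money and one moving stock, which commit or fail together. Reservations map naturally onto TigerBeetle's pending (two-phase) transfers.

```go
import (
	"fmt"

	tb "github.com/tigerbeetle/tigerbeetle-go"
	"github.com/tigerbeetle/tigerbeetle-go/pkg/types"
)

// Ledger 1 = money (cents), ledger 2 = inventory units (both hypothetical).
func purchase(client tb.Client, buyerCash, sellerCash, stock, buyerGoods types.Uint128) error {
	transfers := []types.Transfer{
		{
			ID:              types.ID(),
			DebitAccountID:  buyerCash,
			CreditAccountID: sellerCash,
			Amount:          types.ToUint128(1999), // $19.99 in cents
			Ledger:          1,
			Code:            1, // e.g. "purchase"
			// Linked: this transfer and the next succeed or fail together.
			Flags: types.TransferFlags{Linked: true}.ToUint16(),
		},
		{
			ID:              types.ID(),
			DebitAccountID:  stock,
			CreditAccountID: buyerGoods,
			Amount:          types.ToUint128(1), // one unit of stock
			Ledger:          2,
			Code:            2, // e.g. "fulfillment"
		},
	}
	results, err := client.CreateTransfers(transfers)
	if err != nil {
		return err
	}
	// The client returns results only for transfers that did not succeed.
	for _, r := range results {
		return fmt.Errorf("transfer %d failed: %v", r.Index, r.Result)
	}
	return nil
}
```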
Yeah, it was back in February in your community Slack; I did receive a fairly thorough response from you and others about it. At the time, though, there were no technical critiques of the Go benchmarking code, just of how our PostgreSQL comparison would fall short in real OLTP workloads (which is fair).
I don't think we reviewed your Go benchmarking code at the time, and the absence of technical critiques probably should not have been taken as explicit sign-off.
IIRC we were more concerned about the deeper conceptual misunderstanding: that one could "roll your own" TB over PG with safety/performance parity, and that this would somehow be better than just using open-source TB. Hence the discussion focused on that.
Yes, write last to the system of record, read first from the system of record. Or in other words, commit to the system of record, and then read from the system of record to see what's committed.
(This is similar also to how chain replication preserves consistency.)
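In code, that ordering might look roughly like this (a sketch; `checkout` and `createOrderTransfers` are hypothetical, with PG as the system of reference and TigerBeetle as the system of record):

```go
import (
	"context"
	"database/sql"

	tb "github.com/tigerbeetle/tigerbeetle-go"
)

func checkout(ctx context.Context, pg *sql.DB, client tb.Client, orderID uint64) error {
	// 1. Write intent/metadata to PostgreSQL first.
	if _, err := pg.ExecContext(ctx,
		`INSERT INTO orders (id, status) VALUES ($1, 'pending')`, orderID); err != nil {
		return err
	}
	// 2. Write LAST to the system of record: this is the commit point.
	//    If it fails, nothing authoritative happened; the 'pending' row
	//    can be retried or cleaned up.
	if err := createOrderTransfers(ctx, client, orderID); err != nil { // hypothetical helper
		return err
	}
	// 3. Optionally reflect the outcome in PG, but TigerBeetle is the truth:
	//    reads that need to know what is committed go to TigerBeetle FIRST.
	_, err := pg.ExecContext(ctx,
		`UPDATE orders SET status = 'committed' WHERE id = $1`, orderID)
	return err
}
```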
I think the drop in non-batch performance was more a function of the PoC than of TB. Would love to see what our team could do for you here! Feel free to reach out to peter@tigerbeetle.com
> you risk getting bottlenecked by all the attributes you'll need to wrap around the transaction in PostgreSQL to make it work in a real application.
The performance killer is contention, not writing any associated KV data—KV stores scale well!
But you do need to preserve a clean separation of concerns in your architecture. Strings in your general-purpose DBMS as "system of reference" (control plane). Integers in your transaction processing DBMS as "system of record" (data plane).
Dominik Tornow wrote a great blog post on how to get this right (and let us know if our team can accelerate you on this!).
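In the meantime, a sketch of what that split looks like in practice (schema and column names below are hypothetical): strings live in PG, and only integer IDs ever reach TigerBeetle.

```go
import (
	"context"
	"database/sql"
	"fmt"

	tb "github.com/tigerbeetle/tigerbeetle-go"
	"github.com/tigerbeetle/tigerbeetle-go/pkg/types"
)

func balanceForEmail(ctx context.Context, pg *sql.DB, client tb.Client, email string) (types.Account, error) {
	// Control plane: resolve the human-readable key to an integer account ID.
	var rawID uint64
	if err := pg.QueryRowContext(ctx,
		`SELECT tb_account_id FROM users WHERE email = $1`, email,
	).Scan(&rawID); err != nil {
		return types.Account{}, err
	}
	// Data plane: TigerBeetle sees integers only.
	accounts, err := client.LookupAccounts([]types.Uint128{types.ToUint128(rawID)})
	if err != nil {
		return types.Account{}, err
	}
	if len(accounts) == 0 {
		return types.Account{}, fmt.Errorf("account %d not found", rawID)
	}
	return accounts[0], nil // balances: DebitsPosted, CreditsPosted, etc.
}
```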
> We didn't observe any automatic batching when testing Tigerbeetle with their Go client.
This is not accurate. All of TigerBeetle's clients also auto-batch under the hood, which you can verify from the docs [0] and the source [1], provided your application has at least some concurrency.
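A quick way to see this is to submit many single-transfer requests concurrently through one shared client (a sketch; account and ledger values are hypothetical):

```go
import (
	"log"
	"sync"

	tb "github.com/tigerbeetle/tigerbeetle-go"
	"github.com/tigerbeetle/tigerbeetle-go/pkg/types"
)

func demo(client tb.Client) {
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Each goroutine submits a batch of one; while a request is
			// in flight, the client coalesces these into larger requests.
			_, err := client.CreateTransfers([]types.Transfer{{
				ID:              types.ID(),
				DebitAccountID:  types.ToUint128(1),
				CreditAccountID: types.ToUint128(2),
				Amount:          types.ToUint128(1),
				Ledger:          1,
				Code:            1,
			}})
			if err != nil {
				log.Println(err)
			}
		}()
	}
	wg.Wait()
}
```

Run the same 1000 transfers from a single goroutine (or worse, a fresh client per transfer) and they become 1000 serial round trips instead.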
> I think we initiated a new Go client for every new transaction when benchmarking
The docs are careful to warn that you shouldn't be throwing away your client like this after each request:
> The TigerBeetle client should be shared across threads (or tasks, depending on your paradigm), since it automatically groups together batches of small sizes into one request. Since TigerBeetle clients can have at most one in-flight request, the client accumulates smaller batches together while waiting for a reply to the last request.
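In other words, the client should live for the lifetime of the process, roughly like this (cluster ID 0 and address "3000" are the quickstart defaults; the NewClient signature has varied across client versions):

```go
import (
	"log"

	tb "github.com/tigerbeetle/tigerbeetle-go"
	"github.com/tigerbeetle/tigerbeetle-go/pkg/types"
)

func main() {
	// Create ONE client at startup, not one per transaction/request.
	client, err := tb.NewClient(types.ToUint128(0), []string{"3000"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Share it across all request handlers/goroutines so small
	// submissions can be accumulated into batches.
	serve(client) // hypothetical; see the gateway sketch below
}
```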
Again, I would double-check that your architecture is not accidentally serializing everything. You should be running multiple gateways, and each should be able to handle concurrent user requests. The gold standard to aim for here is a stateless layer of API servers around TigerBeetle, and then you should be able to push pretty good load.
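For example, a minimal gateway sketch (`handleTransfer` is hypothetical): net/http already runs each request on its own goroutine, so concurrent user requests feed the shared client's batching for free.

```go
import (
	"log"
	"net/http"

	tb "github.com/tigerbeetle/tigerbeetle-go"
)

func serve(client tb.Client) {
	http.HandleFunc("/transfer", func(w http.ResponseWriter, r *http.Request) {
		// Stateless: all durable state lives in TigerBeetle (and PG for
		// metadata), so you can run many of these gateways side by side.
		if err := handleTransfer(r.Context(), client, r); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```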