Hacker News | jorangreef's comments

And I suppose we did minimize the preprocessor somewhat with comptime… :D


Thank you Andrew, you always have our full support.


The author graciously gifted the domain to us and we’re literally days away from launching the original TigerStyle here.


That's really nice of the author and awesome to hear. I'd love to see TigerStyle published in a similar format and expanded upon with more detail; it would be great for referencing with my colleagues. I'd love to adopt it on projects, or at least some of the rules.


Oh that's very nice of them.


You would use TigerBeetle for everything: not only the final purchase transaction, but the shopping cart process, inventory management and queuing/reserving.

In other words, to count not only the money changing hands, but also the corresponding goods/services being exchanged.

These are all transactions: goods/services and the corresponding money.
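
To make that concrete, here's a rough sketch with the Go client (account IDs, ledgers, and codes are made up, and field names/types may differ slightly between client versions): the money and the goods are two linked transfers, so they commit or fail together:

  package shop

  import (
      "fmt"

      tb "github.com/tigerbeetle/tigerbeetle-go"
      "github.com/tigerbeetle/tigerbeetle-go/pkg/types"
  )

  // purchase records one sale as two linked transfers: money on a "cash"
  // ledger and stock on an "inventory" ledger. Linked transfers are applied
  // atomically as a chain: either both commit or both are rejected.
  func purchase(client tb.Client, buyerCash, merchantCash, merchantStock, buyerStock types.Uint128) error {
      transfers := []types.Transfer{
          {
              ID:              types.ID(),
              DebitAccountID:  buyerCash,
              CreditAccountID: merchantCash,
              Amount:          types.ToUint128(2500), // price, e.g. in cents (illustrative)
              Ledger:          1,                     // cash ledger
              Code:            1,                     // purchase
              Flags:           types.TransferFlags{Linked: true}.ToUint16(),
          },
          {
              ID:              types.ID(),
              DebitAccountID:  merchantStock,
              CreditAccountID: buyerStock,
              Amount:          types.ToUint128(1), // one unit of stock
              Ledger:          2,                  // inventory ledger
              Code:            1,
              // Last transfer in the chain: no Linked flag, which closes the chain.
          },
      }
      results, err := client.CreateTransfers(transfers)
      if err != nil {
          return err
      }
      for _, r := range results {
          if r.Result != 0 { // non-zero result codes mean the event was rejected
              return fmt.Errorf("transfer %d rejected: %v", r.Index, r.Result)
          }
      }
      return nil
  }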


Appreciate your kind words, Kelsey!

I searched the recent history of our community Slack but it seems it may have been an older conversation.

We typically do code review work only for our customers so I’m not sure if there was some misunderstanding.

Perhaps the assumption was that because we didn’t say anything when you pasted the code, we must have reviewed it?

Per my other comment, your benchmarking environment is also a factor. For example, were you running on EBS?

These are all things that our team would typically work with you on to accelerate you, so that you get it right the first time!


Yeah, it was back in February in your community Slack; I did receive a fairly thorough response from you and others about it. However, there were no technical critiques of the Go benchmarking code, just how our PostgreSQL comparison would fall short in real OLTP workloads (which is fair).


Yes, thanks!

I don’t think we reviewed your Go benchmarking code at the time, and the absence of technical critiques probably should not have been taken as explicit sign-off.

IIRC we were more concerned about the deeper conceptual misunderstanding: that one could “roll your own” TB over PG with safety/performance parity, and that this would somehow be better than just using open source TB. Hence the discussion focused on that.


Of course, there’s nothing new except shining a spotlight (and coining the rule!).


Yes, write last to the system of record, read first from the system of record. Or in other words, commit to the system of record, and then read from the system of record to see what's committed.

(This is also similar to how chain replication preserves consistency.)
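
A rough sketch of the ordering (the types and helper functions here are hypothetical placeholders, just to show which system is written last and read first):

  package writelastreadfirst

  import "context"

  // Hypothetical domain types and stubs: only the ordering matters here.
  type Payment struct{ ID, Amount uint64 }

  type View struct {
      Committed bool
      Memo      string
  }

  func saveMemoInPostgres(ctx context.Context, p Payment, memo string) error     { return nil }
  func commitTransferInTigerBeetle(ctx context.Context, p Payment) error         { return nil }
  func transferExistsInTigerBeetle(ctx context.Context, id uint64) (bool, error) { return false, nil }
  func loadMemoFromPostgres(ctx context.Context, id uint64) (string, error)      { return "", nil }

  // Write path: everything else first, the system of record last.
  // The system of record accepting the transfer is the commit point.
  func handlePayment(ctx context.Context, p Payment, memo string) error {
      if err := saveMemoInPostgres(ctx, p, memo); err != nil { // system of reference
          return err
      }
      return commitTransferInTigerBeetle(ctx, p) // system of record, written LAST
  }

  // Read path: the system of record first; it alone says what is committed.
  func getPayment(ctx context.Context, id uint64) (View, error) {
      committed, err := transferExistsInTigerBeetle(ctx, id) // system of record, read FIRST
      if err != nil {
          return View{}, err
      }
      memo, _ := loadMemoFromPostgres(ctx, id) // best-effort enrichment
      return View{Committed: committed, Memo: memo}, nil
  }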


If you read first and write last, isn't that the opposite of committing and then reading to see what is committed?


I think the drop in non-batch performance was more a function of the PoC than of TB. Would love to see what our team could do for you here! Feel free to reach out to peter@tigerbeetle.com


> you risk getting bottlenecked by all the attributes you'll need to wrap around the transaction in PostgreSQL to make it work in a real application.

The performance killer is contention, not writing any associated KV data—KV stores scale well!

But you do need to preserve a clean separation of concerns in your architecture. Strings in your general-purpose DBMS as "system of reference" (control plane). Integers in your transaction processing DBMS as "system of record" (data plane).

Dominik Tornow wrote a great blog post on how to get this right (and let us know if our team can accelerate you on this!):

https://tigerbeetle.com/blog/2025-11-06-the-write-last-read-...
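
As a rough sketch of that split (the mapping table and helper below are illustrative, not from the post): the general-purpose DBMS owns the strings and maps them to integer account IDs, and only those integers ever cross into TigerBeetle:

  package controlplane

  import (
      "context"
      "database/sql"

      "github.com/tigerbeetle/tigerbeetle-go/pkg/types"
  )

  // System of reference (control plane): strings live here.
  //
  //   CREATE TABLE accounts_ref (
  //       account_id   BIGINT PRIMARY KEY,  -- integer ID used by TigerBeetle
  //       customer_id  TEXT NOT NULL,       -- human-readable identity
  //       display_name TEXT,
  //       currency     TEXT NOT NULL
  //   );

  // resolveAccountID maps a string identity to the integer account ID that the
  // system of record (data plane) understands.
  func resolveAccountID(ctx context.Context, db *sql.DB, customerID string) (types.Uint128, error) {
      var id int64
      err := db.QueryRowContext(ctx,
          `SELECT account_id FROM accounts_ref WHERE customer_id = $1`, customerID).Scan(&id)
      if err != nil {
          return types.Uint128{}, err
      }
      // Only this integer is sent to TigerBeetle; no strings cross the boundary.
      return types.ToUint128(uint64(id)), nil
  }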


> We didn't observe any automatic batching when testing Tigerbeetle with their Go client.

This is not accurate. All of TigerBeetle's clients auto-batch under the hood, which you can verify from the docs [0] and the source [1], provided your application has at least some concurrency.

> I think we initiated a new Go client for every new transaction when benchmarking

The docs are careful to warn that you shouldn't be throwing away your client like this after each request:

  The TigerBeetle client should be shared across threads (or tasks, depending on your paradigm), since it automatically groups together batches of small sizes into one request. Since TigerBeetle clients can have at most one in-flight request, the client accumulates smaller batches together while waiting for a reply to the last request.

Again, I would double-check that your architecture is not accidentally serializing everything. You should be running multiple gateways and they should each be able to handle concurrent user requests. The gold standard to aim for here is a stateless layer of API servers around TigerBeetle, and then you should be able to push pretty good load.

[0] https://docs.tigerbeetle.com/coding/requests/#automatic-batc...

[1] The core batching logic powering all language clients: https://github.com/tigerbeetle/tigerbeetle/blob/main/src/cli...
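
For what it's worth, here's a rough sketch of the shape to aim for (addresses, cluster ID, and account IDs are placeholders, and the exact NewClient signature varies between tigerbeetle-go versions): one long-lived client shared by all handlers, so concurrent single-transfer calls get grouped into larger requests under the hood:

  package main

  import (
      "log"
      "net/http"

      tb "github.com/tigerbeetle/tigerbeetle-go"
      "github.com/tigerbeetle/tigerbeetle-go/pkg/types"
  )

  var client tb.Client // one shared, long-lived client; NOT one per request

  func main() {
      var err error
      // Signature shown here is approximate; check your client version.
      client, err = tb.NewClient(types.ToUint128(0), []string{"127.0.0.1:3000"})
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      // Each request runs in its own goroutine. Because they all share the one
      // client, their individual transfers accumulate into larger batches while
      // the previous request is in flight.
      http.HandleFunc("/pay", func(w http.ResponseWriter, r *http.Request) {
          transfer := types.Transfer{
              ID:              types.ID(),
              DebitAccountID:  types.ToUint128(1), // placeholder accounts
              CreditAccountID: types.ToUint128(2),
              Amount:          types.ToUint128(100),
              Ledger:          1,
              Code:            1,
          }
          if _, err := client.CreateTransfers([]types.Transfer{transfer}); err != nil {
              http.Error(w, err.Error(), http.StatusInternalServerError)
              return
          }
          w.WriteHeader(http.StatusOK)
      })

      log.Fatal(http.ListenAndServe(":8080", nil))
  }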


Thanks for reaching out. I shared this benchmarking script with your team when we tested Tigerbeetle, but here it is again: https://gist.github.com/KelseyDH/c5cec31519f4420e195114dc9c8...

Was there something wrong with our test of the individual transactions in our Go script that caused the drop in transaction performance we observed?


Thanks Kelsey!

We’d love to roll up our sleeves and help you get it right. Please drop me an email.


So what was wrong with his isolated benchmark code that he shared here?


Not from Tigerbeetle, but having looked at his code, this is what I saw: https://news.ycombinator.com/item?id=45896559

