
Yep, this is getting blown way out of proportion by all of these tiny scripts that just sit around connecting to themselves. Even pgbench is theoretical and intended for tuning; you're not going to hit your max tps in your Real Code that is doing Real Work.

In the real world, where code is doing real things besides just entering/exiting itself all day, I think it's going to be a stretch to see even a 5% performance impact, let alone 10%.
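To see why a syscall microbenchmark is the worst case, here's a rough sketch (my own illustration, not from the article) that does nothing but cross the user/kernel boundary, which is exactly where the PTI cost lands:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        /* Time a no-op syscall in a tight loop: nothing here but the
         * user/kernel transition that PTI makes more expensive. */
        const long iters = 1000000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);   /* raw syscall, avoids any libc-level caching */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (double)(end.tv_nsec - start.tv_nsec);
        printf("~%.0f ns per syscall\n", ns / iters);
        return 0;
    }

The per-call figure goes up noticeably with PTI enabled (especially without PCID), and a loop like this has no other work to hide it behind, which is why these numbers look so scary in isolation.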



I think 5% is a reasonable guess for a database. Even a well-designed database does have to do a lot of IO, both network and disk. It's just not a "fixable" thing.

But overall, yeah.


The claim is that it's 2% to 5% in most general uses on systems that have PCID support. If that's the case, then I'm willing to bet that databases on fast flash storage are a lot more impacted than this, and purely CPU-bound tasks (such as encoding video) are less impacted.

The reality is that OLTP databases' execution time is not dominated by CPU computation but by IO time. Most transactions in OLTP systems fetch a handful of tuples. Most of the time goes to fetching the tuples (and maybe indices) from disk and then sending them over the network.

New disk devices have lowered latency significantly, while syscall time has barely improved.

So in OLTP databases I expect the impact to be closer to 10% to 15%, i.e. up to 3x the general-case estimate.
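A back-of-envelope sketch with made-up per-transaction numbers (all of them assumptions, only to show the direction of the argument):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical per-transaction profile for an OLTP workload. */
        double syscalls_per_txn = 50.0;   /* assumed: reads, writes, socket sends */
        double pti_penalty_us   = 0.5;    /* assumed extra cost per syscall with PTI */
        double base_syscall_us  = 1.0;    /* assumed baseline syscall cost */
        double reads_per_txn    = 2.0;    /* assumed reads that miss the buffer cache */

        double disk_us[]   = { 5000.0, 100.0 };   /* rough: spinning disk vs fast flash */
        const char *name[] = { "spinning disk", "fast flash" };

        for (int i = 0; i < 2; i++) {
            double base  = reads_per_txn * disk_us[i] + syscalls_per_txn * base_syscall_us;
            double extra = syscalls_per_txn * pti_penalty_us;
            printf("%-13s: %.1f%% slower\n", name[i], 100.0 * extra / base);
        }
        return 0;
    }

With slow disks the fixed per-syscall penalty is a rounding error; with sub-millisecond storage it becomes a visible share of the transaction, which is the intuition behind the higher estimate.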



