
"This is 50 ms. This is 200 ms." doesn't quite work as well. Depending on the situation, there has to be some threshold of diminishing returns.


If the request you're talking about only actually gets executed once (maybe the result gets cached or something), then you shouldn't be pitching the time difference anyway.

If it gets executed hundreds of times a day per person, you can say: 50 ms * 100 = 5 s, while 200 ms * 100 = 20 s. And that's just per user.
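
A minimal sketch of that arithmetic (Python; the 100-calls-per-day figure is the assumed count from the example above):

    # Cumulative latency per user, per day, for a request that
    # fires repeatedly; numbers match the example above.
    calls_per_day = 100  # assumption: ~100 executions/day/user
    for latency_ms in (50, 200):
        total_s = latency_ms * calls_per_day / 1000
        print(f"{latency_ms} ms x {calls_per_day} = {total_s:.0f} s/day per user")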


Yeah, actually doing this math tends to surprise people. We had an automated background process that took ~2 s to run, which doesn't sound bad at all considering it includes an API call over the internet. But multiply it by the number of backlog items we actually had at the time and you get 30-40 hours, which doesn't sound so good.
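
The same arithmetic back-derives the implied backlog size from the 30-40 hour figure (Python; the item counts are my inference, not stated above):

    # ~2 s per item over a large backlog adds up fast.
    seconds_per_item = 2
    for backlog in (54_000, 72_000):  # assumed sizes; 2 s each gives 30-40 h
        hours = backlog * seconds_per_item / 3600
        print(f"{backlog:,} items x {seconds_per_item} s = {hours:.0f} h")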


You can demonstrate that scale of difference using audio/video sync.


Seinfeld describes the difference between first place and second place: https://youtu.be/xK9rbwM3omA?t=65


Fair enough. Unless you're talking about lag in automated trading systems/algos, where even that "I really can't measure this" split-second difference counts.


I mean, sure, the precision matters for HFT, but at that scale the point would be moot since the time is so minuscule. Unless you hyperscale it: "on 1,000,000 trades the 50 ms difference becomes very pronounced and could cost us $z" or something of the sort. But I still think it loses "the spirit" of the method; that's the best way I can phrase it.
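
Even leaving the dollar figure as a placeholder, the aggregate delay alone is easy to put a number on (Python; the trade count is the hypothetical volume from above):

    # Aggregate effect of a 50 ms per-trade difference at hyperscale.
    trades = 1_000_000  # hypothetical volume from the example above
    delta_ms = 50
    total_s = trades * delta_ms / 1000
    print(f"{delta_ms} ms x {trades:,} trades = {total_s:,.0f} s "
          f"(~{total_s / 3600:.1f} h of cumulative delay)")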


You can use something like quickly flipping through pages of whatever kind to find something, where each page change is delayed by that much.

If it's about computation, you could make a bunch of objects load one after another.
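
Something like this, as a rough sketch (Python; the item count and per-item delays are arbitrary choices for the demo):

    import time

    def load_items(n, delay_ms):
        """Simulate n objects loading one after another."""
        start = time.perf_counter()
        for i in range(n):
            time.sleep(delay_ms / 1000)  # stand-in for the real work
            print(f"loaded item {i + 1}/{n}")
        print(f"{delay_ms} ms each -> {time.perf_counter() - start:.1f} s total")

    load_items(20, 50)   # feels snappy: ~1 s
    load_items(20, 200)  # feels sluggish: ~4 s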



