
Comparing to the speed of a direct boolean check is a great way to sensationalize really small numbers.

Nobody's real-world code is being slowed down by 500%, because all real-world code is doing much more than just checking errors. All I see from these results is a 15-16ns cost to using errors.Is versus an additional boolean.

Even the examples ("GetValue") hint at an extremely common use case: reads from a data store, where single-digit milliseconds is considered "all good performance-wise" for the most common SQL databases - roughly 100,000x the time scale of errors.Is.
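
If you want to see where a number like 15-16ns could come from, something like this rough benchmark sketch makes the comparison concrete (ErrNotFound, getValueErr, and getValueBool are made-up stand-ins for the article's GetValue-style examples, not its actual code):

    // Save as errbench_test.go and run: go test -bench=.
    package errbench

    import (
        "errors"
        "fmt"
        "testing"
    )

    var ErrNotFound = errors.New("not found")

    // Pre-wrap the sentinel once so the benchmark measures the check,
    // not the allocation inside fmt.Errorf.
    var errWrapped = fmt.Errorf("lookup failed: %w", ErrNotFound)

    //go:noinline
    func getValueErr(key string) (int, error) { return 0, errWrapped }

    //go:noinline
    func getValueBool(key string) (int, bool) { return 0, false }

    // Sentinel-error style: unwraps the chain via errors.Is on every miss.
    func BenchmarkErrorsIs(b *testing.B) {
        for i := 0; i < b.N; i++ {
            if _, err := getValueErr("k"); errors.Is(err, ErrNotFound) {
                continue
            }
        }
    }

    // Boolean style: the miss check is a single comparison.
    func BenchmarkBool(b *testing.B) {
        for i := 0; i < b.N; i++ {
            if _, ok := getValueBool("k"); !ok {
                continue
            }
        }
    }

The difference you measure is real, it's just measured in nanoseconds per call.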



I thought this as well, but I also think it depends on how you’ve structured your code. If errors aren’t used by the algorithm (they’re just used for early exit), then yes, it won’t make a big difference; but if your algorithm is handling errors nearly as frequently as it hits the happy path, then you’ll see a difference from using the boolean.
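
Roughly what that looks like (errMiss, countHits, etc. are hypothetical names, not from the article): when misses are about as common as hits, the errors.Is call sits on a branch that fires every other iteration, while the comma-ok version pays only a plain comparison.

    package hotpath

    import "errors"

    var errMiss = errors.New("miss") // illustrative sentinel

    // Sentinel-error style: the miss branch calls errors.Is about as
    // often as the happy path runs, so its cost shows up in the loop.
    func countHits(keys []string, lookup func(string) (int, error)) int {
        hits := 0
        for _, k := range keys {
            v, err := lookup(k)
            if errors.Is(err, errMiss) {
                continue // miss: fires roughly every other iteration
            }
            if err != nil {
                continue // some other failure; skip it here
            }
            hits += v
        }
        return hits
    }

    // Boolean style: the miss branch is a single comparison.
    func countHitsBool(keys []string, lookup func(string) (int, bool)) int {
        hits := 0
        for _, k := range keys {
            if v, ok := lookup(k); ok {
                hits += v
            }
        }
        return hits
    }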


Yeah, of course if you're just running through an in-memory slice and doing a little arithmetic on each item, sentinel errors and errors.Is might dominate your runtime. But the dominant use cases (e.g. in the standard library) are syscalls, filesystem interactions, network APIs - you know, stuff that takes real time.

This reminds me of "latency numbers every programmer should know". Work with the standard library and other well-conceived Go projects and you gain the intuition that error handling and bit-twiddling arithmetic don't belong together. That's the real story here, and OP's article is way wide of the mark.



