
Doubles on my Intel give the correct answer too, but change the exponent used everywhere to 8 and you'll see the problem. Gustafson may have used a different implementation of IEEE 754 that gave him a different result.

There are problems with the repeatability of float operations, such as loss of precision when a value moves between registers and memory, and data-alignment issues, but it seems these can be avoided with the proper compiler options, at the expense of speed.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

https://software.intel.com/en-us/articles/run-to-run-reprodu...
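A minimal sketch of what bit-for-bit reproducibility means here. (The x87 excess-precision problem in the GCC bug above affects compiled C on 32-bit x86, where GCC flags like -msse2 -mfpmath=sse or -ffloat-store restore reproducibility; CPython always computes in binary64, so it shows the reproducible behavior directly.)

```python
import struct

def bits(x: float) -> str:
    """Return the exact IEEE 754 binary64 bit pattern of x as hex."""
    return struct.pack(">d", x).hex()

# 0.1 + 0.2 has exactly one correct binary64 result: the exact sum
# rounded to nearest even. Any conforming platform produces these bits.
# (On x87 hardware, C code keeping the intermediate in an 80-bit
# register could observe different bits until the value is spilled.)
print(bits(0.1 + 0.2))  # 3fd3333333333334
```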



There is no "different implementation of IEEE 754" that could give a different result in double precision for that example.

There are reasonable criticisms of IEEE 754; that's not the issue.


> There is no "different implementation of IEEE 754" that could give a different result in double precision for that example.

How do you know that?


IEEE 754 is a standard, and it requires a specific, bitwise-reproducible, answer for the computation in question. I'm the author of most of the floating-point tests in WebAssembly's conformance testsuite, which tests such things in practice across several hardware platforms.

One of the half-truths in the presentation (in the intro) is that IEEE 754 is a mixture of requirements and recommendations. IEEE 754 does have both requirements and recommendations; however, what the presentation doesn't say is that, within a given format like double precision (aka binary64), the basic operations (add, subtract, multiply, divide, squareRoot, etc.) have exactly one possible result for any given input (except that NaNs may have some implementation-defined bits, though this is usually unimportant).
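To make "exactly one possible result" concrete, here's a small Python check: for the basic operations, IEEE 754 requires the mathematically exact result rounded to nearest (ties to even), so each bit pattern below must match on every conforming implementation.

```python
import math
import struct

def bits(x: float) -> str:
    """Return the exact IEEE 754 binary64 bit pattern of x as hex."""
    return struct.pack(">d", x).hex()

# Division and square root are among the correctly-rounded basic
# operations, so these outputs are fully determined by the standard.
print(bits(1.0 / 3.0))       # 3fd5555555555555
print(bits(math.sqrt(2.0)))  # 3ff6a09e667f3bcd
```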



