Hacker News

Well, kind of: the size of the mantissa is certainly chosen to be large enough to give the precision scientific computing would "typically" need, but that reflects trade-offs and being vaguely good enough for most cases. Sometimes we use 80-bit extended-precision floating point, for example.


I thought 80-bit floats had been mostly deprecated because you get different results depending on whether the compiler keeps intermediate values in the 80-bit registers or spills them to memory as 64-bit doubles?


80-bit floats have been deprecated because, after the Pentium Pro (end of 1995), the last Intel CPU in which the operations on 80-bit numbers were improved, Intel decided that the 8087 instruction set had to be replaced, and that its future instruction sets would not support more than double precision. The main reason is that the 8087 used a single stack of registers, an organization incompatible with modern CPUs, whose multiple pipelined functional units need independent instructions in order to execute them concurrently, while all instructions using the same stack are dependent on one another.

From 1997 until 2000, Intel introduced each year instruction-set features aimed at replacing the 80-bit 8087 ISA, a process completed at the end of 2000 with the introduction of the Pentium 4.

Since the end of 2000, more than 21 years ago, the use of 80-bit floating-point numbers has been deprecated on all Intel CPUs (and since 2003 also on AMD CPUs).

Modern Intel and AMD CPUs still implement the 8087 ISA, but only for compatibility with old programs written before 2000; no effort is made to run those instructions with performance comparable to what modern instruction sets like AVX or AVX-512 achieve.

If there are modern compilers that in 2022 still emit 8087 instructions for values declared as "long double" (unless specifically targeting a pre-2000 32-bit CPU, e.g. the Pentium Pro), I consider that a serious bug.

A compiler should either implement "long double" the same as "double", which is allowed but lazy and ugly, or implement the "long double" operations as calls into a library providing either double-double or quadruple-precision arithmetic, exactly as many compilers already implement operations on 128-bit integers on all CPUs, or on 64-bit integers on 32-bit CPUs.



