I know the answer is "yes" because the majority tends to have a point, but are there good reasons, beyond backwards compatibility, for languages to be using IEEE 754 floating-point arithmetic nowadays rather than just storing decimals "precisely" (to a specific degree of resolution)? Or are any new languages eschewing IEEE 754 entirely? (I'm aware of BigDecimal, etc., but these still seem to be treated as a bonus feature rather than 'the way'.)
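To make concrete what I mean by "precisely", here's a quick Python sketch (using the standard decimal module purely as an illustration) of the rounding artifact that binary floats introduce for decimal fractions and that a decimal type avoids:

```python
from decimal import Decimal

# Binary IEEE 754 doubles cannot represent 0.1 or 0.2 exactly,
# so the familiar rounding artifact shows up:
print(0.1 + 0.2)                      # 0.30000000000000004
print(0.1 + 0.2 == 0.3)               # False

# A decimal type stores decimal digits exactly (up to its precision),
# so the same arithmetic comes out "clean":
print(Decimal("0.1") + Decimal("0.2"))                     # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```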
IEEE 754 floats have convenient hardware support (vector extensions, for example). I'm sure you could design floating-point decimals that work nicely with vector extensions (which usually do cover integers), but it would be a significant project.
If your language is going to plug into the numerical computing stack (BLAS + LAPACK, and then all the fun stuff built on top of that), then it'll need to talk binary floats.
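To illustrate both of those points at once, here's a rough Python sketch, with NumPy standing in for that stack (my example, not anything specific to the libraries mentioned): packed binary floats get vectorized, BLAS-backed kernels, while decimals end up as boxed objects processed one at a time:

```python
import numpy as np
from decimal import Decimal

# float64 arrays are packed machine floats; matrix multiply dispatches to BLAS.
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)
c = a @ b                        # vectorized, BLAS-backed

# Decimals can only be stored as boxed Python objects (dtype=object), so the
# same operation degenerates into slow element-by-element Python arithmetic.
ad = np.array([[Decimal("1.5"), Decimal("2.5")],
               [Decimal("3.5"), Decimal("4.5")]], dtype=object)
print(np.dot(ad, ad))            # works, but no BLAS and no SIMD behind it
```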
All the annoying stuff that numericists understand but would rather not mess around with, like rounding directions and denormals, is handled nicely by 754 floats.
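You can poke at part of that even from a high-level language; a small Python sketch of gradual underflow (denormals), leaving rounding-direction control aside since plain Python doesn't expose it:

```python
import sys
import math

print(sys.float_info.min)   # smallest *normal* positive double, ~2.2e-308
print(math.ulp(0.0))        # smallest *subnormal* double, 5e-324

# Gradual underflow: results below the normal range lose precision gradually
# instead of snapping straight to zero.
tiny = sys.float_info.min
print(tiny / 2)             # a subnormal, still nonzero
print(tiny / 2 > 0.0)       # True
```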
These are all sort of backward-compatibility/legacy issues in the sense that they stem from decisions made in the past, but the hardware and libraries aren't going anywhere, I bet!
Also, note that IEEE 754 does define decimal interchange formats. I bet they aren't handled as nicely in hardware, though.
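Python's decimal module lives in that family, for what it's worth: it follows the General Decimal Arithmetic Specification, which lines up with IEEE 754's decimal formats, but it all runs in software. A sketch of a context set up with decimal64-like parameters (16 digits, exponent range roughly [-383, 384]):

```python
import decimal

# A context roughly matching the IEEE 754 decimal64 format:
# 16 significant digits, adjusted exponents in [-383, 384].
ctx = decimal.Context(prec=16, Emin=-383, Emax=384,
                      rounding=decimal.ROUND_HALF_EVEN)

x = ctx.divide(decimal.Decimal(1), decimal.Decimal(3))
print(x)   # 0.3333333333333333 -- 16 digits, rounded in decimal

# All of this is digit-by-digit arithmetic in software rather than FPU work,
# which is where the "not handled as nicely in hardware" part bites.
```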
Of course, if you are calling BLAS/LAPACK you are constrained to use floats, but the recommendation from DoubleFloats is clear: if you know your algorithms, use the increased precision only in the parts that matter.
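Here's a little Python sketch of the same idea (using math.fsum as the extended-precision step, standing in for DoubleFloats rather than reproducing it): keep the bulk of the work in plain doubles and spend the extra precision only on the accumulation where cancellation actually hurts:

```python
import math
import random

random.seed(0)
xs = [random.uniform(-1.0, 1.0) * 1e8 for _ in range(1_000_000)]
xs += [-x for x in xs]      # exact sum is 0, but with heavy cancellation

# Bulk of the pipeline stays in ordinary doubles...
naive = 0.0
for x in xs:
    naive += x

# ...and only the sensitive reduction uses extra precision: math.fsum
# tracks exact partial sums internally and rounds once at the end.
careful = math.fsum(xs)

print(naive)    # typically a small nonzero residue from accumulated rounding
print(careful)  # 0.0
```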
Floats have an enormous range, with fixed relative precision. Even a single-precision float can store numbers up to about 3.4e38.
Now of course you pay for that by losing absolute precision, but chances are that if you’re working with numbers like 1e20 you don’t much care about anything after the decimal point.
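Both halves of that trade-off are easy to see from Python (struct and math.ulp here are just my way of poking at the bit patterns):

```python
import math
import struct

# Largest finite single-precision value: (2 - 2**-23) * 2**127, about 3.4e38.
f32_max = struct.unpack("<f", b"\xff\xff\x7f\x7f")[0]
print(f32_max)              # 3.4028234663852886e+38

# The price: absolute precision shrinks as magnitude grows. Near 1e20 the
# gap between adjacent doubles is 16384, so adding 1 changes nothing.
print(math.ulp(1e20))       # 16384.0
print(1e20 + 1.0 == 1e20)   # True
```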