I abuse `Decimal` all the time in Python. I usually make a little helper with a very short name like

    def d(value: Any) -> Decimal: ...

Then I use it everywhere I expect a float (or really anywhere). Decimal's constructor is so forgiving, accepting strings, ints, floats, and other Decimals, that it's very convenient to use. Of course there is the perf penalty, but I always think the precision is worth the tradeoff.
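For the curious, a quick sketch of how forgiving the constructor is (plain stdlib Decimal; the float case is the one to watch, because the float's exact binary value comes along for the ride):

    from decimal import Decimal

    Decimal("1.5")         # strings:        Decimal('1.5')
    Decimal(3)             # ints:           Decimal('3')
    Decimal(Decimal("2"))  # other Decimals: Decimal('2')
    Decimal(0.1)           # floats too, but you get the exact binary value:
                           # Decimal('0.1000000000000000055511151231257827021181583404541015625')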
Sorry, I am doing stuff in the function... here is one of the old ones, from code I can share:
    from decimal import Decimal

    def dec(value, prec=4):
        """Return the given value as a decimal rounded to the given precision."""
        if value is None:
            return Decimal(0)
        value = Decimal(value)
        ret = Decimal(str(round(value, prec)))
        if ret.is_zero():
            # this avoids stuff like Decimal('0E-8')
            return Decimal(0)
        return ret
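Used like this (return values assume the default decimal context):

    dec(None)       # Decimal('0')
    dec("1.23456")  # Decimal('1.2346')
    dec(3)          # Decimal('3.0000')
    dec(1e-9)       # Decimal('0') -- the is_zero() guard, instead of Decimal('0E-4')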
It does, but those issues manifest in ways that humans have been trained to handle.
"Round to nearest even" for binary floating point is weird to anyone who doesn't have a numerics background. "Round 0.5 up" is normal to humans because that's what most have been taught.
Future programming languages should probably default to decimal floating point and let people opt in to binary floating point when they need it.
> It does, but those issues manifest in ways that humans have been trained to handle.
Not really. If you do computations like "convert Fahrenheit to Celsius" or "pay 6 days of interest at this APR" or a million other things, you run into the same basic faulty assumptions as ever.
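Concretely (plain stdlib Decimal with its default 28-digit context; the numbers are made up):

    from decimal import Decimal

    # Fahrenheit to Celsius: the 5/9 factor has no finite decimal expansion,
    # so Decimal still has to round it off
    (Decimal("100") - 32) * 5 / 9
    # Decimal('37.77777777777777777777777778')

    # Splitting a balance three ways doesn't add back up either
    third = Decimal("10.00") / 3  # Decimal('3.333333333333333333333333333')
    third * 3                     # Decimal('9.999999999999999999999999999')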