Decimal representations have one fairly minor advantage: they match people's expectations about which numbers are exactly representable, especially in the range of everyday values, since they behave much the same way calculators do.
Plenty of people find that 0.2 not being exactly representable in binary floating point is not intuitive.
IEEE-754 defines "decimal floats": basically, instead of scaling by 2^exponent, the significand is scaled by 10^exponent. `decimal32`[0], `decimal64`[1], and `decimal128`[2] are defined. But I'm not aware of any popular system that implements them in hardware.
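A small sketch of the difference, using Python's `decimal` module as a stand-in for IEEE-754 decimal arithmetic (it's a software implementation, not one of the fixed-width `decimalNN` formats, but it illustrates the same exact-representability point):

```python
from decimal import Decimal

# Binary floating point: 0.2 (and 0.1, 0.3) are stored as the nearest
# representable binary fractions, so the rounding errors are visible.
print(0.1 + 0.2 == 0.3)   # False
print(repr(0.1 + 0.2))    # 0.30000000000000004

# Decimal floating point: 0.1, 0.2, and 0.3 are all exactly
# representable, so the arithmetic behaves the way a calculator does.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

This is the "intuitive" behavior people expect: the surprise with binary floats isn't that rounding exists, but that it shows up for inputs that look exact in decimal.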