
Nope, it makes code more complex. For example, if the customer pays in two installments and you want to check whether they have paid the full amount, you can't do "if (a + b >= c)" if you use floating point. You'd have to use something like "if (a + b >= c - 0.01)".
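A minimal sketch of the difference (hypothetical amounts, pasted into something like jshell; dollars-as-double on one side, integer cents on the other):

    // Hypothetical installments of 0.10 and 0.70 against a total of 0.80.
    double a = 0.10, b = 0.70, c = 0.80;
    System.out.println(a + b >= c);         // false: the sum comes out as 0.7999999999999999
    System.out.println(a + b >= c - 0.01);  // true, but only thanks to the fudge factor

    // The same amounts kept as integer cents compare exactly, no tolerance needed.
    long ac = 10, bc = 70, cc = 80;
    System.out.println(ac + bc >= cc);      // true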

A normal double (IEEE 754 binary64) has only ~15 significant digits of precision, so when the amounts grow large, you lose the precision for the cents.

Example (in Java, which uses IEEE 754 doubles):

    double d = 1e9;
    System.err.println("d: " + d);
    d += 0.01;
    System.err.println("d: " + d);
    d -= 1e9;
    System.err.println("d: " + d);
    d -= 0.01;
    System.err.println("d: " + d);

What is 'd' at the end? Not zero. This adds a gazillion weird cases in the code that you have to handle.
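For contrast, the same sequence with java.math.BigDecimal (a sketch; exact decimal arithmetic instead of binary floating point) ends on exactly zero:

    import java.math.BigDecimal;

    BigDecimal d = new BigDecimal("1000000000");
    d = d.add(new BigDecimal("0.01"));
    System.err.println("d: " + d);                // d: 1000000000.01
    d = d.subtract(new BigDecimal("1000000000"));
    System.err.println("d: " + d);                // d: 0.01
    d = d.subtract(new BigDecimal("0.01"));
    System.err.println("d: " + d);                // d: 0.00, i.e. compareTo(BigDecimal.ZERO) == 0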

Some countries have laws that are very strict about how rounding must be performed and that require all amounts to be an integral number of 'cents', so doubles are completely out of the question in those cases.
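If you do have to meet that kind of rule, BigDecimal with an explicit RoundingMode is one way to spell the rounding out (a sketch with a made-up 19% rate on 10.99; the actual mode and scale depend on the jurisdiction):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    BigDecimal amount = new BigDecimal("10.99");
    BigDecimal rate   = new BigDecimal("0.19");
    // Round the raw product (2.0881) to whole cents with an explicitly chosen mode.
    BigDecimal tax    = amount.multiply(rate).setScale(2, RoundingMode.HALF_UP);
    System.out.println(tax);                      // 2.09 -- always an integral number of cents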




Ah, completely true.


How else but for imprecision would you channel the half cents to rip off your soul-deadening job?



