This is what’s called a fixed-point decimal representation. If you need variable precision, then a decimal floating-point type might be a good idea, but fixed point removes a lot of potential footguns if its constraints work for you.
I meant a fixed point decimal type, 128 bits wide (like C#'s). I don't understand why the parent commenter (the top voted comment?) used unsigned integers to track individual cents. Why roll your own decimal type?
Using arbitrary precision doesn't make sense if the data needs to be stored in a database (for most situations at least). Regardless, infinite precision is magical thinking anyway: try adding Pi to your bank account without loss of precision.
The C# decimal type is not fixed point; it's a floating point implementation that just uses a base-10 exponent instead of the base-2 one that IEEE 754 binary floats use.
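To illustrate what "floating point with a base-10 exponent" means, here is a toy sketch in C (names made up, nothing like the real layout except in spirit): the value is an integer significand scaled down by a power of ten. C#'s actual decimal packs a 96-bit significand, a sign, and a scale of 0–28 into 128 bits.

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Toy base-10 float: value = significand / 10^scale.
   Purely illustrative; C#'s decimal uses a 96-bit significand. */
struct dec10 {
    int64_t significand;
    uint8_t scale;          /* decimal digits after the point */
};

static double dec10_to_double(struct dec10 d) {
    return (double)d.significand / pow(10.0, d.scale);
}

int main(void) {
    struct dec10 price = { 1999, 2 };   /* 19.99, stored exactly */
    struct dec10 rate  = { 825, 4 };    /* 0.0825, also exact */
    printf("price = %.4f, rate = %.4f\n",
           dec10_to_double(price), dec10_to_double(rate));
    return 0;                            /* compile with -lm */
}
```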
Fixed point is a general technique, commonly implemented with machine integers when the necessary precision is known at compile time. It is frequently used on embedded devices that don't have a floating point unit, to avoid slow software-based floating point implementations. Limiting the precision to $0.01 makes sense if you only do addition or subtraction. Precision of $0.001 (tenths of a cent, also called mils) may be necessary when calculating taxes or applying other percentages, although the required precision is typically called out in the relevant laws or regulations.
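A minimal sketch of that idea in C (the helper names and the rounding rule are my own choices, not anything prescribed): amounts are plain integers counting mils, and percentages are applied with integer math plus explicit rounding.

```c
#include <stdio.h>
#include <stdint.h>

/* Fixed point sketch: amounts stored as integer mils
   (thousandths of a dollar), giving $0.001 precision. */
typedef int64_t money_mils;

/* Apply a rate given in basis points (1 bp = 0.01%), rounding
   half up; adding half the divisor before dividing does the
   rounding for the positive amounts used here. */
static money_mils apply_rate_bp(money_mils amount, int64_t rate_bp) {
    return (amount * rate_bp + 5000) / 10000;
}

int main(void) {
    money_mils subtotal = 19990;                       /* $19.990 */
    money_mils tax = apply_rate_bp(subtotal, 825);     /* 8.25% tax */
    money_mils total = subtotal + tax;

    printf("tax:   $%lld.%03lld\n",
           (long long)(tax / 1000), (long long)(tax % 1000));
    printf("total: $%lld.%03lld\n",
           (long long)(total / 1000), (long long)(total % 1000));
    return 0;
}
```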
Fun fact: there is a decimal floating point type on some hardware. I believe IBM POWER, and presumably mainframes. You can actually use it from C, although it's a software implementation on most hardware. Look up IEEE 754-2008 if you are curious.
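For the curious, a small sketch of using it from C via GCC's _Decimal64 extension (requires a GCC target with decimal float support, e.g. x86-64 Linux; glibc's printf can't format these values directly, hence the comparisons instead of printing):

```c
#include <stdio.h>

int main(void) {
    /* DD suffix = _Decimal64 literal (GCC extension / ISO TS 18661-2). */
    _Decimal64 a = 0.10DD;
    _Decimal64 b = 0.20DD;
    _Decimal64 dsum = a + b;        /* exactly 0.30 in decimal */

    double bsum = 0.10 + 0.20;      /* not exactly 0.30 in binary */

    /* No portable printf conversion for _Decimal64, so compare
       against literals rather than printing the values. */
    printf("decimal 0.10 + 0.20 == 0.30 ? %s\n", dsum == 0.30DD ? "yes" : "no");
    printf("binary  0.10 + 0.20 == 0.30 ? %s\n", bsum == 0.30   ? "yes" : "no");
    return 0;
}
```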
It’s very cool, but not present on most hardware. Fixed point is a lot simpler, though, if you are dealing with something with inherent granularity like currency.