
I don't like the divisions, though, or all the analysis that comes with them.

A way that is intuitively more robust would be to generate 23 or 52 random bits from an integer RNG for the mantissa (given you trust your integer RNG is unbiased). For example, take a random uint64_t, clear the sign bit, rewrite the exponent with a predefined value, and leave the mantissa untouched. That will give you a random floating point number in a range such as [0.5, 1.0), [1.0, 2.0), or [2.0, 4.0), depending on the exponent you choose.

You can trust that the value is entirely random to the precision offered by the mantissa's size. Then you can control the accumulation of error by what you choose to do with the random number.

You still can't get to numbers that aren't representable by floating point, of course, but everything will start from the most random and unbiased value you can think of. You might have to introduce some error by subtracting the lower bound and multiplying appropriately to get to your desired range. Or, if you have the luxury to choose your desired range appropriately, you might be able to use the random value directly.



Hitting each representable floating point number with equal probability does not generate an approximately uniform distribution on [0,1). For a sufficiently small positive number eps, a uniform distribution must assign equal probability to [0, eps) and [0.5, 0.5+eps); but representable floats are far denser near zero than near 0.5, so drawing representable values uniformly at random would fail this requirement.



