Any IEEE 754 bit-boffins here? A popular way of generating a uniform random double is to do something like this:
uint u = [32 random bits];
double rand = u / (double)uint.MaxValue;
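Concretely, that's something like the following (System.Random is purely my stand-in bit source here; any RNG that yields 32 uniform bits would do):

using System;

class NaiveRandDemo
{
    static void Main()
    {
        var rng = new Random();
        var bytes = new byte[4];
        rng.NextBytes(bytes);                      // 32 random bits
        uint u = BitConverter.ToUInt32(bytes, 0);
        double rand = u / (double)uint.MaxValue;   // a value in [0, 1]
        Console.WriteLine(rand);
    }
}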
However, a double has 53 bits of significand precision (52 stored bits plus an implicit leading 1), so this scheme can only ever produce 2^32 distinct values, a tiny subset of the representable doubles in [0, 1]. I found some code which claims to combine 2 x 32-bit randoms into a fully random double, like this:
uint u1 = [32 random bits];
uint u2 = [32 random bits];
uint a = u1 >> 5, b = u2 >> 6;   // keeps the top 27 bits of u1 and the top 26 bits of u2
return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
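Wrapped up so I could actually run it (RandomFullDouble and the System.Random plumbing are my own scaffolding, not part of the original snippet):

using System;

class FullDoubleDemo
{
    static readonly Random Rng = new Random();

    // My stand-in for "[32 random bits]".
    static uint NextUInt32()
    {
        var bytes = new byte[4];
        Rng.NextBytes(bytes);
        return BitConverter.ToUInt32(bytes, 0);
    }

    // The snippet I found, wrapped in a method.
    static double RandomFullDouble()
    {
        uint u1 = NextUInt32();
        uint u2 = NextUInt32();
        uint a = u1 >> 5, b = u2 >> 6;   // 27 and 26 high bits
        return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
    }

    static void Main()
    {
        for (int i = 0; i < 5; i++)
            Console.WriteLine(RandomFullDouble());
    }
}

For what it's worth, 67108864 is 2^26 and 9007199254740992 is 2^53, so a * 67108864.0 + b appears to be a 53-bit integer, and the result a multiple of 2^-53 in [0, 1).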
Can anyone explain this magic? Is it correct, and is the result actually uniform?
Greg K