I might have missed it earlier, Greg, but what was the actual problem that this helps with? I was intrigued by the underlying problem.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile

SQL Down Under | Web: https://sqldownunder.com | About Greg: https://about.me/greg.low

From: Greg Keogh via ozdotnet <ozdotnet@ozdotnet.com>
Sent: Tuesday, 5 July 2022 8:37 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Cc: Greg Keogh <gfkeogh@gmail.com>
Subject: Re: 53-bit double

uint u1 = [32 random bits];
uint u2 = [32 random bits];
uint a = u1 >> 5, b = u2 >> 6;                             // keep the top 27 and 26 bits
return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);  // (a * 2^26 + b) * 2^-53

Can anyone explain this magic? Is this correct?
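
For anyone who wants the constants decoded: 67108864 is 2^26 and 9007199254740992 is 2^53, so the expression packs the top 27 bits of u1 and the top 26 bits of u2 into a 53-bit integer and scales it into [0,1). Here is a minimal runnable sketch, assuming C#; System.Random is only a stand-in source of random bits, and the class and method names are illustrative, not from the original code:

using System;

class TwoWordTrick
{
    static readonly Random Rng = new Random();

    // Stand-in for "[32 random bits]"; any uniform 32-bit source would do.
    static uint NextUInt32()
    {
        var bytes = new byte[4];
        Rng.NextBytes(bytes);
        return BitConverter.ToUInt32(bytes, 0);
    }

    static double NextUnitDouble()
    {
        uint a = NextUInt32() >> 5;   // top 27 bits
        uint b = NextUInt32() >> 6;   // top 26 bits
        // a * 2^26 + b is a uniform 53-bit integer; multiplying by 2^-53 maps it into [0,1).
        return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
    }

    static void Main()
    {
        for (int i = 0; i < 5; i++)
            Console.WriteLine(NextUnitDouble());
    }
}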

 

Decoding this bit trick no longer matters for my purposes. I was trying to figure out how to convert a UInt64 of random bits into a normalised double in the range [0,1) in such a way that all 2^53 possible double values could be produced. Some web sites suggested simply doing this:

ulong u = [64 random bits];
double d = (u >> 11) * 1.11022302462515654e-16;   // * 2^-53

I was suspicious that subtle rounding might produce a defective output range, but a quick experiment in LINQPad that fed limit values into the calculation showed that the complete set of significand bit patterns is generated at the extremes. That is good evidence that the shift-and-multiply above maps 53 of the 64 random bits onto all 2^53 equally spaced doubles in [0,1).
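
For reference, a minimal sketch of that sort of limit check, assuming C#; the class and method names are illustrative, not the original LINQPad script:

using System;

class LimitCheck
{
    const double TwoPowMinus53 = 1.0 / 9007199254740992.0;   // exactly 2^-53

    static double ToUnitDouble(ulong u) => (u >> 11) * TwoPowMinus53;

    static void Main()
    {
        // The all-zeros and all-ones bit patterns are the interesting limits.
        Console.WriteLine(ToUnitDouble(0UL));                  // 0.0 exactly
        Console.WriteLine(ToUnitDouble(ulong.MaxValue));       // 1 - 2^-53, the largest output
        Console.WriteLine(ToUnitDouble(ulong.MaxValue) < 1.0); // True: the range is [0,1), not [0,1]

        // Inputs differing only in the low 11 bits collapse to the same output;
        // stepping the input by 2^11 reaches the adjacent significand pattern.
        Console.WriteLine(ToUnitDouble(ulong.MaxValue - (1UL << 11))); // 1 - 2 * 2^-53
    }
}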

 

Greg K