I would like to understand how a uniform random distribution works at the bit level.
For example, in Fortran, random_number gives a uniform distribution on [0, 1). Real numbers have a mantissa and an exponent, so I wonder whether all possible numbers (at the bit level) are produced. If they are, then the numbers, considered at the bit level, cannot all have the same probability of being chosen. The other possibility is that not all representable numbers are used and the generated values are equally spaced, the spacing being the largest interval between two consecutive numbers (i.e. between the value with exponent = 0 and all mantissa bits = 1, and the value with all mantissa bits = 1 except the last, which is 0).
Are there any links that explain this?
In principle, it's easy.
A uniform random variable in (0, 1) is distributed as:
b0/2 + b1/4 + b2/8 + ...,
where the bi are unbiased random bits (zeros and ones).
This is a very old insight, dating at least from von Neumann (1951, "Various techniques used in connection with random digits").
Thus, in principle, all that's needed is to generate a steady sequence of unbiased random bits.
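As a minimal sketch of that idea in Fortran (the question's language): the hypothetical program below assembles u = b0/2 + b1/4 + b2/8 + ... from 53 bits. Purely for illustration, each bit is itself derived from random_number; a real generator would take the bits straight from its underlying bit stream.

    program bits_to_uniform
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      integer :: i, bit
      real(dp) :: u, r

      ! Assemble u = b0/2 + b1/4 + ... + b52/2**53 from unbiased bits.
      ! For illustration only, each bit is derived from random_number;
      ! a real generator would draw the bits from its own bit stream.
      call random_seed()
      u = 0.0_dp
      do i = 1, 53
         call random_number(r)
         bit = merge(1, 0, r >= 0.5_dp)   ! one (approximately) unbiased bit
         u = u + real(bit, dp) / 2.0_dp**i
      end do
      print *, 'uniform variate assembled from bits:', u
    end program bits_to_uniform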
But generating a "uniform" floating-point number in the interval (0, 1) is non-trivial by comparison. See the following, for example:
Random floating point double in Inclusive Range
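One way to see the difficulty, sketched in Fortran under the assumption that the generator produces values of the form k / 2**53 (a common recipe, not necessarily what any given random_number implementation does): such values sit on an evenly spaced grid with step 2**-53, while the floating-point grid is much finer near zero, so most representable doubles there are never produced.

    program grid_spacing_demo
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      real(dp) :: step

      ! Values built as k / 2**53 are evenly spaced 2**-53 apart, whereas the
      ! spacing of representable doubles shrinks toward zero, so most of those
      ! doubles can never be produced by that recipe.
      step = 1.0_dp / 2.0_dp**53
      print *, 'step of the k/2**53 grid       :', step
      print *, 'spacing of doubles near 1.0    :', spacing(1.0_dp)  ! 2**-52
      print *, 'spacing of doubles near 2**-53 :', spacing(step)    ! far smaller
    end program grid_spacing_demo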
To respond to your comment:
In theory, a uniform distribution on (0, 1) is the same as one on [0, 1), (0, 1], or [0, 1]: the values 0 and 1 occur with probability zero, as does any particular number in (0, 1). However, a "uniform" floating-point number on (0, 1) is not the same as one on [0, 1), (0, 1], or [0, 1], since 0 and 1 may occur with positive probability, depending on whether the interval contains them. In effect, "throwing away" the zeros and ones produced by a "uniform" floating-point generator on [0, 1] is the best that can be done to get a "uniform" floating-point number on (0, 1).
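A small sketch of that rejection step in Fortran, assuming (as stated in the question) that random_number returns values in [0, 1): discarding exact zeros leaves a variate in the open interval (0, 1).

    program open_interval
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      real(dp) :: u

      ! Assuming random_number yields values in [0, 1), reject the (rare)
      ! exact zero to obtain a variate in the open interval (0, 1).
      do
         call random_number(u)
         if (u > 0.0_dp) exit
      end do
      print *, 'uniform variate in (0, 1):', u
    end program open_interval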