Why do we set epsilon = 1 and divide by 2 when writing a program to determine "machine precision" in Python?


So I already have my program to determine my "machine precision" or my epsilon:

    epsilon = 1
    while 1 + epsilon != 1:
        epsilon_2 = epsilon
        epsilon /= 2
    print(epsilon_2)

and I have already confirmed that it is correct. My problem is that I don't understand why we set epsilon = 1 on line 1 and why we divide by 2 on line 4. I tested different values both for epsilon and for the divisor, and I got different results almost every time.

1 Answer
Prune (accepted answer):

This is to figure out how many bits of precision your machine handles. It presumes binary representation. In this code, epsilon takes on the binary values 1, 0.1, 0.01, 0.001, ... Each of these is stored in a form roughly equivalent to (binary) 0.1 * 2**N, where N is the appropriate integer. The mantissa is always (binary) 0.1; the characteristic (exponent) simply decreases by 1 on each pass through the loop.
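To make that concrete, here is a minimal sketch (not part of the program above) using math.frexp, which splits a float into its mantissa and exponent; it shows the mantissa staying fixed at 0.5 (binary 0.1) while only the exponent changes as epsilon is halved:

    import math

    # math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1.
    epsilon = 1.0
    for _ in range(5):
        mantissa, exponent = math.frexp(epsilon)
        print(epsilon, mantissa, exponent)
        epsilon /= 2

Every line prints the same mantissa, 0.5; only the exponent drops by 1, which is exactly the "characteristic shifts by 1" behaviour described above.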

    epsilon = 1
    while 1 + epsilon != 1:

This addition forces the execution unit (EU) to re-express epsilon with a characteristic (power of 2) of 0 so that it can be added to 1. Otherwise, standard float representation stores the number in exponential form, simply adjusting the characteristic so that the mantissa (the significant part of the number) is normalized to the largest fraction < 1.0.
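Concretely, a tiny value is perfectly representable on its own; it is only when it is aligned with 1 for the addition that its bits can fall off the end. A minimal sketch, assuming IEEE 754 doubles (which CPython floats are on essentially all platforms):

    # 2**-53 is a perfectly ordinary, nonzero float on its own ...
    tiny = 2**-53
    print(tiny)              # 1.1102230246251565e-16
    # ... but once it is aligned with 1.0 for the addition, its single set bit
    # falls below the end of the 53-bit significand and is rounded away.
    print(1 + tiny == 1)     # True
    print(1 + 2**-52 == 1)   # False: one bit higher still survives the addition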

The while condition checks whether that normalization has caused us to lose precision: has epsilon become effectively 0 relative to 1, given the machine's precision? If not ...

    epsilon_2 = epsilon
    epsilon /= 2

Remember the most recent value for which there was a difference. Shift epsilon one bit to the right (in pencil & paper terms; in computer terms this merely reduces the characteristic by 1) and repeat the loop.
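This is also where the choice of 2 as the divisor matters: halving a binary float is exact, since only the characteristic changes, whereas dividing by something that is not a power of two produces values that no longer step down one exponent at a time, so the loop lands on a different final value. A rough sketch of that comparison, using a hypothetical helper (last_noticeable is not a standard name, just an illustration):

    def last_noticeable(divisor):
        # Same loop as above, but with the divisor as a parameter.
        epsilon = 1.0
        while 1 + epsilon != 1:
            epsilon_2 = epsilon
            epsilon /= divisor
        return epsilon_2

    print(last_noticeable(2))   # 2.220446049250313e-16 on IEEE 754 doubles
    print(last_noticeable(3))   # a different, divisor-dependent value

That divisor-dependence is the effect you saw when you experimented with other values.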

    print(epsilon_2)

When you finally reach the point where the normalized epsilon contributes nothing, so that 1 + epsilon evaluates back to exactly 1 and the arithmetic breaks down, print the previous value: the last one for which the addition still made a difference.
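For a quick confirmation (a sketch assuming CPython, whose floats are IEEE 754 doubles), the loop's result matches what the standard library reports as the machine epsilon:

    import sys

    epsilon = 1.0
    while 1 + epsilon != 1:
        epsilon_2 = epsilon
        epsilon /= 2

    print(epsilon_2)               # 2.220446049250313e-16
    print(sys.float_info.epsilon)  # 2.220446049250313e-16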

Does that clear it up?