Numpy - Difference between two floats vs float type's precision


I was looking at numpy.finfo and did the following:

In [14]: np.finfo(np.float16).resolution
Out[14]: 0.0010004
In [16]: np.array([0., 0.0001], dtype=np.float16)
Out[16]: array([ 0.        ,  0.00010002], dtype=float16)

It seems that the array is able to store two numbers whose difference is 10 times smaller than the type's resolution. Am I missing something?


There are 2 answers

Best answer, by mwiebe:

Floating-point numbers have a fixed amount of resolution after the leading digit. What this number tells you is the resolution of the type when the leading digit is at the 1.0 position. You can see this by trying to add smaller and smaller amounts to 1.0:

In [8]: np.float16(1) + np.float16(0.001)
Out[8]: 1.001

In [9]: np.float16(1) + np.float16(0.0001)
Out[9]: 1.0

This is related to the nextafter function, which gives the next representable number after the given one. Taking the difference between 1.0 and that next representable value gives approximately this resolution:

In [10]: np.nextafter(np.float16(1), np.float16(np.inf)) - np.float16(1)
Out[10]: 0.00097656
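
(An illustrative sketch added here, not part of the original answer.) The resolution is relative to the magnitude of the number, so the gap between adjacent float16 values near 0.0001 is far smaller than the gap near 1.0, which is why the array in the question can hold 0.0001:

import numpy as np

# Gap to the next representable float16 value at two different magnitudes.
x = np.array([1.0, 0.0001], dtype=np.float16)
gap = np.nextafter(x, np.float16(np.inf)) - x
print(gap)  # roughly [1e-03, 6e-08]: the spacing shrinks along with the magnitude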

Answer by asimoneau:

From what I understand, the precision is the number of significant decimal digits you can have. But since floats are stored with an exponent, you can have a number smaller than the resolution. Try np.finfo(np.float16).tiny; it should give you 6.1035e-05, which is far smaller than the resolution. But the significand of that number still has a relative resolution of ~0.001. Note that all of the limits in finfo are approximate, because the binary representation does not map exactly onto decimal limits.
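
As a rough sketch of this point (my addition, reusing the finfo and nextafter calls from above): tiny is well below the resolution, yet the relative step size around tiny is still about one part in a thousand:

import numpy as np

info = np.finfo(np.float16)
print(info.resolution)  # ~0.001: the approximate decimal resolution of float16
print(info.tiny)        # ~6.1035e-05: smallest normal float16, far below the resolution

# Even around tiny, the step to the next representable value is still ~0.1% of the value.
t = np.float16(info.tiny)
step = np.nextafter(t, np.float16(np.inf)) - t
print(step / t)         # roughly 0.001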