Why is 5 != 5.0000000000000000001 False, but 5 != 5.00000001 True?


In Python 3.5, I was just playing around with comparison operators and came across this (seeming) oddity.

Is there a threshold on the number of zeros after the decimal point past which the interpreter decides the extra digits are no longer significant, because of the inherent inaccuracy of floating-point values?


1 Answer

Answer by Dimitris Fasarakis Hilliard (best answer):

In short, during the phase where the parsing of input is done, Python needs to transform your input to a C double, which can then be transformed to a Python float. Inputs with more than 16 significant decimal digits are going to get approximated, with 5.0000000000000001 getting rounded to 5.0:

>>> 5.0000000000000001 
5.0
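One way to confirm what the parser actually stored is decimal.Decimal, which converts a float to its exact binary value:

>>> from decimal import Decimal
>>> Decimal(5.0000000000000001) == Decimal(5)   # the literal was rounded to exactly 5.0
True
>>> Decimal(5.000000000000001) == Decimal(5)    # one digit fewer: a double distinct from 5.0
False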

As a result, the comparison 5 == 5.0000000000000001 is going to succeed (5 is going to get transformed to a Python float equal to 5.0 in order for the comparison to take place).
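Putting the two literals from the title through the same comparison shows the whole effect:

>>> 5 != 5.0000000000000000001   # right side already parsed to 5.0, so the values are equal
False
>>> 5 != 5.00000001              # an offset of 1e-8 survives rounding, so the values differ
True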

For fewer digits than that, the nearest double is distinct from 5.0 and the result speaks for itself:

>>> 5.000000000000001
5.000000000000001
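The cutoff is simply the spacing between adjacent doubles near 5.0, which you can inspect directly with math.ulp (available from Python 3.9; the arithmetic itself behaves the same on 3.5). Any offset smaller than half an ulp rounds back to 5.0:

>>> import math
>>> math.ulp(5.0)        # spacing between adjacent doubles near 5.0
8.881784197001252e-16
>>> 5.0 + 1e-15 == 5.0   # 1e-15 is more than half an ulp: a distinct double
False
>>> 5.0 + 1e-16 == 5.0   # 1e-16 is under half an ulp: rounds back to 5.0
True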

Yes, float_richcompare unfortunately has nothing to do with this behavior, contrary to what I thought in my original comment on the question. It all happens before it gets invoked.
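You can verify that the rounding happens while the literal is being constructed, before any comparison code runs; converting the raw string with float() or evaluating it with ast.literal_eval already yields the rounded value:

>>> float("5.0000000000000000001")              # string-to-double conversion does the rounding
5.0
>>> import ast
>>> ast.literal_eval("5.0000000000000000001")   # same story when the literal is compiled
5.0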