Does Python not follow IEEE-754 in case of division by zero?

Here's a toy example in which Python exhibits behaviour that surprised me:

def harmonic_mean(x, y):
    return 2/(1/x+1/y)

print(harmonic_mean(0.0, 1.0))

Under IEEE-754, division by 0.0 shouldn't halt execution; it should just produce inf (or -inf, or NaN in the case of 0.0/0.0). Because of that, the above code should work even if one of the arguments is 0.0. However, it instead halts with ZeroDivisionError: float division by zero. Does that mean that Python doesn't follow the IEEE-754 standard even on a platform that supports it? What's the motivation for making division by 0.0 cause a runtime error instead of producing the result the standard prescribes? Do I have a way around that other than explicitly checking for zeros? I know that in this particular example I could instead, e.g., use the following:

def harmonic_mean(x, y):
    return 2*x*y/(x+y)

However, I would like to know what I can do more generally.

EDIT: I'll add that one reason I was surprised is that I had already seen NumPy behave mostly as I expected in the case of division by zero. As an example, consider np.array([-1.0, 0.0, 1.0])/0.0. The division by zero causes a warning reading RuntimeWarning: divide by zero encountered in divide to be printed, but the result is still [-inf nan inf], as I would've expected.
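
For reference, a minimal reproduction of that behaviour (a sketch; on recent NumPy versions the 0.0/0.0 entry additionally triggers an "invalid value" warning):

import numpy as np

print(np.array([-1.0, 0.0, 1.0]) / 0.0)
# RuntimeWarning: divide by zero encountered in divide
# RuntimeWarning: invalid value encountered in divide
# [-inf  nan  inf]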

There are 4 answers

Sneftel (best answer, score 1)

From the IEEE 754-2019 standard:

exception: An event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application. That operation might signal an exception by invoking default exception handling or alternate exception handling. Exception handling might signal further exceptions. Recognize that event, exception, and signal are defined in diverse ways in different programming environments.

In other words, while continuing execution as normal with an Inf or NaN result would be a compliant response to attempted division by zero, so would a non-local return, as is the case with Python.

The motivation for this is the assumption that a division of this sort is probably accidental, and that (all things considered) it would be better to immediately signal an error than to allow the result to cause more subtle errors later on. If this doesn't fit your needs, you can use numpy for different semantics.
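
For example, NumPy's errstate context manager lets you pick, per block, how floating-point errors are handled; a minimal sketch:

import numpy as np

# 'ignore' suppresses the warnings; 'raise' would turn them into exceptions.
with np.errstate(divide='ignore', invalid='ignore'):
    print(np.array([-1.0, 0.0, 1.0]) / 0.0)  # [-inf  nan  inf], no warnings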

From The Zen of Python:

Errors should never pass silently.
Unless explicitly silenced.

To that point, '754 also recommends that implementations allow the user to modify the semantics of exception handling, such as to pass without error, log the error elsewhere, and/or substitute some arbitrary result. But this is not a requirement of the '754 standard: it uses the word "should", which it defines elsewhere as indicating something "particularly suitable" or "preferred but not necessarily required".
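
Core Python exposes no such switch for built-in floats, but the non-trapping default can be emulated with a wrapper. A minimal sketch (ieee_div is a hypothetical helper, not part of the standard library):

import math

def ieee_div(x, y):
    # Emulate IEEE-754 default (non-trapping) division for Python floats.
    try:
        return x / y
    except ZeroDivisionError:
        # CPython raises whenever the divisor is 0.0, even for nan/0.0.
        if x == 0.0 or math.isnan(x):
            return math.nan  # 0.0/0.0 -> nan
        # The sign of the infinity is the XOR of the operands' signs.
        return math.copysign(math.inf, x) * math.copysign(1.0, y)

print(ieee_div(1.0, 0.0))   # inf
print(ieee_div(-1.0, 0.0))  # -inf
print(ieee_div(0.0, 0.0))   # nan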

mkrieger1 (score 10)

This is discussed, for example, here or here.

According to those discussions, Python does follow IEEE-754: signalling overflows or singularities by raising an exception is one of the forms of exception handling the standard allows.

Blue Spider (score 0)

Under IEEE-754, dividing a non-zero number by zero should yield infinity (inf or -inf, depending on the sign), and 0.0 / 0.0 should produce NaN (Not a Number).

However, Python raises a ZeroDivisionError exception for float division by zero to alert the programmer to a potential problem, rather than silently returning inf, -inf, or NaN.

This is a design choice; it does not indicate non-compliance with IEEE-754, but rather a preference for explicit errors.

In practice, Python does follow IEEE-754 when it comes to the actual representation and behavior of floating-point numbers, including the handling of special values like infinity and NaN.

positive_infinity = float('inf')
negative_infinity = float('-inf')

print(positive_infinity)  # Output: inf
print(negative_infinity)  # Output: -inf
# Note: 0.0 / 0.0 raises ZeroDivisionError, so produce nan another way:
print(positive_infinity / positive_infinity)  # Output: nan

print(positive_infinity + 1)  # Output: inf
print(positive_infinity * -1)  # Output: -inf

print(positive_infinity > 1000000)  # Output: True
print(negative_infinity < -1000000)  # Output: True

print(positive_infinity - positive_infinity)  # Output: nan
print(0.0 * positive_infinity)  # Output: nan

Tim Peters (score 3)

The newer decimal module can be configured to conform with the later base-independent generalizations of 754, but Python float semantics cannot be.

Python 3.12.2 (tags/v3.12.2:6abddd9, ...
...
>>> import decimal
>>> decimal.Decimal(1) / 0 # by default, this also raises
Traceback (most recent call last):
  ...
decimal.DivisionByZero: [<class 'decimal.DivisionByZero'>]
>>> decimal.getcontext().traps[decimal.DivisionByZero] = False
>>> decimal.Decimal(1) / 0 # but, as above, can be turned off
Decimal('Infinity')
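
Applied to the question's example, a minimal sketch (harmonic_mean is the function from the question; only the DivisionByZero trap needs disabling for these inputs):

from decimal import Decimal, DivisionByZero, getcontext

getcontext().traps[DivisionByZero] = False  # stop trapping, as in the session above

def harmonic_mean(x, y):
    return 2/(1/x+1/y)

print(harmonic_mean(Decimal(0), Decimal(1)))  # 1/0 -> Infinity, so this prints 0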

Python predates widespread support for 754 semantics in C implementations, and backward compatibility in this area has always been considered more important. Core Python doesn't claim to conform to any floating-point standard (although the decimal module does).

In much earlier days, some people introduced layers of cruft to try to make this optional, but, at the time, there was such massive variation among the many C environments in use (CPython is coded in C, and inherits a lot from platform C semantics) that people gave up.

It would probably be easier to do today, but it seems nobody cares enough to volunteer the effort.