According to http://msdn.microsoft.com/en-us/library/system.dividebyzeroexception.aspx, only integer and decimal operations throw a DivideByZeroException when you divide by 0; when you divide a floating-point value by 0, the result is infinity, negative infinity, or NaN. Why is this? And what are some examples where the result is positive infinity, negative infinity, or NaN?
Why do int and decimal throw DivideByZeroException but floating point doesn't?
1.1k views · Asked by user2261573 · There are 3 answers
The floating point engine built into the processor is quite capable of generating exceptions for float division by zero. Windows has a dedicated exception code for it, STATUS_FLOAT_DIVIDE_BY_ZERO, exception code 0xC000008E, "Floating-point division by zero", as well as codes for the other mishaps that the FPU can report, like overflow, underflow, inexact results, and denormal operands.
Whether the FPU actually raises these exceptions is determined by its control register; programs can alter this register with a helper function like _controlfp(). Libraries created with Borland tools routinely did this, for example, unmasking these exceptions.
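To make the mechanism concrete, here is a hedged, Windows-only C# sketch of what such a library effectively does via `_controlfp` (P/Invoked from the C runtime; the `_EM_*` constant values mirror MSVC's `<float.h>`, and the DLL name is an assumption that varies by runtime version):

```csharp
using System.Runtime.InteropServices;

class UnmaskFloatExceptions
{
    // Values from MSVC's <float.h>; treat them as illustrative constants here.
    const uint _MCW_EM = 0x0008001F;        // all exception-enable mask bits
    const uint _EM_ZERODIVIDE = 0x00000008; // the divide-by-zero mask bit

    // Windows-only: _controlfp lives in the C runtime.
    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern uint _controlfp(uint newControl, uint mask);

    static void UnmaskDivideByZero()
    {
        // Clearing the _EM_ZERODIVIDE bit unmasks the exception: a float
        // division by zero now traps (STATUS_FLOAT_DIVIDE_BY_ZERO) instead
        // of quietly producing infinity. Note this is process-global state,
        // which is exactly the problem the answer describes.
        _controlfp(0, _EM_ZERODIVIDE);
    }
}
```

Because the control register is shared by every piece of code in the process, one library calling something like this changes the floating-point behavior of all the others.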
This has not worked out well, to put it mildly. It is the worst possible global variable you can imagine. Mixing such libraries with others that expect a division by zero to generate infinity instead of an exception just does not work and is next to impossible to deal with.
Accordingly, it is now the norm for language runtimes to mask all floating point exceptions. The CLR insists on this as well.
Dealing with a library that unmasks these exceptions is tricky, but there is a silly-looking workaround: throw an exception and catch it again. The exception-handling code inside the CLR resets the control register. An example of this is shown in this answer.
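The workaround described above can be sketched as follows (assuming, per the answer, that the CLR's exception machinery restores the FPU control word as a side effect of handling any managed exception):

```csharp
using System;

class ResetFpuControlWord
{
    // If a native library has unmasked floating-point exceptions, dividing a
    // double by zero would trap instead of producing infinity. Throwing and
    // catching any managed exception makes the CLR reset the FPU control
    // register back to its defaults.
    public static void ResetFloatingPointState()
    {
        try
        {
            throw new Exception("reset FPU control word");
        }
        catch
        {
            // Swallowed on purpose; the side effect we want is the CLR
            // restoring the control register while handling the exception.
        }
    }

    static void Main()
    {
        ResetFloatingPointState();
        // Back to IEEE 754 behavior: no trap, just a special value.
        Console.WriteLine(double.IsInfinity(1.0 / 0.0)); // True
    }
}
```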
Michael's answer is of course correct. Here's another way to look at it.
Integers are exact. When you divide seven by three in integers you are actually asking the question "how many times can I subtract three from seven before I'd have to go into negative numbers?". Division by zero is undefined because there is no number of times you can subtract zero from seven to get something negative.
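The "repeated subtraction" view of integer division can be sketched in C# (illustrative only; this is how to think about the definition, not how the hardware divides):

```csharp
using System;

class RepeatedSubtraction
{
    // Quotient of a nonnegative a divided by b, phrased as the answer to
    // "how many times can I subtract b from a before I'd go negative?"
    public static int Quotient(int a, int b)
    {
        if (b == 0)
            throw new DivideByZeroException(); // no count of subtractions ever works
        int count = 0;
        while (a - b >= 0)
        {
            a -= b;
            count++;
        }
        return count;
    }

    static void Main()
    {
        Console.WriteLine(Quotient(7, 3)); // 2
        // Quotient(7, 0) has to throw: subtracting zero from seven never
        // brings it any closer to the negative numbers, so the loop would
        // never terminate and no answer exists.
    }
}
```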
Floats are by their nature inexact. They have a certain amount of precision and you are best to assume that the "real" quantity is somewhere between the given float and a float near it. Moreover, floats usually represent physical quantities, and those have measurement error far larger than the representation error. I think of a float as a fuzzy smeared-out region surrounding a point.
So when you divide seven by zero in floats, think of it as dividing some number reasonably close to seven by some number reasonably close to zero. Clearly a number reasonably close to zero can make the quotient arbitrarily large! And therefore this is signaled to you by giving infinity as the answer; this means that the answer could be arbitrarily large, depending on where the true value actually lies.
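To answer the question's request for concrete examples, here is a small C# sketch (not from the original answers) showing which divisions throw and which produce the three special values:

```csharp
using System;

class DivideByZeroDemo
{
    static void Main()
    {
        // Floating point: no exception, IEEE 754 special values instead.
        double posInf = 1.0 / 0.0;   // positive over zero: positive infinity
        double negInf = -1.0 / 0.0;  // negative over zero: negative infinity
        double nan    = 0.0 / 0.0;   // zero over zero: NaN, quotient is indeterminate

        Console.WriteLine(double.IsPositiveInfinity(posInf)); // True
        Console.WriteLine(double.IsNegativeInfinity(negInf)); // True
        Console.WriteLine(double.IsNaN(nan));                 // True

        // Integer division by zero throws instead. (A variable is used because
        // a constant integer division by zero is a compile-time error in C#.)
        int zero = 0;
        try
        {
            int q = 7 / zero;
        }
        catch (DivideByZeroException e)
        {
            Console.WriteLine(e.GetType().Name); // DivideByZeroException
        }
    }
}
```

The same `DivideByZeroException` is thrown for `decimal` operands; only `float` and `double` get the special-value behavior.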
The IEEE standards committee felt that exception handling was more trouble than it was worth, for the range of code that could encounter these kinds of issues with floating-point math.
This may seem strange to a developer accustomed to a language in which exception handling is deeply baked in, like C#. The developers of the IEEE 754 standard were thinking about a broader range of implementations (embedded systems, for instance), where such facilities aren't available, or aren't desirable.
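In the non-exceptional style that IEEE 754 encourages, code lets the operation produce a special value and inspects the result afterwards, rather than unwinding through a handler. A minimal C# sketch (the `SafeRatio` helper and its fallback parameter are illustrative, not a library API):

```csharp
using System;

class CheckAfterwards
{
    // IEEE 754 style: perform the division unconditionally, then test the
    // result for special values instead of catching an exception.
    public static double SafeRatio(double numerator, double denominator, double fallback)
    {
        double q = numerator / denominator;
        return (double.IsNaN(q) || double.IsInfinity(q)) ? fallback : q;
    }

    static void Main()
    {
        Console.WriteLine(SafeRatio(7.0, 2.0, 0.0)); // 3.5
        Console.WriteLine(SafeRatio(7.0, 0.0, 0.0)); // 0 (fallback; quotient was infinite)
        Console.WriteLine(SafeRatio(0.0, 0.0, 0.0)); // 0 (fallback; quotient was NaN)
    }
}
```

This pattern works even on targets with no exception machinery at all, which is the situation the standard's authors had in mind.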