How do I detect total loss of precision with Doubles?


When I run the following code, I get 0 printed on both lines:

Double a = 9.88131291682493E-324;
Double b = a*0.1D;
Console.WriteLine(b);
Console.WriteLine(BitConverter.DoubleToInt64Bits(b));

I would expect to get Double.NaN if an operation's result goes out of range. Instead I get 0. It looks like, to detect when this happens, I have to check:

  • Before the operation check if any of the operands is zero
  • After the operation, if neither operand was zero, check whether the result is zero. If it isn't, carry on. If it is, assign Double.NaN to it instead, to indicate that it's not really a zero, just a result that can't be represented in this variable.
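The two checks above could be wrapped in a small helper; this is just a sketch of the workaround described, not a built-in API (the name MultiplyOrNaN is made up):

```csharp
using System;

static class UnderflowGuard
{
    // Multiplies a and b, but returns NaN when the product underflows
    // to zero even though neither operand was zero.
    public static double MultiplyOrNaN(double a, double b)
    {
        double result = a * b;
        if (result == 0.0 && a != 0.0 && b != 0.0)
            return double.NaN; // total loss of precision
        return result;
    }
}

class Program
{
    static void Main()
    {
        double a = 9.88131291682493E-324;
        Console.WriteLine(UnderflowGuard.MultiplyOrNaN(a, 0.1)); // NaN
        Console.WriteLine(UnderflowGuard.MultiplyOrNaN(2.0, 3.0)); // 6
    }
}
```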

That's rather unwieldy. Is there a better way? What is Double.NaN designed for? I assume some operations must return it; surely the designers didn't put it there just in case. Is it possible that this is a bug in the BCL? (I know it's unlikely, but that's why I'd like to understand how Double.NaN is supposed to work.)

Update

By the way, this problem is not specific to double; decimal exhibits it as well:

Decimal a = 0.0000000000000000000000000001m;
Decimal b = a * 0.1m;
Console.WriteLine(b);

That also gives zero.

In my case I need double, because I need the range it provides (I'm working on probabilistic calculations) and I'm not that worried about precision.

What I need, though, is to be able to detect when my results stop meaning anything, that is, when calculations drive the value so low that it can no longer be represented by a double.

Is there a practical way of detecting this?


There are 2 answers

DrKoch (best answer)

All you need is epsilon.

This is a "small number": a threshold below which you are no longer interested in the value.

You could use:

double epsilon = 1E-50;

and whenever one of your factors gets smaller than epsilon, you take action (for example, treat it as 0.0).
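A minimal sketch of that idea (the threshold value and the helper name IsEffectivelyZero are just illustrative):

```csharp
using System;

class EpsilonDemo
{
    // Anything with magnitude below this is treated as zero.
    const double Epsilon = 1E-50;

    static bool IsEffectivelyZero(double x) => Math.Abs(x) < Epsilon;

    static void Main()
    {
        Console.WriteLine(IsEffectivelyZero(1E-60)); // True: below the threshold
        Console.WriteLine(IsEffectivelyZero(0.5));   // False
    }
}
```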

Luaan

Double works exactly according to the floating-point specification, IEEE 754. So no, it's not a bug in the BCL - it's just the way IEEE 754 floating-point numbers work.

The reason, of course, is that it's not what floats are designed for at all. Instead, you might want to use decimal, which is a precise decimal number, unlike float/double.

There are a few special values in floating-point numbers, with different meanings:

  • Infinity - e.g. 1f / 0f.
  • -Infinity - e.g. -1f / 0f.
  • NaN - e.g. 0f / 0f or Math.Sqrt(-1)
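These values can be produced and tested for directly; a small demonstration:

```csharp
using System;

class SpecialValues
{
    static void Main()
    {
        double inf = 1.0 / 0.0;   // floating-point division by zero: Infinity
        double ninf = -1.0 / 0.0; // -Infinity
        double nan = 0.0 / 0.0;   // NaN; Math.Sqrt(-1) produces NaN as well

        Console.WriteLine(double.IsPositiveInfinity(inf));  // True
        Console.WriteLine(double.IsNegativeInfinity(ninf)); // True
        Console.WriteLine(double.IsNaN(nan));               // True
        Console.WriteLine(nan == nan);                      // False: NaN never equals itself
    }
}
```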

However, as the commenters below noted, while decimal does in fact check for overflow, getting too close to zero is not considered an overflow, just as with binary floating-point numbers. So if you really need to check for this, you will have to write your own * and / methods. With decimal, you shouldn't really care, though.

If you need this kind of precision for multiplication and division (that is, you want your divisions to be reversible by multiplication), you should probably use rational numbers instead - two integers (big integers if necessary). And use a checked context - that will produce an exception on overflow.
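A sketch of the rational-number idea using System.Numerics.BigInteger (the Rational type here is hypothetical, not part of the BCL):

```csharp
using System;
using System.Numerics;

// Minimal rational-number sketch: BigInteger numerator and denominator
// cannot underflow, so multiplying by 1/10 stays exact and reversible.
struct Rational
{
    public BigInteger Num;
    public BigInteger Den;

    public Rational(BigInteger num, BigInteger den) { Num = num; Den = den; }

    public static Rational operator *(Rational a, Rational b)
        => new Rational(a.Num * b.Num, a.Den * b.Den);
}

class Program
{
    static void Main()
    {
        // Roughly the question's value: 1 / 10^324, near the bottom of double's range
        var tiny = new Rational(1, BigInteger.Pow(10, 324));
        var tenth = new Rational(1, 10);

        var product = tiny * tenth; // exactly 1 / 10^325, no silent zero

        Console.WriteLine(product.Num);                            // 1
        Console.WriteLine(product.Den == BigInteger.Pow(10, 325)); // True
    }
}
```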

IEEE 754 does in fact handle underflow. There are two problems:

  • The return value is 0 (or -0 for negative underflow). The exception flag for underflow is set, but there's no way to get at it in .NET.
  • This only occurs when precision is lost because you get too close to zero. But you lost most of your precision long before that. Whatever "precise" number you had is long gone - the operations are not reversible, and they are not precise.

So if you really do care about reversibility etc., stick to rational numbers. Neither decimal nor double will work, C# or not. If you're not that precise, you shouldn't care about underflow anyway - just pick the lowest reasonable number and declare anything under that "invalid"; make sure you're far away from the actual maximum precision - double.Epsilon will not help, obviously.