C# denormalized floating point: is the zero literal 0.0f slow?


I just read about denormalized floating point numbers. Should I replace all zero literals with an almost-zero literal to get better performance?

I am afraid that the evil zero constants in my code could pollute my performance. Example:

Program 1:

float a = 0.0f;
Console.WriteLine(a);

Program 2:

float b = 1.401298E-45f;
Console.WriteLine(b);

Shouldn't program 2 be 1,000,000 times faster than program 1, since b can be represented by the IEEE floating point representation in canonical form, whereas program 1 has to work with "zero", which is not directly representable?

If so, the whole software development industry is flawed. A simple field declaration:

float c;

would automatically initialize it to zero, which would cause the dreaded performance hit.

Spare me the hassle of "premature optimization is the root of all evil, blah blah". Delayed knowledge of how compiler optimizations work could result in the explosion of a nuclear plant. So I would like to know ahead of time what I am paying, so that I know when I am safe to ignore optimizing it.

P.S. I don't care if a float becomes denormalized as the result of a mathematical operation; I have no control over that, so I don't care.

Proof that x + 0.1f is 10 times faster than x + 0: Why does changing 0.1f to 0 slow down performance by 10x?

Question synopsis: is 0.0f evil? And are all who used it as a constant also evil?


Accepted answer (Sneftel):

There's nothing special about denormals that makes them inherently slower than normalized floating point numbers. In fact, an FP system which only supported denormals would be plenty fast, because it would essentially only be doing integer operations.

The slowness comes from the relative difficulty of certain operations when performed on a mix of normals and denormals. Adding a normal to a denormal is much trickier than adding a normal to a normal, or adding a denormal to a denormal. The machinery of computation is simply more involved and requires more steps. Because most of the time you're only operating on normals, it makes sense to optimize for that common case and drop into the slower, more general normal/denormal implementation only when that doesn't work.
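If you want to see that fast-path/slow-path split on your own machine, here is a rough micro-benchmark sketch. The class name, loop count and operand values are arbitrary choices of mine, and the result depends on your CPU and runtime; a configuration that flushes denormals to zero may show no difference at all.

using System;
using System.Diagnostics;

class MixedOperandTiming
{
    // Runs the same loop twice: once with a small normal operand, once with a
    // denormal operand (float.Epsilon, i.e. 1.401298E-45f). On CPUs that take a
    // microcode assist for denormal operands or results, the second run is
    // noticeably slower.
    static double TimeLoop(float operand)
    {
        float sum = 0f;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 50000000; i++)
        {
            sum += operand;   // normal + normal vs. (near-)zero + denormal
            sum *= 0.25f;     // keep the running value tiny so it never grows into a large normal
        }
        sw.Stop();
        Console.WriteLine(sum);              // keep the result live so the loop isn't eliminated
        return sw.Elapsed.TotalMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("normal operand:   " + TimeLoop(1e-30f) + " ms");
        Console.WriteLine("denormal operand: " + TimeLoop(float.Epsilon) + " ms");
    }
}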

The exception to denormals being unusual, of course, is 0.0, which is a denormal with a zero mantissa. Because 0 is the sort of thing one often finds and does operations on, and because an operation involving a 0 is trivial, those are handled as part of the fast common case.
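To make that encoding concrete, here is a small sketch (the helper and the test values are mine, not from the question) that pulls a float apart into its IEEE 754 fields: denormals and 0.0f share an all-zero exponent field, and 0.0f is the one whose mantissa is also zero.

using System;

class FloatClassifier
{
    // Inspects the raw IEEE 754 bits of a single-precision float.
    static string Classify(float f)
    {
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        int exponent = (bits >> 23) & 0xFF;   // 8-bit biased exponent field
        int mantissa = bits & 0x7FFFFF;       // 23-bit mantissa field

        if (exponent == 0)
            return mantissa == 0 ? "zero" : "denormal";
        return exponent == 0xFF ? "infinity or NaN" : "normal";
    }

    static void Main()
    {
        Console.WriteLine(Classify(0.0f));           // zero
        Console.WriteLine(Classify(1.401298E-45f));  // denormal (this is float.Epsilon)
        Console.WriteLine(Classify(1.0f));           // normal
    }
}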

I think you've misunderstood what's going on in the answer to the question you linked. The 0 isn't by itself making things slow: despite being technically a denormal, operations on it are fast. The denormals in question are the ones stored in the y array after a sufficient number of loop iterations. The advantage of the 0.1 over the 0 is that, in that particular code snippet, it prevents numbers from becoming nonzero denormals, not that it's faster to add 0.1 than 0.0 (it isn't).
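Here is a rough sketch of that mechanism. It is not the code from the linked question; the loop counts and the seed value are invented, and the measured ratio depends on your hardware. Repeatedly halving a value walks it down through the denormal range, while adding and then subtracting 0.1f rounds the tiny value away, so it snaps to exactly 0.0f and stays on the fast path.

using System;
using System.Diagnostics;

class DenormalDecay
{
    static double Run(float offset)
    {
        var y = new float[16];
        var sw = Stopwatch.StartNew();

        for (int pass = 0; pass < 200000; pass++)
        {
            // Reseed near the bottom of the normal range so every pass walks
            // back down through (or past) the denormal range.
            for (int j = 0; j < y.Length; j++)
                y[j] = 1e-37f;

            for (int step = 0; step < 40; step++)
            {
                for (int j = 0; j < y.Length; j++)
                {
                    y[j] = y[j] * 0.5f;    // value decays toward zero
                    y[j] = y[j] + offset;  // with offset == 0.1f the tiny value is rounded away...
                    y[j] = y[j] - offset;  // ...leaving exactly 0.0f instead of a nonzero denormal
                }
            }
        }

        sw.Stop();
        Console.WriteLine(y[0]);           // keep the computation observable
        return sw.Elapsed.TotalMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("offset 0.0f: " + Run(0.0f) + " ms");
        Console.WriteLine("offset 0.1f: " + Run(0.1f) + " ms");
    }
}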