I'm using Visual C++ 2008 Express Edition, and when I debug this code:
double x = 0.2;
I see 0.20000000000000001 in the debugger tooltip for x,
but:
typedef numeric_limits< double > double_limit;
int a = double_limit::digits10;
gives me: a = 15
Why are the results in the debugger longer than the normal C++ precision? What is this strange precision based on?
My CPU is an Intel Core 2 Duo T7100.
What you are seeing is caused by the fact that real numbers (read: floating-point numbers) cannot be expressed with perfect precision and accuracy in binary computers. This is a fact of life. Instead, computers approximate the value and store it in memory in a defined format.
In the case of most modern machines (including any machine you're running MSVC Express on), this format is IEEE 754.
Long story short, this is how real numbers are stored in IEEE 754: there is one sign bit, 8 exponent bits, and 23 fraction bits for the float data type; a double uses more bits accordingly (1 sign, 11 exponent, and 52 fraction bits), but the format is the same. Because of this, you can never achieve perfect precision and accuracy. Fortunately, you can achieve plenty of accuracy and precision for almost any application, including critical financial systems and scientific systems.

You don't need to know everything there is to know about IEEE 754 in order to be able to use floating-point values in your code. But there are a few things you must know:
1) You can never compare two floating-point values for equality because of the rounding error inherent in floating-point calculation and storage. Instead, you must do something like this:
2) Rounding errors compound. The more operations you perform on a floating-point value, the greater the loss of precision.
3) You cannot add two floating-point values of vastly different magnitude. For example, you can't add 1.5x10^30 and 1.5x10^-30 and expect 60 digits of precision; a double carries only about 15-16 significant decimal digits, so the smaller value is lost entirely.