The Wikipedia article on Double-precision floating-point format says:
> This gives 15–17 significant decimal digits precision. If a decimal string with at most 15 significant digits is converted to IEEE 754 double precision representation and then converted back to a string with the same number of significant digits, then the final string should match the original. If an IEEE 754 double precision is converted to a decimal string with at least 17 significant digits and then converted back to double, then the final number must match the original.
Can anybody give me some examples showing how the conversion matches the original, and in which cases it doesn't?
With 15 significant digits, from string to `double` and back...
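A minimal sketch of that round trip, using iostreams with `std::setprecision(15)` to request 15 significant digits:

```cpp
// Round trip: 15-significant-digit string -> double -> string.
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string original = "3.14159265358979";   // 15 significant digits

    double d;
    std::istringstream(original) >> d;            // string -> double

    std::ostringstream oss;
    oss << std::setprecision(15) << d;            // double -> string, 15 digits
    std::string recovered = oss.str();

    std::cout << original << " -> " << recovered
              << (original == recovered ? "  (match)" : "  (MISMATCH)") << '\n';
}
```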
Note: for some 15-significant-digit initial input strings, the above code may output a different final string. Here's a program that attempts to find a 15-digit string input whose value isn't preserved by conversion to and from a `double`; all values pass for GCC on coliru.stackedcrooked.com, but that doesn't mean it wouldn't fail for some other values in a different range.
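A sketch of such a search (the scanned slice, values of the form 1.dddddddddddddd, is an illustrative assumption; other leading digits and exponents could be scanned the same way):

```cpp
// Scans 15-significant-digit strings "1.dddddddddddddd" and checks that
// each survives a string -> double -> string round trip unchanged.
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    char text[32], recovered[32];
    long long mismatches = 0;

    for (long long frac = 0; frac < 1000000; ++frac)   // one slice of the space
    {
        // Build a 15-significant-digit input: "1." + 14 fractional digits.
        std::snprintf(text, sizeof text, "1.%014lld", frac);

        double d = std::strtod(text, nullptr);          // string -> double
        // %.14f on a value in [1, 2) prints exactly 15 significant digits.
        std::snprintf(recovered, sizeof recovered, "%.14f", d);

        if (std::strcmp(text, recovered) != 0)
        {
            std::printf("MISMATCH: %s -> %s\n", text, recovered);
            ++mismatches;
        }
    }
    std::printf("%lld mismatch(es)\n", mismatches);
}
```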
With 17 significant digits, from `double` to string and back...
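A sketch of the reverse direction, printing with `%.17g` so the string carries enough digits to pin down the exact `double`:

```cpp
// Round trip: double -> 17-significant-digit string -> double.
#include <cstdio>
#include <cstdlib>

int main()
{
    double original = 0.1 + 0.2;   // has no exact decimal representation

    char text[32];
    std::snprintf(text, sizeof text, "%.17g", original);  // double -> string

    double recovered = std::strtod(text, nullptr);         // string -> double

    std::printf("%s -> %s\n", text,
                recovered == original ? "recovered exactly" : "MISMATCH");
}
```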
This should never fail to recover an identical `double` value from the textual representation.