I'm having trouble understanding C's rules for what precision to assume when printing doubles, or when converting strings to doubles. The following program should illustrate my point:
#include <errno.h>
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv) {
    double x, y;
    const char *s = "1e-310";

    /* Should print zero */
    x = DBL_MIN/100.;
    printf("DBL_MIN = %e, x = %e\n", DBL_MIN, x);

    /* Trying to read in a floating point number smaller than DBL_MIN gives an error */
    y = strtod(s, NULL);
    if (errno != 0)
        printf(" Error converting '%s': %s\n", s, strerror(errno));
    printf("y = %e\n", y);

    return 0;
}
The output I get when I compile and run this program (on a Core 2 Duo with gcc 4.5.2) is:
DBL_MIN = 2.225074e-308, x = 2.225074e-310
Error converting '1e-310': Numerical result out of range
y = 1.000000e-310
My questions are:
- Why is x printed as a nonzero number? I know compilers sometimes promote doubles to higher precision types for the purposes of computation, but shouldn't printf treat x as a 64-bit double?
- If the C library is secretly using extended precision floating point numbers, why does strtod set errno when trying to convert these small numbers? And why does it produce the correct result anyway?
- Is this behavior just a bug, or a result of my particular hardware and development environment? (Unfortunately I'm not able to test on other platforms at the moment.)
Thanks for any help you can give. I will try to clarify the issue as I get feedback.
As for why x is printed as a nonzero number: because of the existence of denormal (subnormal) numbers in the IEEE-754 standard. DBL_MIN is only the smallest normalised value; subnormal doubles extend the representable range below it, down to roughly 4.94e-324, at the cost of precision. DBL_MIN/100 is therefore still representable in an ordinary 64-bit double, and printf simply prints that value; no extended precision is involved.
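A minimal check, assuming a C99 <math.h> (fpclassify, nextafter; link with -lm), that shows the value is classified as subnormal rather than zero:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = DBL_MIN / 100.;   /* below the normalised range */

    /* fpclassify reports the IEEE-754 category: x is subnormal, not zero */
    printf("x = %e, subnormal? %s\n",
           x, fpclassify(x) == FP_SUBNORMAL ? "yes" : "no");

    /* smallest positive subnormal double */
    printf("smallest subnormal = %e\n", nextafter(0., 1.));
    return 0;
}

On an IEEE-754 machine the second line should show roughly 4.940656e-324, the smallest positive subnormal double.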
Returning the "correct" value (i.e. 1e-310) obeys the above constraint.
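Since strtod reports range problems only through errno, the usual pattern is to clear errno before the call and inspect it, together with the end pointer, afterwards. A sketch (the variable names are my own):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *s = "1e-310";
    char *end;
    double y;

    errno = 0;                  /* clear any stale value before the call */
    y = strtod(s, &end);

    if (end == s)
        printf("'%s' is not a number\n", s);
    else if (errno == ERANGE)
        printf("'%s' over/underflows; strtod returned %e\n", s, y);
    else
        printf("converted '%s' to %e\n", s, y);
    return 0;
}

Because setting ERANGE on underflow is implementation-defined, portable code should treat it as a hint rather than a hard failure.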
So not a bug. This is technically platform-dependent, because the C standard(s) place no requirements on the existence or behaviour of denormal numbers (AFAIK).
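If you want to check whether the platform you are actually running on keeps subnormals (some environments flush them to zero, e.g. when x86 FTZ/DAZ mode has been enabled by options such as gcc's -ffast-math), a crude runtime test is enough. This is just a sketch:

#include <float.h>
#include <stdio.h>

int main(void) {
    volatile double d = DBL_MIN;   /* volatile keeps the division at run time */
    d /= 2.;

    if (d > 0.)
        printf("gradual underflow works: DBL_MIN/2 = %e\n", d);
    else
        printf("subnormals are flushed to zero here\n");
    return 0;
}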