Can someone please explain why:
double d = 1.0e+300;
printf("%d\n", d == 1.0e+300);
prints "1" as expected on a 64-bit machine, but "0" on a 32-bit machine? (I observed this with GCC 6.3 on Fedora 25.)
To the best of my knowledge, floating-point literals have type double,
so no type conversion should be happening.
Update: This only occurs when compiling with the -std=c99
flag.
The C standard allows floating-point constants to be silently evaluated at
long double
precision in some expressions (notice: precision, not type). The corresponding macro is FLT_EVAL_METHOD,
defined in <float.h>
since C99. Per C11 (N1570), §5.2.4.2.2, the semantics of value
2
are to "evaluate all operations and constants to the range and precision of the long double type".

From the technical viewpoint, on the x86 architecture (32-bit) GCC compiles the given code into x87 FPU instructions using 80-bit stack registers, while for the x86-64 architecture (64-bit) it prefers the SSE unit (scalars within XMM registers).
The current implementation was introduced in GCC 4.5 along with the
-fexcess-precision=standard
option. From the GCC 4.5 release notes:

GCC now supports handling floating-point excess precision arising from use of the x87 floating-point unit in a way that conforms to ISO C99. This is enabled with -fexcess-precision=standard and with standards conformance options such as -std=c99, and may be disabled using -fexcess-precision=fast.