Can anybody explain to me how the .precision field in printf works with the specifier %g? I'm quite confused by the following output:
double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.17g\n", value); //output: 3122.5500000000002
I've learned that %g uses the shortest representation. But the following outputs still confuse me:
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819
printf("%.17e\n", value); //output: 3.12255000000000018e+03
printf("%.17f\n", value); //output: 3122.55000000000018190
My question is: why does %.16g give the exact number while %.17g can't? It seems 16 significant digits are enough to be accurate. Could anyone tell me the reason?
%g uses the shortest representation.

Floating-point numbers usually aren't stored in base 10 but in base 2 (for performance, size, and practicality reasons). However, whatever the base of the representation, there will always be rational numbers that cannot be stored exactly within the fixed size of the variable that holds them.

When you specify %.16g, you're asking for the shortest representation of the number with at most 16 significant digits. If the shortest representation has more than 16 digits, printf first rounds the number to 16 significant digits, which here gives 3122.550000000000; %g then strips the trailing zeros, leaving 3122.55, the shortest form, which explains the result you obtained.
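You can make that rounding step visible by printing the same number of significant digits with %e, which keeps the trailing zeros (note that %.15e prints 16 significant digits: one before the decimal point and 15 after). A minimal sketch, assuming an IEEE 754 double; the exact digits may differ on other platforms:

#include <stdio.h>

int main(void)
{
    double value = 3122.55;

    printf("%.15e\n", value); // 3.122550000000000e+03  (16 significant digits, the extra digits are all 0)
    printf("%.16g\n", value); // 3122.55                 (same 16 digits, trailing zeros stripped)

    printf("%.16e\n", value); // 3.1225500000000002e+03 (17 significant digits, a nonzero extra digit appears)
    printf("%.17g\n", value); // 3122.5500000000002

    return 0;
}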
In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision (the trailing zeros), it will be.

To continue the example: when you use %.17g, the 17th significant digit is nonzero (it is 2 here), so you end up with the full number 3122.5500000000002.
It's actually %.17g that gives you the faithful result: 17 significant digits are always enough to identify a double uniquely (the printed string converts back to exactly the value in memory), while %.16g gives you only a rounded approximation with an error when compared to the value in memory.
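One way to check this is to print the value and parse the string back with strtod. A minimal sketch, assuming IEEE 754 binary64 doubles and C11's DBL_DECIMAL_DIG (which is 17) from <float.h>; the extra value 0.1 + 0.2 is included only because it is a case where 16 digits are not enough to round-trip:

#include <stdio.h>
#include <stdlib.h>
#include <float.h>

// Print v with the given number of significant digits, parse the string back,
// and report whether the original double is recovered exactly.
static void roundtrip(double v, int digits)
{
    char buf[64];
    snprintf(buf, sizeof buf, "%.*g", digits, v);
    printf("%2d digits: %-22s round-trips: %s\n",
           digits, buf, strtod(buf, NULL) == v ? "yes" : "no");
}

int main(void)
{
    double value = 3122.55;
    roundtrip(value, 16);              // "3122.55"            -> yes (happens to work for this value)
    roundtrip(value, DBL_DECIMAL_DIG); // "3122.5500000000002" -> yes (guaranteed with 17 digits)

    double sum = 0.1 + 0.2;
    roundtrip(sum, 16);                // "0.3"                 -> no
    roundtrip(sum, DBL_DECIMAL_DIG);   // "0.30000000000000004" -> yes

    return 0;
}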
If you want a fixed number of digits after the decimal point rather than a fixed number of significant digits, use %f or %F instead.
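For instance, reusing the value variable from above (the output again assumes an IEEE 754 double):

printf("%.2f\n", value); // 3122.55      (always exactly two digits after the decimal point)
printf("%.6f\n", value); // 3122.550000  (%f counts digits after the point, not significant digits)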