the precision of printf with specifier "%g"

Can anybody explain to me how the [.precision] in printf works with the "%g" specifier? I'm quite confused by the following output:

double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.17g\n", value); //output: 3122.5500000000002

I've learned that %g uses the shortest representation.

But the following outputs still confuse me:

printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819
printf("%.17e\n", value); //output: 3.12255000000000018e+03
printf("%.17f\n", value); //output: 3122.55000000000018190

My question is: why does %.16g give the exact number while %.17g can't?

It seems that 16 significant digits can be represented accurately. Could anyone tell me the reason?

There are 4 answers

user35443

%g uses the shortest representation.

Floating-point numbers usually aren't stored in base 10 but in base 2 (for performance, size, and practicality reasons). However, whatever the base of the representation, there will always be rational numbers that cannot be expressed exactly within the fixed size of the variable that stores them.
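
For instance (an illustration assuming IEEE 754 doubles, not part of the original answer), the decimal 0.1 has no exact binary representation either, and printing enough digits makes the stored error visible:

printf("%.17g\n", 0.1);    // output: 0.10000000000000001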

When you specify %.16g, you're asking for the shortest representation of the number, with a maximum of 16 significant digits.

If the exact representation needs more than 16 significant digits, printf rounds the value to 16 significant digits, which drops the 2 at the very end and leaves you with 3122.550000000000; the trailing zeros are then removed, giving 3122.55 as the shortest form, which explains the result you obtained.

In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be done.

To continue the example, when you use %.17g the 17th significant digit is different from 0 (a 2 in this case), so you end up with the full number 3122.5500000000002.

My question is: why does %.16g give the exact number while %.17g can't?

It's actually %.17g that gives you the result closest to the value stored in memory, while %.16g gives you only a rounded approximation with an error (when compared to the value in memory).
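
A small sketch of that round-trip property (assuming IEEE 754 doubles and a correctly rounded strtod; needs <stdio.h> and <stdlib.h>): printing with %.17g and parsing the string back recovers the stored value exactly.

char buf[64];
snprintf(buf, sizeof buf, "%.17g", value);   // buf now holds "3122.5500000000002"
double back = strtod(buf, NULL);             // parse the decimal string back to a double
printf("%d\n", back == value);               // output: 1 (the stored value is recovered)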

If you want a fixed number of decimal places instead, use %f or %F.
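
For instance (a small sketch, not from the original answer), with %f the precision counts digits after the decimal point, whereas with %g it counts significant digits:

printf("%.2f\n", value);   // output: 3122.55  (exactly 2 digits after the decimal point)
printf("%.2g\n", value);   // output: 3.1e+03  (2 significant digits)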

AudioBubble

The exact representation in memory of the decimal 3122.55 is a binary fraction with a 53-bit mantissa.

printf("%a\n", value);     // output 0x1.865199999999ap+11

And the exact conversion back to a decimal is:

printf("%.45f\n", value);  // output 3122.550000000000181898940354585647583007812500000

If you round that number to 17 significant digits you get:

printf("%.17g\n", value);  // output 3122.5500000000002

And at 16 digits, all the trailing digits are 0 and could safely be erased (which the g format does automatically by default) to get:

printf("%.16g\n", value);  // output 3122.55

That is why you get back the original decimal number.
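
Putting the pieces of this answer together, a complete program (its output assumes IEEE 754 doubles, as on typical platforms) would look like this:

#include <stdio.h>

int main(void)
{
    double value = 3122.55;

    printf("%a\n", value);     // 0x1.865199999999ap+11
    printf("%.45f\n", value);  // 3122.550000000000181898940354585647583007812500000
    printf("%.17g\n", value);  // 3122.5500000000002
    printf("%.16g\n", value);  // 3122.55
    return 0;
}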

Mark Amery

The decimal value 3122.55 can't be exactly represented in binary floating point. When you write

double value = 3122.55;

you end up with the closest possible value that can be exactly represented. As it happens, that value is exactly 3122.5500000000001818989403545856475830078125.

That value to 16 significant figures is 3122.550000000000. To 17 significant figures, it's 3122.5500000000002. And so those are the representations that %.16g and %.17g give you.

Note that the nearest double representation of a decimal number is guaranteed to be accurate to at least 15 decimal significant figures. That's why you need to print to 16 or 17 digits to start seeing these apparent inaccuracies in your output in this case - to any smaller number of significant figures, the double representation is guaranteed to match the original decimal number that you typed.
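
A sketch of that guarantee (assuming IEEE 754 doubles; DBL_DIG from <float.h> is the number of decimal digits guaranteed to survive a round trip through double, which is 15 here):

printf("%d\n", DBL_DIG);             // output: 15
printf("%.*g\n", DBL_DIG, value);    // output: 3122.55 (matches the decimal you typed)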

One final note: you say that

I've learned that %g uses the shortest representation.

While this is a popular summary of how %g behaves, it's also wrong. See What precisely does the %g printf specifier mean? where I discuss this at length, and show an example of %g using scientific notation even though it's 4 characters longer than not using scientific notation would've been.
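
For instance (an illustration of that point, not necessarily the example from the linked post), using the P/X rule quoted in the last answer below: with %.3g the precision P is 3 and the decimal exponent X is 3, so P > X fails and the e style is chosen even though it is 4 characters longer than plain 1234:

printf("%.3g\n", 1234.0);   // output: 1.23e+03 (8 characters, versus 4 for "1234")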

Clifford

The decimal representation 3122.55 cannot be exactly represented by binary floating point representation.

A double-precision binary floating-point value can correctly represent approximately 15 significant figures (note: significant figures, not decimal places) of a decimal value; beyond that, the digits may not be the same, and at the extremes they have no real meaning at all, being an artefact of the conversion from the floating-point representation to a string of decimal digits.

I've learned that %g uses the shortest representation.

The rule is:

Where P is the precision (or 6 if no precision is specified, or 1 if the precision is zero), and X is the decimal exponent required for e/E style notation, then:

  • if P > X ≥ −4, the conversion is with style f or F and precision P − 1 − X.
  • otherwise, the conversion is with style e or E and precision P − 1.
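
Applying this to value = 3122.55 with %.16g: P = 16 and X = 3 (the value is roughly 3.12255e+03), so P > X ≥ −4 holds and the f style is used with precision P − 1 − X = 12; trailing zeros are then removed (because the # flag is not given), leaving 3122.55. A sketch of that equivalence:

printf("%.16g\n", value);   // output: 3122.55
printf("%.12f\n", value);   // output: 3122.550000000000 (same digits, before the trailing zeros are stripped)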

This modification of the precision for %g is what produces the different output of:

printf("%.16g\n", value); //output: 3122.55
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819

despite having the same precision in the format specifier.