# C float initialization result is unexpected


In Visual Studio 2019 Community, a `float` variable is not what I initialize it to. I made the value less precise in hopes of avoiding truncation. Is this not "controllable"?

I tried to force the initialization to round in other ways:

```c
#include <stdio.h>

int main(void)
{
    // where did my 6 go?
    float b = 80.123456F;
    printf("b:[%f]\n", b);

    // rounds up
    float c = 80.1234569F;
    printf("c:[%f]\n", c);

    // rounds up
    float d = 80.1234561F;
    printf("d:[%f]\n", d);

    // appends a 1
    float e = 80.12345F;
    printf("e:[%f]\n", e);

    // prints what's initialized
    float f = 80.123451F;
    printf("f:[%f]\n", f);

    // prints what's initialized
    float g = 80.123459F;
    printf("g:[%f]\n", g);
}
```
```
b:[80.123459]
c:[80.123459]
d:[80.123459]
e:[80.123451]
f:[80.123451]
g:[80.123459]
```

I had started with the debugger and wrote this program to meet Stack Overflow requirements. In my case, the debugger matched the printed values.

## 3 Answers

Floats are only accurate to at most about 7 significant digits, not 7 decimal places. 80.123456F has 8 significant digits, so that isn't going to work.

While 80.12345 has only 7 significant digits, it is actually stored as roughly 80.1234512329101562. As long as you don't print more digits than the type holds, that's fine. Unfortunately, `%f` always prints 6 digits after the decimal point, which in this case is one more than makes sense. You'd do slightly better with `%g`, where you can specify the number of significant digits instead.

Two issues:

• `printf` with the `%f` conversion specifier does its own rounding of floating-point values for display. You can control how many digits after the decimal point are displayed by using a precision specifier: `printf( "%.8f\n", b );` will print 8 digits after the decimal point, and `printf( "%.16f\n", b );` will print 16. Don't trust that displayed value, though, because...

• An infinite number of real values cannot fit into a finite number of bits, so most real values cannot be represented exactly - you get an approximation that's only good out to so many decimal digits. Single-precision `float` doesn't guarantee much more than 6 to 7 decimal digits of precision (that's not just the number of digits after the decimal point, that's the total number of digits). `double` gives you roughly 15 to 16. If you need to represent values with more digits of precision than that, you need to go past the native floating-point types and use a multiple-precision library.

The solution to this problem may be to change data types. Even though `double` works the same way as `float`, approximating a large range of real numbers by a finite set of rationals, adjacent `double` values are so close together that the rounding error is much less likely to matter.

For example, the `float` values closest to the real number 80.123456 are 80.12345123291015625 and 80.1234588623046875. The `double` values closest to 80.123456 are 80.123456000000004451067070476710796356201171875 and 80.1234559999999902402123552747070789337158203125.

`float` does have its uses, in situations where you are dealing with very large sets of low-precision numbers, but `double` is the more generally useful type. When reading a physical quantity into a program as a `double`, the measurement error will typically exceed the rounding error.