I am trying to calculate the time contained in buffer, with microsecond precision.
But I don't understand why the result of the floating-point operation in my code is not correct.
char buffer[] = "16:41:48.757996";
float time, sec;
int h, m;
sscanf(buffer, "%d:%d:%f", &h, &m, &sec);
printf("buffer %s\n", buffer);
printf("hour %d\n", h);
printf("minute %d\n", m);
printf("second %f\n", sec);
time = 3600*h + 60*m + sec;
printf("%f\n", time);
When I execute this code, I get the following result:
buffer 16:41:48.757996
hour 16
minute 41
second 48.757996
60108.757812
But I am expecting:
buffer 16:41:48.757996
hour 16
minute 41
second 48.757996
60108.757996
The result of the floating-point operation is actually as correct as the type allows.
Under IEEE 754 encoding, many decimal numbers are changed slightly so they can be stored, because the representation is binary, not decimal, and the number of significant digits available is limited.
Single precision (float) gives you 23 bits of significand (roughly 7 significant decimal digits), 8 bits of exponent, and 1 sign bit.
Double precision (double) gives you 52 bits of significand (roughly 15-16 significant decimal digits), 11 bits of exponent, and 1 sign bit.
Your expected value 60108.757996 has 11 significant decimal digits, more than a float can hold, so the trailing digits come out as 757812.
The following code snippet will work for you: