In the XC16 compiler's DSP routines header (dsp.h) there are these lines:
/* Some constants. */
#ifndef PI /* [ */
#define PI 3.1415926535897931159979634685441851615905761718750 /* double */
#endif /* ] */
#ifndef SIN_PI_Q /* [ */
#define SIN_PI_Q 0.7071067811865474617150084668537601828575134277343750
/* sin(PI/4), (double) */
#endif /* ] */
But the value of PI (to the same number of decimal places) is actually:
3.1415926535897932384626433832795028841971693993751
The dsp.h-defined value starts to diverge at the 16th decimal place. For double-precision floating-point operations this is borderline significant; for Q15 representations it is not significant at all. The value of sin(PI/4) also diverges from the correct value at the 16th decimal place.
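To illustrate why the Q15 case cannot matter, here is a small host-side sketch (the variable names and the host test program are mine, not anything from Microchip's libraries) that rounds both the dsp.h value and a more accurate value of sin(PI/4) to Q15. Both land on the same 16-bit code, 0x5A82:
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    /* sin(PI/4) as defined in dsp.h, and a more accurate value */
    double dsp_val  = 0.7071067811865474617150084668537601828575134277343750;
    double true_val = 0.707106781186547524400844362104849;

    /* Round to Q15 (scale by 2^15); only about 4.5 decimal digits survive,
       so a discrepancy in the 16th decimal place is invisible here.      */
    int16_t q15_dsp  = (int16_t)lround(dsp_val  * 32768.0);
    int16_t q15_true = (int16_t)lround(true_val * 32768.0);

    printf("Q15 from dsp.h value: 0x%04X\n", (unsigned)(uint16_t)q15_dsp);  /* 0x5A82 */
    printf("Q15 from exact value: 0x%04X\n", (unsigned)(uint16_t)q15_true); /* 0x5A82 */
    return 0;
}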
Why is Microchip using the incorrect value? Is there some esoteric reason related to computing trig function values, or is this simply a mistake? Or maybe it does not matter?
It turns out that both:
3.1415926535897931159979634685441851615905761718750
and
3.1415926535897932384626433832795028841971693993751
when converted to double (64-bit IEEE 754 float) are represented by the same binary number:
0x400921FB54442D18
So it makes no difference in this case.
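You can check this yourself on any compiler that uses an IEEE 754 64-bit double (a quick sketch, not XC16-specific): both literals round to the same bit pattern and compare equal.
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* PI as defined in dsp.h */
    double pi_dsp = 3.1415926535897931159979634685441851615905761718750;
    /* PI correct to the same number of decimal places */
    double pi_ref = 3.1415926535897932384626433832795028841971693993751;

    uint64_t bits_dsp, bits_ref;
    memcpy(&bits_dsp, &pi_dsp, sizeof bits_dsp);
    memcpy(&bits_ref, &pi_ref, sizeof bits_ref);

    /* Both literals round to the same pattern: 0x400921FB54442D18 */
    printf("dsp.h  PI: 0x%016llX\n", (unsigned long long)bits_dsp);
    printf("actual PI: 0x%016llX\n", (unsigned long long)bits_ref);
    printf("equal: %s\n", pi_dsp == pi_ref ? "yes" : "no");
    return 0;
}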
As for why they use a different number: not all algorithms that generate PI produce it digit by digit. Some produce a series of numbers that merely converges to PI rather than fixing one digit at a time. A good example is fractional approximations of PI: 22/7, 179/57, 245/78, 355/113, etc. each get closer and closer to PI, but not digit by digit. Similarly, the polygon approximation method, popular because it is easy to compute in a program, generates successive values that get closer and closer to PI without settling the digits one at a time.
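For illustration, here is a minimal sketch of the inscribed-polygon idea (my own example, not anything Microchip ships): the estimate improves with every doubling of the side count, but the earlier digits keep shifting rather than being fixed one at a time.
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* The perimeter of a regular n-gon inscribed in a unit circle tends
       to 2*PI as n grows, so PI is approximated by n * side / 2.        */
    double side = 1.0;   /* side length of the inscribed hexagon (n = 6) */
    int n = 6;

    for (int i = 0; i < 10; i++) {
        printf("n = %5d  pi ~= %.15f\n", n, n * side / 2.0);
        /* Side-length recurrence when the number of sides is doubled.
           (Numerically delicate for very large n, but fine for a demo.) */
        side = sqrt(2.0 - sqrt(4.0 - side * side));
        n *= 2;
    }
    return 0;
}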