How does one calculate how small and how large a physical measurement can be such that the application will not incur over/underflow in representing the measurement as a double?
E.g. I take a few measurements of distance to a flat surface and I want to fit a plane to the data set. I want to figure out how close to and how far from the surface I can be when taking those measurements such that the results of the application are correct.
In my program, I'm reading the measurements into 3-tuples of double type, to represent points in R3. Desired precision is 2 or 3 decimal places.
Not sure where to start...
EDIT: I'm not trying to catch overflow; I'm trying to analyze for limits of the application.
A double precision floating point number has about 15 significant digits, and its magnitude can range from about 1e-308 to 1e308.
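For reference, these limits can be queried directly from the language; here is a minimal C++ sketch (using nothing beyond the standard library) that prints the relevant properties of double:

    #include <iostream>
    #include <limits>

    int main() {
        // Decimal digits guaranteed to survive a round trip: 15 for IEEE 754 doubles.
        std::cout << "significant digits: "
                  << std::numeric_limits<double>::digits10 << '\n';
        // Smallest positive normalized magnitude, about 2.2e-308.
        std::cout << "smallest normal:    "
                  << std::numeric_limits<double>::min() << '\n';
        // Largest finite magnitude, about 1.8e308.
        std::cout << "largest finite:     "
                  << std::numeric_limits<double>::max() << '\n';
        // Relative spacing at 1.0, about 2.2e-16; this bounds relative precision.
        std::cout << "machine epsilon:    "
                  << std::numeric_limits<double>::epsilon() << '\n';
        return 0;
    }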
Let's say that the distance from the base of your measurement to the surface is estimated to be about x units of length, and that the surface roughness, measurement error, and any other uncertainty amount to about a in the same units. (The choice of units is up to you.) It is natural here to suppose that x is much larger than a. At least the following restrictions need to be satisfied:
- x is smaller than about 1e308.
- a is larger than about 1e-308.
- a/x is larger than 1e-15.
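These three checks can be written down in a few lines of code. The sketch below is only illustrative: the function name double_is_enough is made up, and it uses machine epsilon (about 2.2e-16) in place of the rough 1e-15 figure above.

    #include <cmath>
    #include <iostream>
    #include <limits>

    // Rough feasibility check: x is the typical measured distance and a is the
    // smallest difference (uncertainty / required resolution) you need to keep,
    // both in the same units. Returns true if a plain double should suffice.
    bool double_is_enough(double x, double a) {
        const double max_mag = std::numeric_limits<double>::max();     // ~1.8e308
        const double min_mag = std::numeric_limits<double>::min();     // ~2.2e-308
        const double rel_eps = std::numeric_limits<double>::epsilon(); // ~2.2e-16

        return std::abs(x) < max_mag      // the distance itself does not overflow
            && std::abs(a) > min_mag      // the resolution is not lost to underflow
            && std::abs(a / x) > rel_eps; // the resolution exceeds relative precision
    }

    int main() {
        std::cout << std::boolalpha;
        // Earth-to-moon distance (~3e8 m) resolved to a hair's width (~1e-5 m):
        std::cout << double_is_enough(3e8, 1e-5) << '\n';  // true
        // Same distance resolved to one atomic layer (~1e-10 m):
        std::cout << double_is_enough(3e8, 1e-10) << '\n'; // false
    }

The two calls in main anticipate the examples discussed next.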
I think you could store raw measurements of the distance from the moon to the earth (3e8 m), resolved to a hair's width (1e-5 m), as double precision floating point numbers in units of meters: here a/x is about 3e-14, comfortably above 1e-15.

If you are measuring a smooth surface on the moon from the earth at the resolution of one atomic layer, however, the roughly 15 significant digits of a double precision floating point number may not be enough to faithfully represent the results. But if your measurement apparatus natively has more significant digits, you can store the average (or typical) distance d in one floating point number, and store the difference x - d of each measured distance from the average in a separate floating point number per measurement.
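Here is a sketch of that trick, under the assumption that the instrument reports each distance as an integer count of 1e-10 m steps (so a 64-bit integer holds the raw reading exactly); the names raw, d, and offsets are purely illustrative.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main() {
        // Hypothetical raw readings: earth-to-moon distance in units of 1e-10 m.
        // About 19 significant digits, more than a double can carry, but well
        // within the range of a 64-bit integer.
        const std::vector<std::int64_t> raw = {
            3844000000000000000LL, 3844000000000000123LL, 3843999999999999750LL};

        const std::int64_t d = raw[0]; // typical (reference) distance

        // Store only the small differences x - d as doubles; they are at most a
        // few hundred counts here, so the conversion loses nothing.
        std::vector<double> offsets;
        for (const std::int64_t x : raw)
            offsets.push_back(static_cast<double>(x - d));

        for (const double off : offsets)
            std::cout << off << '\n'; // prints 0, 123, -250
    }

The reference value d only needs to be stored once; later arithmetic such as the plane fit can then work on the offsets, where the full resolution survives, adding d back wherever an absolute distance is needed.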