I'm developing a time-critical algorithm in Java and therefore am not using BigDecimal. To handle the rounding errors, I set an upper error bound instead, below which different floating-point numbers are considered to be exactly the same. Now the problem is: what should that bound be? Or in other words, what's the biggest possible rounding error that can occur when performing computational operations with floating-point numbers (addition, subtraction, multiplication and division)?
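For concreteness, this is roughly the comparison I have in mind (the class and method names are just illustrative, and the constant is a placeholder for whatever bound turns out to be appropriate):

public class ApproxEquals {
    // Placeholder value; this is exactly the bound I'm trying to determine.
    static final double EPSILON = 1e-11;

    // Two doubles are treated as equal when they differ by less than EPSILON.
    // This is an absolute tolerance, which is fine for values in [0, 1) such as
    // those produced by Random.nextDouble(), but it does not scale to values of
    // very different magnitudes.
    static boolean nearlyEqual(double x, double y) {
        return Math.abs(x - y) < EPSILON;
    }

    public static void main(String[] args) {
        System.out.println(nearlyEqual(0.1 + 0.2, 0.3)); // true: differs by ~5.6e-17
    }
}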
With an experiment I've done, it seems that a bound of 1e-11 is enough.
PS: This problem is language independent.
EDIT: I'm using the double data type. The numbers are generated with Random's nextDouble() method.
EDIT 2: It seems I need to calculate the error based on how the floating-point numbers I'm using are generated. The nextDouble() method looks like this:
public double nextDouble() {
    return (((long)(next(26)) << 27) + next(27))
        / (double)(1L << 53);
}
Based on the constants in this method, I should be able to calculate the biggest possible error that can occur for floating-point numbers generated with this method specifically (its machine epsilon?). I would be glad if someone could post the calculation.
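For what it's worth, here is a small sketch (the class name is mine) of the granularity these constants imply; it only shows the spacing of the generated values, not the error introduced by later operations:

public class NextDoubleGranularity {
    public static void main(String[] args) {
        // nextDouble() divides a 53-bit random integer by 2^53, so its outputs
        // are exact multiples of this step and need no rounding on generation.
        double step = 1.0 / (double) (1L << 53);
        // Machine epsilon for double: the gap between 1.0 and the next double.
        double machineEps = Math.ulp(1.0);

        System.out.println("nextDouble() step: " + step);       // ~1.11e-16 (2^-53)
        System.out.println("machine epsilon:   " + machineEps); // ~2.22e-16 (2^-52)
    }
}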
This depends on your algorithm: which operations you perform, and on numbers of what relative magnitudes.
For example, consider the function
f(a, b, c, d) = a * (b - (c + d))
No big deal, or is it? It turns out it is when d << c, b = c, and a is arbitrary, but let's just say it's big.
Let's say d is many orders of magnitude smaller than c, b is equal to c, and a is large. The values are totally made up, but you get the point: the difference in magnitude between c and d means that d is partly or entirely lost when c + d is rounded, so b - (c + d) cancels almost to zero, and the absolute error that remains is then blown up by the large factor a.
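A short Java sketch (the specific values are mine, chosen in the spirit of the above) that shows the effect:

public class CancellationDemo {
    public static void main(String[] args) {
        double a = 1e10;  // "whatever, but big"
        double b = 1.0;
        double c = 1.0;   // b = c
        double d = 1e-17; // d << c: smaller than half an ulp of 1.0

        // c + d rounds back to exactly 1.0, so the subtraction yields 0.0.
        double computed = a * (b - (c + d));
        // Mathematically, a * (b - (c + d)) = -a * d.
        double exact = -a * d;

        System.out.println("computed: " + computed);                   // 0.0
        System.out.println("exact:    " + exact);                      // ~ -1e-7
        System.out.println("error:    " + Math.abs(computed - exact)); // ~1e-7, far above 1e-11
    }
}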
Long story short, some operations (notably subtraction) can kill you. Also, it is not so much the generating function that you need to look at; it is the operations your algorithm performs on the numbers.