I am using the Eigen C++ library for linear algebra operations. There is a variable v in my code of type VectorXd, and I need to calculate its sum, so I call v.sum(). However, after updating my program to a new version, the sum() function gives a slightly different value, even though the value of v remains the same (it is read from the same input file).
Here is a piece of code that illustrates my problem:
double vsum1 = v.sum();                 // Eigen's built-in sum
double vsum2 = 0;                       // compare with a manually accumulated sum
for(size_t i = 0; i < v.size(); ++i)
{
    vsum2 += v(i);
}
cout << setprecision(18);               // needs <iomanip>; print enough digits to see the discrepancy
cout << "sum1: " << vsum1 << endl;
cout << "sum2: " << vsum2 << endl;
For the old version, the result is:
sum1: 94.8117866666666487
sum2: 94.8117866666666202
For the new version, the result is:
sum1: 94.8117866666666345
sum2: 94.8117866666666202
The manually calculated sum vsum2 remains unchanged, so I believe the original vector v didn't change. Why, then, does sum() give a different result? Is it because of some SIMD optimization performed by Eigen?
The difference is actually negligible, but it is enough to make my regression test fail.
5gon12eder's comment is right. Eigen 3.3 performs AVX vectorization if it is available (4 doubles at once), whereas Eigen 3.2 uses SSE only (2 doubles at once), so the additions are grouped differently and the round-off error accumulates differently. In any case, you must use some tolerance when comparing floating-point numbers to account for round-off errors. You can take inspiration from Eigen's unit tests.
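To see why the grouping matters, here is a minimal sketch (my own illustration, not Eigen's actual code) that sums the same data with 2 and with 4 partial accumulators, the way a 2-lane or 4-lane SIMD reduction would. The blocked_sum helper and the test data are made up for the example:

#include <iomanip>
#include <iostream>
#include <vector>

// Sum x using `lanes` partial accumulators, mimicking a SIMD reduction:
// lane j accumulates x[j], x[j+lanes], x[j+2*lanes], ... and the lanes are
// added together at the end. Different lane counts group the additions
// differently, so the last bits of the result may differ.
double blocked_sum(const std::vector<double>& x, std::size_t lanes)
{
    std::vector<double> acc(lanes, 0.0);
    for (std::size_t i = 0; i < x.size(); ++i)
        acc[i % lanes] += x[i];
    double total = 0.0;
    for (double a : acc)
        total += a;
    return total;
}

int main()
{
    std::vector<double> x(1000);
    for (std::size_t i = 0; i < x.size(); ++i)
        x[i] = 0.1 * (i % 7) + 1e-9 * i;                   // arbitrary test data

    std::cout << std::setprecision(18)
              << "2 lanes: " << blocked_sum(x, 2) << "\n"  // SSE-like grouping
              << "4 lanes: " << blocked_sum(x, 4) << "\n"; // AVX-like grouping
}

For the regression test, compare with a relative tolerance rather than exact equality. Something along these lines is usually enough (the helper name and the tolerance value here are just an example; use whatever your test framework provides):

#include <algorithm>
#include <cmath>

// True if a and b agree up to a relative tolerance.
bool approx_equal(double a, double b, double rel_tol = 1e-12)
{
    return std::abs(a - b) <= rel_tol * std::max(std::abs(a), std::abs(b));
}

For whole vectors or matrices you can also use Eigen's public isApprox() method, which performs a similar relative comparison.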