Summing these numbers gives a different result in .NET Core / C# than it does with other compilers.
3987908.698692091 + 92933945.11382028 + 208218.11919727124 + 61185833.06829034
.NET Core / C# : 158315904.99999997
Others: 158315905
Clearly the .NET Core / C# result deviates.
Here is the code in C#:
using System;
using System.Linq;

double[] no = {
    3987908.698692091,
    92933945.11382028,
    208218.11919727124,
    61185833.06829034
};
Console.WriteLine("{0}", no.Sum());
Here is the code in C++:
#include <iostream>
#include <iomanip>
#include <vector>
using namespace std;

// Naive left-to-right summation.
double sum(vector<double> &fa)
{
    double sum = 0.0;
    for (double f : fa)
        sum = sum + f;
    return sum;
}

int main()
{
    vector<double> no = {
        3987908.698692091,
        92933945.11382028,
        208218.11919727124,
        61185833.06829034
    };
    cout << setprecision(16);
    cout << "sum: " << sum(no) << " \n";
    return 0;
}
PS: Using decimal also gives the same outcome. I believe the Mono C# compiler might give the same result as the C++ one.
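For reference, a sketch of what the decimal variant might look like (the exact code is an assumption); decimal arithmetic is exact for these literals, and the exact sum is 158315904.99999998224, so it still does not print as 158315905:

using System;
using System.Linq;

class DecimalSum
{
    static void Main()
    {
        // Same values as above, as exact decimal literals (note the m suffix).
        decimal[] no = {
            3987908.698692091m,
            92933945.11382028m,
            208218.11919727124m,
            61185833.06829034m
        };

        // decimal represents these literals exactly; the exact sum is
        // 158315904.99999998224, i.e. still not 158315905.
        Console.WriteLine(no.Sum());
    }
}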
Is there a way to fix the deviation problem either with compiler options or somehow inside C#?
This only affects the formatted output; the underlying numbers have not changed.
The default formatting of doubles changed in .NET Core 3.0.
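As a quick check (a sketch, not from the original answer), comparing the raw bits of the sum confirms that the computed value is identical on both runtimes; only the default string conversion differs:

using System;
using System.Linq;

class BitCheck
{
    static void Main()
    {
        double[] no = {
            3987908.698692091,
            92933945.11382028,
            208218.11919727124,
            61185833.06829034
        };
        double sum = no.Sum();

        // Identical 64-bit pattern on .NET Framework 4.8 and .NET Core 3.0+.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(sum).ToString("X16"));

        // Only the default double-to-string conversion differs:
        // "158315905" on .NET 4.8, "158315904.99999997" on .NET Core 3.0+.
        Console.WriteLine(sum);
    }
}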
If you change the output format, the output will be the same for both .NET 4.8 and .NET Core 3.0 and later.
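For example (a sketch, assuming the no array from the question), forcing the pre-3.0 default of 15 significant digits makes both runtimes print 158315905:

// "G15" was the implicit default precision before .NET Core 3.0,
// so both .NET 4.8 and .NET Core 3.0+ print 158315905 with it.
Console.WriteLine("{0:G15}", no.Sum());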
The change to the formatting that you're seeing was made to implement IEEE 754-2008 correctly.
In particular, the requirement that converting a double to a string and back must recover the original value (round-trip behavior) is what mandated the change.
Obviously, the default conversion on .NET Framework does not satisfy this round-trip requirement: the value 158315904.99999997 is converted to the string "158315905", which, when converted back to a binary floating-point number, differs from the original.
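To make the round-trip argument concrete, here is a sketch (assuming the same input array) showing that the 15-digit string does not parse back to the original double, while the shortest round-trippable string produced by .NET Core 3.0 and later does:

using System;
using System.Globalization;
using System.Linq;

class RoundTripCheck
{
    static void Main()
    {
        double[] no = {
            3987908.698692091,
            92933945.11382028,
            208218.11919727124,
            61185833.06829034
        };
        double sum = no.Sum(); // 158315904.99999997...

        // Old default (15 significant digits): "158315905" does not parse back to the same double.
        string g15 = sum.ToString("G15", CultureInfo.InvariantCulture);
        Console.WriteLine(double.Parse(g15, CultureInfo.InvariantCulture) == sum);       // False

        // Default ToString on .NET Core 3.0+ is the shortest round-trippable string,
        // so parsing it back recovers the original value (True when run on Core 3.0+).
        string roundTrip = sum.ToString(CultureInfo.InvariantCulture);
        Console.WriteLine(double.Parse(roundTrip, CultureInfo.InvariantCulture) == sum);
    }
}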