I am trying to compare floating-point numbers on Android, TensorFlow, and PyTorch. What I have observed is that I get the same result for TensorFlow and Android but a different one on PyTorch, as if Android and TensorFlow were performing a round-down operation. Please see the following results:
TensorFlow
import numpy as np
import tensorflow as tf
a = tf.convert_to_tensor(np.array([0.9764764, 0.79078835, 0.93181187]), dtype=tf.float32)
session = tf.Session()
result = session.run(a * a * a * a)
print(result)
PyTorch
import numpy as np
import torch as th
th.set_printoptions(precision=8)
a = th.from_numpy(np.array([0.9764764, 0.79078835, 0.93181187])).type(th.FloatTensor)
result = a * a * a * a
print(result)
Android
val a = floatArrayOf(0.9764764f, 0.79078835f, 0.93181187f)
val result = mutableListOf<Float>()
for (index in 0 until a.size) {
    val res = a[index] * a[index] * a[index] * a[index]
    result.add(res)
}
print("r=$result")
The results are as follows:
Android: [0.9091739, 0.3910579, 0.7538986]
TensorFlow: [0.9091739, 0.3910579, 0.7538986]
PyTorch: [0.90917391, 0.39105791, 0.75389862]
You can see that the PyTorch value is different. I know that this effect is minimal in this example, but when we are training and running for 1000 rounds with different batches and epochs, this difference can accumulate and produce undesirable results. Can anyone point out how we can get the same numbers on all three platforms?
Thanks.
You are not using the same level of precision when printing, which is why you get different results. Internally, those results are identical; the discrepancy is just an artifact of NumPy's default of printing only 7 digits after the decimal point.
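One quick way to convince yourself of this is to compare the raw float32 bit patterns instead of the printed strings. A minimal sketch, assuming the same input array as in the question:
import numpy as np
import torch as th
a = np.array([0.9764764, 0.79078835, 0.93181187], dtype=np.float32)
np_result = a * a * a * a
t = th.from_numpy(a)
th_result = (t * t * t * t).numpy()
# Reinterpret the float32 results as uint32 so the comparison is exact
# at the bit level, independent of any print formatting.
print(np_result.view(np.uint32) == th_result.view(np.uint32))  # expected: [ True  True  True]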
If we set the same level of precision in numpy as the one you set in PyTorch, we get:
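import numpy as np
np.set_printoptions(precision=8)
a = np.array([0.9764764, 0.79078835, 0.93181187], dtype=np.float32)
print(a * a * a * a)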
Results in:
[0.90917391 0.39105791 0.75389862]
Exactly the same as in PyTorch.