I have been trying to calibrate my camera recently. To check the accuracy of the calibration, I was told that I should look at the reprojection error. The problem is that I could not find out exactly how the reprojection error is supposed to be calculated.
For this reason, I first reprojected the points with the camera parameters I had found and printed them side by side with the initial twodpoints that were detected in the calibration pattern. Since I did not know the formula, I only wanted to see the differences in pixels visually, and I saw differences of 1-2 pixels between each pair of corresponding entries in the two matrices.

Then I found the reprojection error calculation in the OpenCV documentation and implemented it in my code. Here comes the strange part: the reprojection error turned out to be 0.1676704. The L2 norm is used to calculate the reprojection error in the code below. If my reprojected points are within 1-2 pixels of the ground truth values, how can my reprojection error be around 0.16? I think the error is divided once more than it should be. In the line that computes error, the norm of the difference between twodpoints[i] and imgpoints2 is divided by the number of points in that view; I took that to be something like a root-mean-square approach (see the sketch at the end of this question), so the division would just be taking the mean. However, after these per-view errors are summed into mean_error, the sum is divided again, this time by len(threedpoints), and I do not understand why. Does this code correctly calculate the reprojection error?
mean_error = 0
for i in range(len(threedpoints)):
    # project the object points back onto the image using the calibration results
    imgpoints2, _ = cv2.projectPoints(threedpoints[i], r_vecs[i], t_vecs[i], mtx, dist)
    # L2 norm of the difference, divided by the number of points in this view
    error = cv2.norm(twodpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    mean_error += error
print("total error: {}".format(mean_error / len(threedpoints)))
If you'd like to check, the same code is provided in the OpenCV calibration tutorial: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html
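To illustrate what I mean by a root-mean-square approach, this is roughly the calculation I expected. It is just a sketch using the same variables as above (threedpoints, twodpoints, r_vecs, t_vecs, mtx, dist from cv2.calibrateCamera); it is not taken from the tutorial:

import numpy as np
import cv2

# Per-point RMS reprojection error across all views:
# sqrt of the mean squared pixel distance between detected and reprojected corners.
squared_dists = []
for i in range(len(threedpoints)):
    imgpoints2, _ = cv2.projectPoints(threedpoints[i], r_vecs[i], t_vecs[i], mtx, dist)
    diff = twodpoints[i].reshape(-1, 2) - imgpoints2.reshape(-1, 2)
    squared_dists.append(np.sum(diff**2, axis=1))  # squared distance for each corner
rms_error = np.sqrt(np.mean(np.concatenate(squared_dists)))
print("per-point RMS reprojection error: {}".format(rms_error))

My expectation was that a number computed like this would be directly comparable to the 1-2 pixel differences I saw, so I am not sure how it relates to the value printed by the tutorial code.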