I want to estimate the pose of a robot (camera) from ArUco markers. With a single marker, the long-range angle estimate fluctuates heavily, so I use two markers, as shown in the figure (an image of a real scene with non-essential content erased). The markers are currently about 30 m from the camera.
Each marker is 0.6 m x 0.6 m. They are placed on the same plane (at least I believe they are coplanar). Taking the center of the left marker as the origin of the world frame, the world coordinates of every corner are known; the corner coordinates of the right marker were obtained by measurement.
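For reference, this is how I build the `world_points` array of shape (8, 3) (a minimal sketch; the axis convention of X right, Y up, Z out of the marker plane, and the right marker's center offset are illustrative placeholders, not my real measurements):

```python
import numpy as np

MARKER_SIZE = 0.6          # marker side length in meters
h = MARKER_SIZE / 2.0

def marker_corners(center):
    """Corners of an axis-aligned marker centered at `center`, in
    OpenCV's detection order: top-left, top-right, bottom-right,
    bottom-left, all lying in the Z = 0 plane."""
    cx, cy, cz = center
    return np.array([
        [cx - h, cy + h, cz],  # top-left
        [cx + h, cy + h, cz],  # top-right
        [cx + h, cy - h, cz],  # bottom-right
        [cx - h, cy - h, cz],  # bottom-left
    ], dtype=np.float64)

left_center = (0.0, 0.0, 0.0)   # left marker center defines the world origin
right_center = (2.0, 0.0, 0.0)  # hypothetical measured offset of the right marker
world_points = np.vstack([marker_corners(left_center),
                          marker_corners(right_center)])  # shape (8, 3)
```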
I use `cv2.aruco.ArucoDetector` to detect the corners of both markers (8 corners in total) and estimate the pose with the following call:

```python
# world_points.shape = (8, 3), pixel_points.shape = (8, 2)
_, rvec, tvec = cv2.solvePnP(world_points, pixel_points, K, distor,
                             flags=cv2.SOLVEPNP_SQPNP)
```
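For completeness, a sketch of the detection step that produces `pixel_points` (the dictionary `DICT_4X4_50` and the marker IDs are placeholders; the important part is keeping the pixel rows aligned with the `world_points` rows):

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

corners, ids, _ = detector.detectMarkers(gray)  # gray: the input image

# Stack the 4 corners of each marker in a fixed ID order so that
# pixel_points rows line up with world_points rows.
LEFT_ID, RIGHT_ID = 0, 1  # hypothetical marker IDs
id_to_corners = {int(i): c.reshape(4, 2) for i, c in zip(ids.ravel(), corners)}
pixel_points = np.vstack([id_to_corners[LEFT_ID],
                          id_to_corners[RIGHT_ID]]).astype(np.float64)
```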
The camera's trajectory should be an arc, similar to the picture below.
But the actually computed trajectory, shown in the next figure, looks more like a straight line with no arc.
Most likely the estimated `rvec` is inaccurate, because when I combine the yaw obtained from other sensors with the `tvec` computed here, the drawn camera trajectory is considerably more accurate.
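For clarity, this is the standard conversion I use to turn each `solvePnP` result into a world-frame camera position; note that the position already involves the rotation, so a bad `rvec` corrupts the plotted trajectory (the yaw formula depends on the axis convention):

```python
import cv2
import numpy as np

def camera_pose_in_world(rvec, tvec):
    """solvePnP returns the world->camera transform; invert it to get
    the camera center and a yaw angle in the world frame."""
    R, _ = cv2.Rodrigues(rvec)       # world -> camera rotation matrix
    R_cw = R.T                       # camera -> world rotation
    cam_pos = (-R_cw @ tvec).ravel() # camera center in world coordinates
    # Yaw of the camera's optical axis projected onto the world XZ plane
    # (convention-dependent; adjust for your world frame).
    yaw = np.degrees(np.arctan2(R_cw[0, 2], R_cw[2, 2]))
    return cam_pos, yaw
```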
- Is it possible for `cv2.solvePnP` to return a relatively accurate `tvec` but an inaccurate `rvec`?
- Why is the `rvec` obtained by `cv2.solvePnP` inaccurate in my scenario?
- Is there any way to solve this problem? I want to improve the accuracy of `rvec`.
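For example (an untested sketch on my side, assuming OpenCV >= 4.5 and that all world points truly lie in the Z = 0 plane), would switching to the planar-specific IPPE solver and polishing the result with Levenberg-Marquardt refinement be the right direction?

```python
import cv2

# All 8 corners are coplanar, so the planar IPPE solver is applicable
# (assumption: OpenCV >= 4.5 and Z = 0 for every world point).
ok, rvec, tvec = cv2.solvePnP(world_points, pixel_points, K, distor,
                              flags=cv2.SOLVEPNP_IPPE)

# Polish the pose with an iterative Levenberg-Marquardt refinement step.
rvec, tvec = cv2.solvePnPRefineLM(world_points, pixel_points, K, distor,
                                  rvec, tvec)
```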