Understanding OpenCV projectPoints: reference points and origins


I have been battling with projectPoints for days without understanding the function's basic frame of reference. I find it hard to work out the world coordinates and the reference frames of all the inputs.

Here is a small example.

    import numpy as np
    import cv2 as cv
    camera_matrix = np.array([
        [1062.39, 0., 943.93],
        [0., 1062.66, 560.88],
        [0., 0., 1.]
        ])

    points_3d = np.array([[[10., 0., 0.]]], np.float32)  # point 10 units along X
    rvec = np.zeros((3, 1), np.float32)
    tvec = np.zeros((3, 1), np.float32)
    dist_coeffs = np.zeros((5, 1), np.float32)

    points_2d, _ = cv.projectPoints(points_3d,
                                    rvec, tvec,
                                    camera_matrix,
                                    dist_coeffs)

    print(points_2d)

The camera has no rotation, so rvec = (0, 0, 0), and we take the camera position as the origin of our world, making tvec = (0, 0, 0). The object whose 3D point we want to project to 2D is then positioned 10 units in front of the camera.

Here is an illustration:

*(image: camera at the origin with the object 10 units in front of it)*

The output of the code is then (11567.83, 560.88), and not (0, 0) as I would expect.

A longer explanation: I am trying to project the locations of ships into my image. I have the GPS positions of both my camera and the ships. I transform the GPS coordinates into a plane by taking the distance along the X axis (X points east) and the distance along the Y axis (Y points north). Since the ships float at sea level, I take the 3D point to be projected as (X, Y, 0).

For the extrinsic parameters of the camera, I again assume the camera is the world reference, and tvec only accounts for the height of the camera above sea level (tvec = (0, 0, 4)). For the rotation, I have an absolute IMU, so I can compute rvec about my X axis (for simplicity, the camera is parallel to the XY plane).

I have done a camera calibration and obtained my camera matrix and distortion coefficients. I am not sure how to test the camera matrix, but I can see that undistorting my images with the distortion coefficients makes curved lines straight and removes the distortion.

Here is the code for my problem, with some examples.

    import numpy as np
    import cv2 as cv
    from scipy.spatial.transform import Rotation as R
    from haversine import haversine, Unit

    def distance_between_coordinates(c1, c2):
        # Coordinates are (latitude, longitude) pairs.
        x = haversine(c1, (c1[0], c2[1]), Unit.METERS)  # east-west distance
        if c1[1] > c2[1]:  # target is west of the camera
            x = -x
        y = haversine(c1, (c2[0], c1[1]), Unit.METERS)  # north-south distance
        if c1[0] > c2[0]:  # target is south of the camera (compare latitudes here, not longitudes)
            y = -y
        dist = haversine(c1, c2, Unit.METERS)
        return dist, x, y


    def rotvec_from_euler(orientation):
        r = R.from_euler('xyz', list(orientation), degrees=True)
        return r.as_rotvec()


    if __name__ == '__main__':
        camera_matrix = np.array([
            [1062.39, 0., 943.93],
            [0., 1062.66, 560.88],
            [0., 0., 1.]
            ])

        dist_coeffs = np.array([-0.33520254,  0.14872426,  0.00057997, -0.00053154, -0.03600385])

        camera_p = (37.4543785, 126.59113666666666)
        ship_p = (37.448312, 126.5781)

        # Other ships near the previous one.
        # ship_p = (37.450693, 126.577617)
        # ship_p = (37.4509, 126.58565)
        # ship_p = (37.448635, 126.578202)
        camera_orientation = (206.6925, 0, 0)  # Euler orientation in degrees.

        rvec = rotvec_from_euler(camera_orientation)
        tvec = np.zeros((3, 1), np.float32)

        _, x, y = distance_between_coordinates(camera_p, ship_p)
        points_3d = np.array([[[x, y, 0]]], np.float32)

        points_2d, _ = cv.projectPoints(points_3d,
                                        rvec, tvec,
                                        camera_matrix,
                                        dist_coeffs)
        print(points_2d)

I have a few more coordinates of nearby ships in the same direction, which should project close to the center of the image. If you try the other points, the prediction from projectPoints changes drastically.

For clarity, I added an illustration of the coordinate systems used in the second code block:

*(image: coordinate systems of the camera and the ships)*

There is 1 answer.

Accepted answer, by fana:

First, you should check the definition of the camera coordinate system in OpenCV.

Without rotation, the direction of X+ is to the right (as seen from the camera), not forward; Z+ points forward, out of the camera. Therefore, trying to project the point (10, 0, 0) makes no sense.

And if you use the correct input, (0, 0, 10), the output will be (943.93, 560.88), not (0, 0). That is the principal point (cx, cy) from your camera matrix, which is where a point on the optical axis lands in pixel coordinates.