I am having trouble with the findHomography function in OpenCV; it is not transforming even my corner points correctly. Here is the code I have to calculate the homography:
std::vector<cv::Point> srcPts = board.getCornerPoints();
// Image size : 640x480
// [158, 110;
// 488, 114;
// 507, 296;
// 100, 284]
const std::vector<cv::Point2f> destPts {
cv::Point2f(-0.5, 0.5),
cv::Point2f(0.5, 0.5),
cv::Point2f(0.5, -0.5),
cv::Point2f(-0.5, -0.5),
};
cv::Mat H = cv::findHomography(srcPts, destPts);
The goal is to map the corner points of the selected object onto a plane between -0.5 and 0.5. However, when I try it on the upper-left corner with this code:
cv::Point pt = boardDetector.getCornerPoints()[0];
cv::Mat test = (cv::Mat_<double>(3, 1) << pt.x, pt.y, 1);
cv::Mat H_pt = H * test;
Instead of getting (-0.5, 0.5), I'm getting (-0.59675, 0.59675).
When I convert the corner points to camera coordinates by multiplying them with the camera matrix, the results are even worse.
I have tried specifying the method used by findHomography myself, but it didn't change anything.
Any ideas on why the result is inaccurate?
Thanks
After the multiplication, you still need to project your point back onto the w = 1 plane (the result vector being (x, y, w), with w the last, homogeneous coordinate). You do that by dividing the whole vector by w.
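For example, a minimal sketch of the manual divide, reusing the H, test and H_pt from the question (findHomography returns a CV_64F matrix, so the elements are read as double):

double w = H_pt.at<double>(2, 0);        // homogeneous coordinate of H * test
double x = H_pt.at<double>(0, 0) / w;    // projected x on the w = 1 plane
double y = H_pt.at<double>(1, 0) / w;    // projected y on the w = 1 plane
// (x, y) should now come out as (-0.5, 0.5) for the upper-left corner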
Or just use cv::perspectiveTransform(src, dst, mat), which does the division for you.
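A sketch of that route, assuming the srcPts and H from the question (perspectiveTransform expects floating-point points, so the integer cv::Point values are converted to cv::Point2f first):

std::vector<cv::Point2f> srcPtsF;
for (const cv::Point& p : srcPts)
    srcPtsF.emplace_back(static_cast<float>(p.x), static_cast<float>(p.y));

std::vector<cv::Point2f> dstPtsF;
cv::perspectiveTransform(srcPtsF, dstPtsF, H);   // applies H and divides by w internally
// dstPtsF[0] should be approximately (-0.5, 0.5)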