Estimate camera orientation from ground 3D points?


Given a set of 3D points, expressed in the camera frame, that lie on a planar surface (the ground), is there a fast and efficient way to find the orientation of that plane relative to the camera's image plane? Or is this only possible by running heavier "surface matching" algorithms on the point cloud?

I've tried estimateAffine3D and findHomography, but my main limitation is that I don't have the point coordinates on the surface plane itself: I can only select points from the depth image, so I must work with a set of 3D points in the camera frame.

I've written a simple geometric approach that takes a couple of points and computes vertical and horizontal angles from the depth measurements, but I fear this is neither very robust nor very precise.

EDIT: Following @Micka's suggestion, I've attempted to fit a plane to the points in the camera frame, with the following function:

#include <iostream>
#include <vector>

#include <opencv2/opencv.hpp>

//------------------------------------------------------------------------------
/// @brief      Fits a set of 3D points to a 2D plane, by solving a system of linear equations of type aX + bY + cZ + d = 0
///
/// @param[in]  points             The points
///
/// @return     3x1 Mat with plane equation coefficients [a, b, d] (c fixed to -1)
///
cv::Mat fitPlane(const std::vector< cv::Point3d >& points) {
    // plane equation: aX + bY + cZ + d = 0
    // assuming c=-1 ->  aX + bY + d = z

    cv::Mat xys = cv::Mat::ones(points.size(), 3, CV_64FC1);
    cv::Mat zs  = cv::Mat::ones(points.size(), 1, CV_64FC1);

    // populate left and right hand matrices
    for (int idx = 0; idx < static_cast< int >(points.size()); idx++) {
        xys.at< double >(idx, 0) = points[idx].x;
        xys.at< double >(idx, 1) = points[idx].y;
        zs.at< double >(idx, 0)  = points[idx].z;
    }

    // coeff mat
    cv::Mat coeff(3, 1, CV_64FC1);

    // problem is now xys * coeff = zs
    // solving using SVD should output coeff
    cv::SVD svd(xys);
    svd.backSubst(zs, coeff);

    // alternative approach -> requires mat with 3D coordinates & additional col
    // solves xyzs * coeff = 0
    // cv::SVD::solveZ(xyzs, coeff);  // @note: data type must be double (CV_64FC1)

    // sanity check: residual a*x + b*y + d - z should be zero or very small
    // for each input point
    double a = coeff.at< double >(0);
    double b = coeff.at< double >(1);
    double d = coeff.at< double >(2);
    for (auto& point : points) {
        std::cout << a * point.x + b * point.y + d - point.z << std::endl;
    }

    return coeff;

}

For simplicity, assume the camera is properly calibrated and the 3D reconstruction is correct; I have already validated both, so they are out of the scope of this question. I use the mouse to select points on a depth/color frame pair, reconstruct their 3D coordinates, and pass them to the function above.

Besides cv::SVD::solveZ(), I've also tried inverting xys with cv::invert() and solving with cv::solve(), but these always ended in either ridiculously small values or runtime errors about matrix size and/or type.

There are 0 answers