I am trying to find the 3D world coordinates of an object whose coordinates in the image I know. After some research on the internet I succeeded in finding X and Y, but I can't find Z.
This is my program in OpenCV:
void drawAndCalcul3D(int x, int y, Mat& frame)
{
    // homogeneous image point (u, v, 1), picked with the mouse callback
    cv::Mat uvPoint = cv::Mat::ones(3, 1, cv::DataType<double>::type);
    uvPoint.at<double>(0,0) = x;
    uvPoint.at<double>(1,0) = y;

    // intrinsic parameters (my camera matrix)
    Mat cameraMatrix(3, 3, DataType<double>::type);
    setIdentity(cameraMatrix);
    cameraMatrix.at<double>(0,0) = 493.415;
    cameraMatrix.at<double>(0,1) = 0;
    cameraMatrix.at<double>(0,2) = 319.5;
    cameraMatrix.at<double>(1,0) = 0;
    cameraMatrix.at<double>(1,1) = 493.415;
    cameraMatrix.at<double>(1,2) = 179.5;
    cameraMatrix.at<double>(2,0) = 0;
    cameraMatrix.at<double>(2,1) = 0;
    cameraMatrix.at<double>(2,2) = 1;

    // extrinsic parameters: rotation vector rvec and translation vector tvec
    Mat rvec(3, 1, cv::DataType<double>::type);
    rvec.at<double>(0) = -0.1408;
    rvec.at<double>(1) = 3.011;
    rvec.at<double>(2) = -0.171147;

    cv::Mat tvec(3, 1, cv::DataType<double>::type);
    tvec.at<double>(0) = 15.84;
    tvec.at<double>(1) = -64.67;
    tvec.at<double>(2) = 274.584;

    cv::Mat rotationMatrix(3, 3, cv::DataType<double>::type);
    cv::Rodrigues(rvec, rotationMatrix);

    // scale factor s, assuming the object lies on the plane Z = Zconst;
    // in general s = (Zconst + tempMat2(2,0)) / tempMat(2,0); here Zconst = 0
    cv::Mat tempMat, tempMat2;
    tempMat  = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
    tempMat2 = rotationMatrix.inv() * tvec;
    double s = tempMat2.at<double>(2,0) / tempMat.at<double>(2,0);

    // back-project: world point = R^-1 * (s * K^-1 * uvPoint - tvec)
    Mat exp = rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec);
    cv::putText(frame,
                intToString(exp.at<double>(0)) + " , " +
                intToString(exp.at<double>(1)) + " , " +
                intToString(exp.at<double>(2)),
                cv::Point(x, y + 20), 2, 1, Scalar(0, 255, 0));
}
Does anyone know how I can find the depth (Z) of the object?
I use the same approach as here: Computing x,y coordinate (3D) from image point. I don't know how to change it to find Z when it is not constant.
An x, y image point only determines a ray from the camera center through that image point: there is an infinite number of possible Z values, and when you multiply the image point by the inverse matrices you get the equation of a ray (a line), not a point. It is impossible to get 3D from a single image point using a single camera, unless of course you make some strong assumptions such as known object shape or size, or use extra knowledge such as monocular cues. Examples of such cues are blur/focus, brightness/shading, texture size, the familiar size of other objects nearby, etc.
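To make the ray explicit, here is a minimal sketch in plain C++ (not part of the code above). It only inverts the intrinsics; for simplicity it assumes R = I and t = 0 (camera frame = world frame), and `pixelToRay` is a name I made up. With real extrinsics you would rotate the direction by R^-1 and use C = -R^-1 * t as the origin:

```cpp
#include <cassert>

// A ray in 3D: every point on it is origin + s * dir for some s > 0.
struct Ray { double origin[3]; double dir[3]; };

// Back-project pixel (u, v) through intrinsics fx, fy, cx, cy.
// The direction is K^-1 * (u, v, 1), defined only up to scale --
// this is exactly why depth cannot be recovered from one pixel alone.
Ray pixelToRay(double u, double v,
               double fx, double fy, double cx, double cy)
{
    Ray r;
    r.origin[0] = r.origin[1] = r.origin[2] = 0.0; // camera center
    r.dir[0] = (u - cx) / fx;
    r.dir[1] = (v - cy) / fy;
    r.dir[2] = 1.0;
    return r;
}
```

Any choice of s along `r.dir` projects back to the same pixel, which is the ambiguity described above.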
Another example: if you know the equation of the plane on which the object lies, you can intersect this plane with the ray and arrive at a unique 3D point. Still another example: say you use the accelerometer in a cell phone, which gives you the angle at which the camera looks relative to the gravity vector. You also know the approximate height at which you hold the phone (about 1.5 m above the ground). Then you can easily calculate 3D points on the ground as the intersection of your rays with the known ground plane. For the point in the center of the image, the depth along the ray is simply h / cos(θ), where h is the camera height and θ is the angle between the optical axis and the gravity vector. For other points you have to add the extra angle formed by the off-center pixels. This allows you to completely restore the 3D of the ground plane.
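The plane-intersection idea can be sketched in a few lines of plain C++ (a standalone illustration, not the asker's code; the `Ray` struct and `intersectRayPlane` name are mine). Given a plane n . X = d and a ray C + s * dir, solving for s gives s = (d - n . C) / (n . dir):

```cpp
#include <cassert>
#include <cmath>

// A ray in 3D: every point on it is origin + s * dir for some s.
struct Ray { double origin[3]; double dir[3]; };

// Intersect the ray with the plane n . X = d; on success X holds
// the unique 3D point and the function returns true.
bool intersectRayPlane(const Ray& r, const double n[3], double d,
                       double X[3])
{
    double ndotdir = n[0]*r.dir[0] + n[1]*r.dir[1] + n[2]*r.dir[2];
    if (std::fabs(ndotdir) < 1e-12)
        return false; // ray parallel to the plane, no intersection
    double ndotorig = n[0]*r.origin[0] + n[1]*r.origin[1] + n[2]*r.origin[2];
    double s = (d - ndotorig) / ndotdir;
    for (int i = 0; i < 3; ++i)
        X[i] = r.origin[i] + s * r.dir[i];
    return true;
}
```

With the ground as the plane (for example n = (0, 0, 1), d = 0 in world coordinates) this resolves the single-camera ambiguity for points known to lie on it.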