I am doing a project in OpenCV to detect obstacles in the path of a blind person using stereo calibration. I have computed the disparity map correctly. Now, to find the distance of an obstacle from the camera, I want its 3D coordinates [X, Y, Z], which I am guessing can be found with reprojectImageTo3D(), but I don't have the Q matrix to pass to this function: the Q matrix I get from stereoRectify() is null, probably because I used pre-calibrated images. I do, however, have the intrinsic and extrinsic parameters of my camera. So my question is: how can I manually create the Q matrix to use directly in reprojectImageTo3D(), given that I know the focal length, baseline and everything else about my camera? What is the basic format of the Q matrix?
Q matrix for the reprojectImageTo3D function in opencv
Asked by g.alisha12
There are 2 answers
Javier Abellán Ferrer
If you want to create the Q matrix directly:
cv::Mat Q = cv::Mat::zeros(4, 4, CV_64F); // Q must be a 4x4 matrix of doubles
Q.at<double>(0,0) = 1.0;
Q.at<double>(0,3) = -160.0;     // -cx
Q.at<double>(1,1) = 1.0;
Q.at<double>(1,3) = -120.0;     // -cy
Q.at<double>(2,3) = 348.087;    // focal length in pixels
Q.at<double>(3,2) = 1.0/95.0;   // 1/baseline
Q.at<double>(3,3) = 0.0;        // (cx - cx')/Tx, zero when both principal points coincide
But you should calibrate both cameras and then get the Q matrix from cv::stereoRectify(). Be careful to read the Q matrix as double values.
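For completeness, here is a rough sketch of how such a hand-built Q matrix might be passed to reprojectImageTo3D() to get per-pixel [X, Y, Z] coordinates (OpenCV 3+/4 API; the file names and the pixel index are placeholders, and the Q values are just the example numbers from above, so substitute your own calibration parameters):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // Rectified left/right images (placeholder file names).
    cv::Mat left  = cv::imread("left_rectified.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right_rectified.png", cv::IMREAD_GRAYSCALE);

    // Disparity from SGBM; its output is fixed-point, scaled by 16.
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);
    cv::Mat disparity16, disparity32;
    sgbm->compute(left, right, disparity16);
    disparity16.convertTo(disparity32, CV_32F, 1.0 / 16.0);

    // Hand-built Q matrix (example values; use your own cx, cy, f, baseline).
    cv::Mat Q = cv::Mat::zeros(4, 4, CV_64F);
    Q.at<double>(0,0) = 1.0;
    Q.at<double>(0,3) = -160.0;       // -cx
    Q.at<double>(1,1) = 1.0;
    Q.at<double>(1,3) = -120.0;       // -cy
    Q.at<double>(2,3) = 348.087;      // focal length in pixels
    Q.at<double>(3,2) = 1.0 / 95.0;   // 1/baseline

    // Every pixel gets an [X, Y, Z] coordinate in the same units as the baseline.
    cv::Mat xyz;
    cv::reprojectImageTo3D(disparity32, xyz, Q, /*handleMissingValues=*/true);

    // Distance of whatever lies at pixel (row=240, col=320) from the left camera.
    cv::Vec3f p = xyz.at<cv::Vec3f>(240, 320);
    std::printf("X=%.1f Y=%.1f Z=%.1f distance=%.1f\n",
                p[0], p[1], p[2], cv::norm(p));
    return 0;
}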
The form of the Q matrix is given as follows:

Q = [ 1   0     0          -cx
      0   1     0          -cy
      0   0     0            f
      0   0  -1/Tx  (cx - c'x)/Tx ]

In that matrix, cx and cy are the coordinates of the principal point in the left camera (if you did stereo matching with the left camera dominant), c'x is the x-coordinate of the principal point in the right camera (cx and c'x will be the same if you specified the CV_CALIB_ZERO_DISPARITY flag for stereoRectify()), f is the focal length and Tx is the baseline length (possibly the negative of the baseline length; it's the translation from one optical centre to the other, I think).

I would suggest having a look at the book Learning OpenCV for more information. It's still based on the older C interface, but it does a good job of explaining the underlying theory, and it is where I sourced the form of the Q matrix from.
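From that form of Q it follows that depth is Z = f * |Tx| / disparity (when cx = c'x). As a sanity check, here is a minimal sketch, with made-up numbers for f, cx, cy, the baseline and one pixel, of the per-pixel multiplication [X Y Z W]^T = Q * [x y d 1]^T that reprojectImageTo3D() carries out:

#include <cstdio>

int main()
{
    // Example calibration values (hypothetical; use your own f, cx, cy, baseline).
    const double f  = 348.087;   // focal length in pixels
    const double cx = 160.0, cy = 120.0;
    const double Tx = -95.0;     // translation between optical centres (negative baseline)

    // One pixel of the disparity map.
    const double x = 320.0, y = 240.0, d = 12.5;

    // [X Y Z W]^T = Q * [x y d 1]^T with the Q matrix shown above.
    const double X = x - cx;
    const double Y = y - cy;
    const double Z = f;
    const double W = -d / Tx;    // (cx - c'x) = 0 under CV_CALIB_ZERO_DISPARITY

    // Divide by W to leave homogeneous coordinates; depth equals f * |Tx| / d.
    std::printf("3D point: [%.1f, %.1f, %.1f]\n", X / W, Y / W, Z / W);
    return 0;
}

With these numbers Z works out to roughly 348.087 * 95 / 12.5 ≈ 2645, i.e. about 2.6 m if the baseline is given in millimetres, which also shows that the 3D coordinates come out in the same units as the baseline.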