The stereo_match.cpp example converts L and R images into a disparity map and a point cloud. I want to adapt this example to compute the disparity and point cloud from 2 consecutive frames of a single calibrated camera. Is it possible? If this example isn't suited to my goal, what are the steps to obtain what I want?
Disparity map from 2 consecutive frames of a SINGLE calibrated camera. Is it possible?
A disparity map, in a stereo system, is used to obtain depth information, i.e. the distance to objects in the scene. For that, you need the distance between the cameras (the baseline) to be able to convert disparity values into real-world dimensions.
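For reference, the standard pinhole relation for a rectified pair is Z = f * B / d. A minimal sketch of the conversion, where the focal length `f` and baseline `B` below are hypothetical values you would take from your own calibration:

```cpp
#include <cstdio>

int main() {
    // Hypothetical calibration values -- replace with your own.
    const double f = 700.0;  // focal length in pixels (from calibration)
    const double B = 0.12;   // baseline between the two views, in meters

    // For a rectified stereo pair, depth follows from disparity d (pixels):
    //   Z = f * B / d
    double d = 35.0;         // example disparity value in pixels
    double Z = f * B / d;    // depth in meters along the optical axis
    std::printf("disparity %.1f px -> depth %.3f m\n", d, Z);
    return 0;
}
```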
On the other hand, if you have consecutive frames from a static camera, I suppose what you want is the differences between them. You can obtain that with an optical flow algorithm. Dense optical flow is calculated for every pixel in the image, in the same way as disparity, and it outputs the movement direction and magnitude. The most common optical flow algorithms are sparse, however: they track only a set of "strong", well-defined points.
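A minimal dense optical flow sketch using OpenCV's Farneback implementation (the file names are placeholders; parameter values are just reasonable defaults, not tuned for your data):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video.hpp>

int main() {
    // Two consecutive frames from the same camera (placeholder file names).
    cv::Mat prev = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || next.empty()) return 1;

    // Dense optical flow: one (dx, dy) motion vector per pixel.
    cv::Mat flow;  // CV_32FC2
    cv::calcOpticalFlowFarneback(prev, next, flow,
                                 0.5,  // pyramid scale
                                 3,    // pyramid levels
                                 15,   // averaging window size
                                 3,    // iterations per level
                                 5,    // pixel neighborhood for polynomial fit
                                 1.2,  // Gaussian sigma for the fit
                                 0);   // flags

    // Split into movement magnitude and direction, as described above.
    cv::Mat parts[2], mag, ang;
    cv::split(flow, parts);
    cv::cartToPolar(parts[0], parts[1], mag, ang, /*angleInDegrees=*/true);
    return 0;
}
```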
It may make sense to run a disparity algorithm if you have a static scene but move the camera, simulating the two cameras of a stereo rig; see the sketch below.
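A hedged sketch of that idea, assuming the intrinsics `K` are known from your calibration (all numeric values and file names below are placeholders): match features between the two frames, recover the relative pose, rectify the frames as if they were a stereo pair, then run the same block matching and reprojection that stereo_match.cpp uses. Note that the recovered translation has unknown scale, so the resulting point cloud is only defined up to scale.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    // Hypothetical intrinsics -- take these from your calibration.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320,
                                           0, 700, 240,
                                           0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(1, 5, CV_64F);  // assume negligible distortion

    // 1. Match features between the two frames.
    auto orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, des1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, des2);
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(des1, des2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // 2. Recover the relative camera pose (R, t). t is unit-norm:
    //    the monocular "baseline" is only defined up to scale.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC);
    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t);

    // 3. Rectify the two frames as if they were a stereo pair.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K, dist, K, dist, img1.size(), R, t,
                      R1, R2, P1, P2, Q);
    cv::Mat m1x, m1y, m2x, m2y, rect1, rect2;
    cv::initUndistortRectifyMap(K, dist, R1, P1, img1.size(), CV_32FC1, m1x, m1y);
    cv::initUndistortRectifyMap(K, dist, R2, P2, img2.size(), CV_32FC1, m2x, m2y);
    cv::remap(img1, rect1, m1x, m1y, cv::INTER_LINEAR);
    cv::remap(img2, rect2, m2x, m2y, cv::INTER_LINEAR);

    // 4. From here the stereo_match.cpp pipeline applies as-is.
    auto sgbm = cv::StereoSGBM::create(0, 128, 5);
    cv::Mat disp, dispF, xyz;
    sgbm->compute(rect1, rect2, disp);
    disp.convertTo(dispF, CV_32F, 1.0 / 16.0);  // SGBM output is fixed-point (x16)
    cv::reprojectImageTo3D(dispF, xyz, Q, /*handleMissingValues=*/true);
    return 0;
}
```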
I suppose we cannot calculate an accurate disparity map from a single camera directly. When computing a disparity map we assume that the vertical pixel coordinate of a point is the same in both images of the stereo rig and only the horizontal pixel coordinate changes; in a monocular image sequence this may not hold true, because the camera moves arbitrarily between two consecutive frames.
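One way to see this concretely: estimate the fundamental matrix between the two frames and inspect the epipolar lines. In a rectified stereo rig they are horizontal (slope ~0), but for an arbitrary camera motion they generally are not. A sketch, where `pts1`/`pts2` are assumed to be matched pixel coordinates (e.g. from the ORB matching in the previous answer's sketch):

```cpp
#include <opencv2/calib3d.hpp>
#include <cstdio>
#include <vector>

// pts1/pts2: matched pixel coordinates from two consecutive frames.
void checkEpipolarLines(const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2) {
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);

    // Epipolar lines in image 2 for the points of image 1, returned as
    // (a, b, c) with a*x + b*y + c = 0. A horizontal line has a ~= 0.
    std::vector<cv::Vec3f> lines;
    cv::computeCorrespondEpilines(pts1, 1, F, lines);
    for (const auto& l : lines) {
        double slope = -l[0] / l[1];  // dy/dx of the epipolar line
        std::printf("epipolar line slope: %.4f\n", slope);
    }
}
```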
Yes, if the camera (or the scene) is moving.