I am trying to isolate moving objects from a moving camera so that I can later apply some further processing algorithms to them, but I seem to have become a little stuck.
So far I am working with OpenCV, getting sparse optical flow from PyrLKOpticalFlow. The general idea was to find the features that move differently from the background points in the image, then treat clusters of these differently-moving features as moving objects for further tracking/processing. My problem is that while I have found a few academic papers that use a strategy like this, so far I haven't been able to find a simple way to accomplish it myself.
What would be a good method for using this optical flow data to detect moving objects from a moving camera? Is this even the best approach to be taking, or is there some simpler approach that I may be overlooking?
I managed to find a method in OpenCV that more or less does what I want.
After finding the sparse optical flow points between two consecutive frames with GoodFeaturesToTrackDetector and PyrLKOpticalFlow (giving me prevPts and nextPts), I use findHomography with RANSAC to estimate the global motion due to camera movement while excluding the outliers caused by independently moving objects. I then use perspectiveTransform to warp prevPts through that homography, compensating for the camera motion (giving me warpedPts). Finally, I compare warpedPts to nextPts to find the moving objects.
The end result is that even with the camera moving, there is little difference between a point in warpedPts and its counterpart in nextPts when the underlying object is stationary, while there is a significant difference when the tracked points lie on a moving object. From there it is just a matter of grouping the moving points on the basis of proximity and similarity of movement.
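That last grouping step can be sketched as a simple single-link clustering over the flagged points. Here pts would be the nextPts flagged as moving and flows their compensated displacement vectors; the function name and both thresholds are hypothetical choices for illustration:

```python
import numpy as np

def group_moving_points(pts, flows, max_dist=40.0, max_flow_diff=3.0):
    """Greedy single-link clustering: two points land in the same group
    when they are spatially close AND their flow vectors are similar."""
    n = len(pts)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet assigned"
    group = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = group
        stack = [i]
        while stack:  # flood-fill through the similarity graph
            j = stack.pop()
            near = ((np.linalg.norm(pts - pts[j], axis=1) < max_dist)
                    & (np.linalg.norm(flows - flows[j], axis=1) < max_flow_diff)
                    & (labels == -1))
            for k in np.flatnonzero(near):
                labels[k] = group
                stack.append(k)
        group += 1
    return labels

# Toy example: three points moving right, two distant points moving down.
pts = np.array([[10, 10], [15, 12], [12, 14], [200, 200], [205, 198]], float)
flows = np.array([[5, 0], [5.5, 0.2], [4.8, -0.1], [0, 5], [0.3, 5.1]], float)
labels = group_moving_points(pts, flows)
```

The two thresholds play the same roles as the proximity and movement-similarity criteria in the text; something like DBSCAN over the concatenated (position, flow) vectors would be a more robust drop-in replacement.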