I am using opencv-python to align two images. They were scanned by the same device, so they should be alignable with translation and rotation only; no scaling (or at most a negligible amount) should be involved in the feature matching and transformation estimation.
I found an answer on the OpenCV forum, posted years ago, which suggested setting nlevels to 1 when using ORB.
So I modified my code like this:
```python
import cv2

if __name__ == "__main__":
    full_affine = False
    try_cuda = True
    match_conf = 0.3

    # nlevels=1 disables the scale pyramid, per the forum suggestion
    finder = cv2.ORB_create(scaleFactor=1.2, nlevels=1, edgeThreshold=31,
                            firstLevel=0, WTA_K=2,
                            scoreType=cv2.ORB_HARRIS_SCORE,
                            nfeatures=100, patchSize=31)
    matcher = cv2.detail_AffineBestOf2NearestMatcher(full_affine, try_cuda,
                                                     match_conf)

    source_img = cv2.imread("000010.jpg")
    target_img = cv2.imread("000000.jpg")

    source_feature = cv2.detail.computeImageFeatures2(featuresFinder=finder,
                                                      image=source_img)
    target_feature = cv2.detail.computeImageFeatures2(featuresFinder=finder,
                                                      image=target_img)

    matching_result = matcher.apply(source_feature, target_feature)

    print(type(matching_result))
    print("matching_result.confidence")
    print(matching_result.confidence)
    print("matching_result.H")
    print(matching_result.H)
    print("matching_result.dst_img_idx")
    print(matching_result.dst_img_idx)
    print("matching_result.src_img_idx")
    print(matching_result.src_img_idx)

    t = matching_result.H
    print(type(t))
```
According to the OpenCV Python API, matching_result.H is the estimated 2D transformation matrix.
I expected the output to be an affine transformation matrix without scaling. However, the estimated matrix obviously scales the image.
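One way to verify that numerically: the singular values of the 2x2 linear part of an affine matrix are its scale factors along the transform's principal axes, so a scale-free (rigid) transform has both equal to 1. A minimal sketch with a made-up matrix standing in for matching_result.H (not actual output from the script above):

```python
import numpy as np

# Made-up affine matrix standing in for matching_result.H:
# uniform scale 0.95 combined with a small rotation and a translation.
theta = 0.1
H = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
H = np.hstack([H, [[12.3], [-4.7]]])

# Singular values of the top-left 2x2 block = scale factors of the
# transform; this works whether H is stored as 2x3 or 3x3.
scales = np.linalg.svd(H[:2, :2], compute_uv=False)
print(scales)  # ~[0.95, 0.95] -> the estimate shrinks the image by 5%
```

If both singular values come out close to but not exactly 1 for the real H, the estimator is introducing a small scale even though the features were computed on a single pyramid level.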
My guess is that the previously suggested fix of setting nlevels=1 only restricts the feature pyramid to a single scale, while the transformation estimation itself is still free to introduce scaling.
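If that is the case, one workaround (a sketch, not a drop-in replacement for the stitching-module matcher) is to match the ORB descriptors directly, estimate a similarity transform with cv2.estimateAffinePartial2D, and then divide out the scale so the result is purely rigid. The strip_scale helper and the file names below are illustrative; the cv2 pipeline is shown as comments because it needs the actual scans:

```python
import numpy as np

def strip_scale(H):
    """Remove the uniform scale from the 2x2 linear part of a 2x3
    similarity matrix (rotation + uniform scale + translation),
    leaving a pure rigid (rotation + translation) transform."""
    A = H[:2, :2]
    # For a similarity s*R, det(A) = s**2 (R is a rotation, det 1).
    scale = np.sqrt(np.linalg.det(A))
    H_rigid = H.astype(np.float64).copy()
    H_rigid[:2, :2] = A / scale
    return H_rigid, scale

# Demo on a synthetic similarity with 3% scale and a small rotation:
theta, s = 0.05, 1.03
H = np.array([[s * np.cos(theta), -s * np.sin(theta), 10.0],
              [s * np.sin(theta),  s * np.cos(theta), -3.0]])
H_rigid, scale = strip_scale(H)
print(scale)                           # ~1.03, the recovered scale
print(np.linalg.det(H_rigid[:2, :2]))  # ~1.0, i.e. rigid

# With real images, H would come from a pipeline like this
# (paths and parameters are placeholders):
#
#   src = cv2.imread("000010.jpg", cv2.IMREAD_GRAYSCALE)
#   dst = cv2.imread("000000.jpg", cv2.IMREAD_GRAYSCALE)
#   orb = cv2.ORB_create(nfeatures=2000)
#   kp1, des1 = orb.detectAndCompute(src, None)
#   kp2, des2 = orb.detectAndCompute(dst, None)
#   matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
#   pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
#   pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
#   H, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
#   H_rigid, scale = strip_scale(H)
```

Since estimateAffinePartial2D constrains the model to rotation + uniform scale + translation (no shear), dividing out the single scale factor is enough to make the result rigid.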