I'm doing a project on panorama creation using Python and OpenCV. The methodology I'm following to create the panorama consists of the following steps, working on pairs of images:

  1. find the keypoints of each of the 2 images with SIFT detector
  2. match the keypoints
  3. find the homography matrix that transforms the second image to the wanted image, using the RANSAC algorithm
  4. stitch the 2 images

The problem is that the method cv2.warpPerspective(imgs[1], M, shape) returns an image slightly rotated to the left (imgs[1] is the right image, M is the homography matrix and shape is the shape of the result image). Because of this, the stitched image is not aligned with the horizontal axis, and a black frame appears in the top right corner of the stitched image. When I try to stitch the next image onto this first part of the panorama, that black frame is treated as part of the left image and therefore appears between the images.
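One workaround I've been considering for the black frame leaking into the next stitch (my own sketch, not part of the original pipeline) is to paste the previous panorama onto the warped image only where its pixels are nonzero, so the black border never overwrites real content underneath:

```python
import numpy as np

def paste_nonblack(canvas, img):
    """Copy img onto the top-left corner of canvas, but only where
    img has nonzero pixels, so the black border from an earlier
    stitch does not overwrite warped content underneath."""
    h, w = img.shape[:2]
    region = canvas[:h, :w]
    mask = img.sum(axis=2, keepdims=True) > 0  # True where img is not pure black
    canvas[:h, :w] = np.where(mask, img, region)
    return canvas
```

This would replace the plain slice assignment `dst[0:h, 0:w] = imgs[0]`; the trade-off is that any genuinely black pixel in the source image is also skipped.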

Most people using this methodology to create panoramas follow exactly the structure below, but for them the result is an image whose left and right sides are vertical, not a rotated image.
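To quantify "slightly rotated to the left", one quick sanity check (a helper of my own, not from the standard pipeline) is to read the rotation angle off the homography's upper-left 2x2 block; for a near-pure translation between shots this should be close to zero:

```python
import numpy as np

def homography_tilt_deg(M):
    """Approximate rotation angle (in degrees) implied by the
    upper-left 2x2 block of a 3x3 homography; a clearly nonzero
    value explains a tilted warpPerspective result."""
    return float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
```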

import cv2
import numpy as np
from matplotlib import pyplot as plt

img1 = cv2.cvtColor(imgs[1], cv2.COLOR_BGR2GRAY) #right image in grayscale
img2 = cv2.cvtColor(imgs[0], cv2.COLOR_BGR2GRAY) #left image in grayscale

#create sift detector object
sift = cv2.xfeatures2d.SIFT_create() 

kp1, des1 = sift.detectAndCompute(img1, None) #find keypoints and descriptor for right image
kp2, des2 = sift.detectAndCompute(img2, None) #find keypoints and descriptor for left image

#create BFMatcher object
match = cv2.BFMatcher()
#find the matches between the 2 images using BFMatcher object
matches = match.knnMatch(des1, des2, k=2)

good = []
#find and store the good matches
for m,n in matches:
    if m.distance < 0.8*n.distance:
        good.append(m)

MIN_MATCH_COUNT = 10
#if the number of good matches is higher than 10
if len(good) > MIN_MATCH_COUNT:
    #reshape the keypoint lists and declare the source and destination points
    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1, 1, 2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1, 1, 2)
    
    #find the homography matrix M
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    #h,w = img1.shape #heigh and width of right image
    #pts = np.float32([[0,0], [0,h-1], [w-1, h-1], [w-1,0]]).reshape(-1,1,2)
    #find the perspective transformation using the homography matrix
    #transform = cv2.perspectiveTransform(pts, M)
    
    #the line below is to show the overlapping region - optional
    #img2 = cv2.polylines(img2, [np.int32(transform)], True, 255, 3, cv2.LINE_AA)
    
    #plt.imshow(img2)
else:
    print("Not enough matches are found.")
    raise SystemExit  #M would be undefined below, so stop here

#connect the 2 images
dst = cv2.warpPerspective(imgs[1], M, (imgs[0].shape[1] + imgs[1].shape[1], imgs[0].shape[0]))
dst[0:imgs[0].shape[0], 0:imgs[0].shape[1]] = imgs[0]

#show stitched image in RGB
plt.imshow(cv2.cvtColor(dst, cv2.COLOR_BGR2RGB))
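As a partial cleanup (again my own sketch, not a fix for the tilt itself), the empty frame around the stitched result can be trimmed by dropping rows and columns that are fully black:

```python
import numpy as np

def crop_black_border(img):
    """Trim fully black rows/columns from a stitched result.

    This only removes the empty frame around the panorama; it does
    not correct the rotation introduced by the homography."""
    mask = img.sum(axis=2) > 0          # True where any channel is nonzero
    ys, xs = np.where(mask)             # coordinates of non-black pixels
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```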

This is the link to my github repository for this project https://github.com/angepl/panoramaTest.git
