Feature detector and descriptor for low resolution images


I am working with low-resolution (VGA), JPEG-compressed image sequences for visual navigation on a mobile robot. At the moment I am using SURF for detecting keypoints and extracting descriptors, and FLANN for matching them. I get 4000-5000 features per image, and usually 350-450 matches per pair of consecutive images before applying RANSAC (which usually reduces the number of matches by about 20%).

I am trying to increase the number (and quality) of the matches. I have tried two other detectors: SIFT and ORB. SIFT increases the number of features noticeably (about 35% more tracked features overall), but it is much slower. ORB extracts roughly as many features as SURF, but its matching performance is much poorer (~100 matches in the best cases). My OpenCV setup for ORB is:

cv::ORB orb = cv::ORB(10000, 1.2f, 8, 31);
orb(frame->img, cv::Mat(), im_keypoints, frame->descriptors);
frame->descriptors.convertTo(frame->descriptors, CV_32F); // so that it has the same type as m_dists

And then, when matching:

cv::Mat m_indices(descriptors1.rows, 2, CV_32S);
cv::Mat m_dists(descriptors1.rows, 2, CV_32F);
cv::flann::Index flann_index(descriptors2, cv::flann::KDTreeIndexParams(6));
flann_index.knnSearch(descriptors1, m_indices, m_dists, 2, cv::flann::SearchParams(64) );

What is the best feature detector and descriptor extractor when working with low-resolution, noisy images? Should I change any FLANN parameters depending on the feature detector used?

EDIT:

I am posting some pictures from a fairly easy sequence to track, exactly as they are given to the feature detectors. They have been preprocessed to remove some noise with cv::bilateralFilter().

[Image 1] [Image 2]


There are 3 answers

Answer by don_q:

If you have control over the features you track, you can make them rotation invariant and use correlation.

Answer by Tobias Senst:

In many cases the pyramidal Lucas-Kanade optical flow method is a good choice. It has some restrictions, e.g. it handles large illumination changes poorly. If you use a large window (21x21 or bigger), the tracker should be more robust to noise. You can get features to track from your favourite detector (SIFT, SURF, FAST, or GFTT), or you can initialise them on a regularly sampled grid. The grid gives you the advantage of regularly sampled motion information across your scene.

Answer by Richard Wheatley:

I have been working with ORB feature detection for a few months now. I have found no issues with ORB itself, although the authors do mention fine-tuning some of its parameters to make it perform better.

https://github.com/wher0001/Image-Capture-and-Processing

When I run ORB on your pictures with the standard distance sorting, I get the following result, which obviously has a few bad matches. [ORB matching, standard sorting]

I always set nfeatures high (5000) and start from there. It defaults to 500, which is what I used for these pictures. From there you can change how the matches are sorted (as shown here), reduce nfeatures, or just use the top X matches.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread("c:/Users/rwheatley/Desktop/pS8zi.jpg")
img2 = cv2.imread("c:/Users/rwheatley/Desktop/vertrk.jpg")

grey1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
grey2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Initiate ORB detector
orb = cv2.ORB_create(nfeatures=5000)

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(grey1,None)
kp2, des2 = orb.detectAndCompute(grey2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)

# Draw the first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10],None,flags=2)
print(len(matches))

plt.imshow(img3),plt.show()

I then switched to a few different methods that I found beneficial when using an (excuse the term) crappy Dell webcam. [ORB with knnMatch]

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread("c:/Users/rwheatley/Desktop/pS8zi.jpg")
img2 = cv2.imread("c:/Users/rwheatley/Desktop/vertrk.jpg")

grey1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
grey2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Initiate ORB detector
orb = cv2.ORB_create(nfeatures=5000)

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(grey1,None)
kp2, des2 = orb.detectAndCompute(grey2,None)

# BFMatcher with Hamming norm for ORB's binary descriptors
# (crossCheck must stay off for the k=2 ratio test below)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])

# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
print(len(good))

plt.imshow(img3),plt.show()

There is a third type of matching, FLANN-based matching, which is currently broken even in the latest version of OpenCV. When that is fixed, I suggest you switch to it, or else just apply some smarts to the pipeline.

For example, if you add gyros to the system, you can predict the camera motion between frames and throw out matches that fall outside the resulting search window.
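A toy sketch of that gating idea (the function name and window size are hypothetical, not from any library): given a gyro-predicted pixel shift between frames, keep only matches whose displacement stays within a window around that prediction.

```python
import numpy as np

def filter_by_predicted_motion(pts1, pts2, predicted_shift, window=30.0):
    """Keep match pairs whose displacement lies within `window` pixels
    of the gyro-predicted shift (names and threshold are hypothetical)."""
    disp = pts2 - pts1                              # per-match displacement
    err = np.linalg.norm(disp - predicted_shift, axis=1)
    return err < window                             # boolean mask over matches

# Toy data: 5 matches, true camera shift (10, 0), one gross outlier.
pts1 = np.array([[100, 100], [200, 150], [300, 200], [50, 60], [400, 300]], float)
shift = np.array([10.0, 0.0])
pts2 = pts1 + shift
pts2[3] += np.array([200.0, -150.0])               # inject an outlier match

mask = filter_by_predicted_motion(pts1, pts2, predicted_shift=shift)
```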