I am currently working on an object classification problem. My objective is to use SURF descriptors to train an MLP-based artificial neural network in OpenCV and generate a model for object classification. So far, I have achieved the following:
I am computing SURF keypoints using the following code:
vector<KeyPoint> computeSURFKeypoints(Mat image) {
    // SURF detector with a Hessian threshold of 400
    SurfFeatureDetector surfdetector(400, 4, 2, true, false);
    vector<KeyPoint> keypoints;
    surfdetector.detect(image, keypoints);
    return keypoints;
}
I compute the SURF descriptors over these keypoints using the following code:
Mat computeSURFDescriptors(Mat image, vector<KeyPoint> keypoints) {
    // default SurfDescriptorExtractor produces 64-element descriptors
    SurfDescriptorExtractor extractor;
    Mat descriptors;
    extractor.compute(image, keypoints, descriptors);
    return descriptors;
}
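For reference, I call these two functions roughly like this (the file name is only a placeholder, and imread comes from opencv2/highgui):

Mat image = imread("object.jpg", CV_LOAD_IMAGE_GRAYSCALE);
vector<KeyPoint> keypoints = computeSURFKeypoints(image);
Mat descriptors = computeSURFDescriptors(image, keypoints);
// descriptors now holds one 64-element row per detected keypoint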
The problem I am facing is that the size of the descriptor matrix varies from image to image: each feature point yields a 64-element descriptor, so the number of rows depends on how many keypoints are detected. For training the neural network, I need the descriptor to have a fixed size. To achieve that, I am using PCA to reduce the descriptor size as follows:
Mat projection_result;
// PCA over the descriptor matrix (samples taken column-wise), keeping 64 components
PCA pca(descriptors, Mat(), CV_PCA_DATA_AS_COL, 64);
pca.project(descriptors, projection_result);
return projection_result;
This does reduce the dimensions of the descriptor matrix, but the retained feature points are not representative of the image, which leads to poor matching results. How can I reduce the descriptor dimensions while keeping good feature points? Any help would be appreciated.
You can use the response value of each keypoint returned by feature detection. Sorting the keypoints by their response value and keeping only the strongest ones should be the way to go, although I have never tested this; see the sketch below.
See the definition of cv::KeyPoint and its response field: https://github.com/Itseez/opencv/blob/master/modules/core/include/opencv2/core/types.hpp#L697
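A minimal sketch of that idea (the cutoff of 64 keypoints is an arbitrary choice, and retainStrongestKeypoints is just a name I made up for this answer):

#include <algorithm>
#include <vector>
#include <opencv2/features2d/features2d.hpp>

using namespace cv;
using namespace std;

// Order keypoints so that the strongest (highest-response) come first
bool compareKeypointResponse(const KeyPoint& a, const KeyPoint& b) {
    return a.response > b.response;
}

// Keep only the 'count' keypoints with the highest response
vector<KeyPoint> retainStrongestKeypoints(vector<KeyPoint> keypoints, size_t count) {
    sort(keypoints.begin(), keypoints.end(), compareKeypointResponse);
    if (keypoints.size() > count)
        keypoints.resize(count);
    return keypoints;
}

If every image yields at least that many keypoints, computing the SURF descriptors on the retained keypoints gives you a fixed-size matrix (e.g. 64 x 64) that you can flatten into a single feature vector for the MLP. As far as I know, cv::KeyPointsFilter::retainBest does essentially the same filtering, so you may not even need your own helper.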