I'm using hessgpu to efficiently compute Hessian-affine SIFT descriptors on the GPU.
In this project, DevIL is used for reading images: if `SiftGPU::RunSIFT(const char *imgpath)` is called (implemented here), then `ilLoadImage` is used in `GLTexImage.cpp` to read the image (as an RGB image).
However, in my project I use `cv::imread` to read images. The project provides `SiftGPU::RunSIFT(int width, int height, const void *data, unsigned int gl_format, unsigned int gl_type)` to compute descriptors from data supplied directly by the user.
So I tried:

```cpp
cv::Mat img = cv::imread("image.jpg", cv::IMREAD_GRAYSCALE);
sift.RunSIFT(img.cols, img.rows, img.data, GL_LUMINANCE, GL_UNSIGNED_BYTE);
```
But this produces slightly fewer keypoints than `sift.RunSIFT("image.jpg");`. I then tried:
```cpp
cv::Mat img = cv::imread("image.jpg");
sift.RunSIFT(img.cols, img.rows, img.data, GL_RGB, GL_UNSIGNED_BYTE);
```
But this produces 0 keypoints, so something very wrong is happening. I think:

- `ilLoadImage` uses RGB images, while the only working method I have found so far using `cv::imread` works only with grayscale images.
- It's possible that DevIL uses a different process to read images than OpenCV, especially for RGB images (since the second approach produced 0 keypoints).
How can I do the equivalent of `ilLoadImage` using `cv::imread`?
Changing the `RunSIFT` line solved the problem and produced the exact number of descriptors.
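The changed line itself did not survive in the answer above. My best guess, offered only as an assumption based on OpenCV's BGR storage order, is that it declared the buffer's real channel order instead of converting it:

```cpp
// Assumption: GL_BGR describes cv::imread's in-memory channel order,
// letting SiftGPU/OpenGL consume the buffer without any conversion.
sift.RunSIFT(img.cols, img.rows, img.data, GL_BGR, GL_UNSIGNED_BYTE);
```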