I need to implement histogram of oriented gradients (HOG) for patches in an image (one HOG feature vector per patch, not one HOG for the whole image). I have been using the Matlab code at this link and translating it to OpenCV in Python. I made some changes to fit it to my purpose; one of the main differences between the Matlab and Python versions is the way I compute the gradient of each cell: in Matlab I use filter2, as in the link above, while in OpenCV I use the Sobel operator. My problem is that the gradients these two methods produce are different, and I have had a hard time fixing it. I tried changing the numerical representation of both the image and the kernel. I also tried filter2D in OpenCV and imfilter in Matlab, but none of them worked. Here is the Matlab code for calculating the gradient using filter2:
blockSize=26;
cellSize=floor(blockSize/2);
cellPerBlock=4;
numBins=9;
dim = [444,262];
RGB = imread('testImage.jpg');
img= rgb2gray(RGB);
img = imresize(img, [262,444], 'bilinear', 'Antialiasing',false);
%operators
hx = [-1,0,1];
hy = [-1;0;1];
%derivatives
dx = filter2(hx, double(img));
dy = filter2(hy, double(img));
% Remove the 1 pixel border.
dx = dx(2 : (size(dx, 1) - 1), 2 : (size(dx, 2) - 1));
dy = dy(2 : (size(dy, 1) - 1), 2 : (size(dy, 2) - 1));
% Convert the gradient vectors to polar coordinates (angle and magnitude).
ang = atan2(dy, dx);
ang(ang < 0) = ang(ang < 0)+ pi;
mag = ((dy.^2) + (dx.^2)).^.5;
And this is the Python OpenCV version that I wrote using the Sobel operator:
import cv2
import numpy as np

blockSize = 26
cellSize = int(blockSize / 2)
cellPerBlock = 4
numBins = 9
dim = (444, 262)
angDiff = 10**-6
img = cv2.imread('3132 2016-04-25 12-35-43-53991.jpg', 0)
img = cv2.resize(img, dim, interpolation=cv2.INTER_LINEAR)
# Derivatives: ksize=1 gives the plain 1x3 / 3x1 kernel [-1, 0, 1] (no smoothing)
sobelx = cv2.Sobel(img.astype(float), cv2.CV_64F, 1, 0, ksize=1)
# Remove the 1 pixel border, as in the Matlab code
sobelx = sobelx[1:np.shape(sobelx)[0] - 1, 1:np.shape(sobelx)[1] - 1]
sobely = cv2.Sobel(img.astype(float), cv2.CV_64F, 0, 1, ksize=1)
sobely = sobely[1:np.shape(sobely)[0] - 1, 1:np.shape(sobely)[1] - 1]
# Magnitude and angle in radians ([0, 2*pi)), then wrap the angles down to [0, pi]
mag, ang = cv2.cartToPolar(sobelx, sobely)
ang[ang > np.pi + angDiff] = ang[ang > np.pi + angDiff] - np.pi
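The angle wrapping in the last line is meant to give the same [0, pi] convention as the Matlab atan2 mapping. Here is a minimal sketch of how that can be checked, assuming sobelx, sobely and ang from the code above (this check is not part of the original pipeline):

# Sanity check (sketch): the wrapped cartToPolar angles should agree with the
# Matlab-style mapping of atan2 into [0, pi]. cartToPolar uses a fast,
# approximate atan2, so compare with a tolerance rather than expecting
# exact equality.
ang_ref = np.arctan2(sobely, sobelx)           # range [-pi, pi], like Matlab's atan2
ang_ref[ang_ref < 0] += np.pi                  # same mapping as ang(ang < 0) + pi
diff = np.abs(ang_ref - ang)
diff = np.minimum(diff, np.abs(np.pi - diff))  # orientations are equivalent mod pi
print(np.max(diff))                            # should be small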
Edit: I have followed the post HERE, using the bilinear method in Matlab and cv2.INTER_LINEAR in OpenCV, as well as deactivating antialiasing in Matlab, but the two resized images still do not match exactly. Here is a part of the resized image for a test image in Matlab:
And this is the same part from OpenCV:
2nd Edit: It turns out that the difference is caused by the way rounding happens during the resize. So, I changed my OpenCV code to:
img = cv2.resize(img.astype(float), dim, interpolation = cv2.INTER_LINEAR)
and the Matlab code to:
imresize(double(img), [262,444], 'bilinear', 'Antialiasing',false);
And now both give me the same result.
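To see the effect of the rounding, the two resize paths can be compared directly. A minimal sketch, assuming img is the uint8 image as loaded by cv2.imread and dim = (444, 262) as above:

# Sketch: the uint8 path rounds every interpolated pixel to an integer,
# while the float path keeps the fractional interpolation results.
resized_u8 = cv2.resize(img, dim, interpolation=cv2.INTER_LINEAR).astype(float)
resized_f = cv2.resize(img.astype(float), dim, interpolation=cv2.INTER_LINEAR)
print(np.max(np.abs(resized_u8 - resized_f)))  # typically nonzero: sub-integer rounding differences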
I think the remaining problem is caused by the derivative methods. I have also checked cv2.filter2D in OpenCV (roughly as in the sketch below), but the results are still different. I hope someone can give me a hint on what could be causing the problem.
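For reference, here is a minimal sketch of how Matlab's filter2 (correlation with a zero-padded 'same' output) might be emulated with cv2.filter2D and compared against the Sobel result. The explicit kernels and the BORDER_CONSTANT choice are my assumptions about what should correspond to filter2, not a verified fix; it assumes img is the resized float image and sobelx, sobely are the cropped Sobel derivatives from the code above.

# Sketch: emulate filter2 with cv2.filter2D and diff against the Sobel output.
hx = np.array([[-1.0, 0.0, 1.0]])   # same 1x3 kernel as in the Matlab code
hy = hx.T                           # 3x1 kernel for the y derivative
# filter2D performs correlation (like filter2); BORDER_CONSTANT pads with zeros,
# whereas OpenCV's default border is reflection (BORDER_REFLECT_101).
dx = cv2.filter2D(img, cv2.CV_64F, hx, borderType=cv2.BORDER_CONSTANT)
dy = cv2.filter2D(img, cv2.CV_64F, hy, borderType=cv2.BORDER_CONSTANT)
# Remove the 1 pixel border, as in both versions above.
dx = dx[1:-1, 1:-1]
dy = dy[1:-1, 1:-1]
print(np.max(np.abs(dx - sobelx)), np.max(np.abs(dy - sobely)))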