I'm going to use the HOG feature descriptor combined with an SVM classifier for an object-detection project. The HOG implementation provided in scikit-image gives very good results in the classification phase, but it runs very slowly (about 20 s per image). The OpenCV version, on the other hand, is very fast (about 0.3 s per image). The problem is that although I use the same parameters for both HOG versions, their results differ. The parameters I used for each version are as follows:
OpenCV version:
import cv2

winSize = (4, 4)
blockSize = (2, 2)
blockStride = (2, 2)
cellSize = (2, 2)
nbins = 5
hog = cv2.HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins)
hist = hog.compute(image)
scikit-image version:
from skimage.feature import hog

hist = hog(image, orientations=5, pixels_per_cell=(2, 2), cells_per_block=(2, 2), block_norm='L2-Hys')
The HOG output from OpenCV:
[[ 0. ]
[ 0. ]
[ 0.99502486]
...,
[ 0.99502486]
[ 0. ]
[ 0. ]]
The HOG output from scikit-image:
[[ 0. ]
[ 0. ]
[ 0.16415654]
...,
[ 0.14253933]
[ 0. ]
[ 0. ]]
It's worth noting that both descriptors generate the same number of features.
Why does the OpenCV HOG not produce the same results as scikit-image's?
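That the feature counts match can be checked with a bit of arithmetic (a sketch, assuming the descriptor is computed over a single 4×4 window; the layout formulas below are the standard HOG ones, not taken from either library's source):

```python
# Parameters from the question, assuming a 4x4 detection window.
win, block, stride, cell, nbins = 4, 2, 2, 2, 5

# OpenCV: blocks (in pixels) slide across the window with the given stride.
blocks_per_side = (win - block) // stride + 1        # 2
cells_per_block = (block // cell) ** 2               # 1 cell per block
opencv_len = blocks_per_side ** 2 * cells_per_block * nbins

# scikit-image: cells tile the image, then 2x2-cell blocks slide by one cell.
cells_per_side = win // cell                         # 2
sk_blocks_per_side = cells_per_side - 2 + 1          # 1 block position
skimage_len = sk_blocks_per_side ** 2 * 2 * 2 * nbins

print(opencv_len, skimage_len)  # both 20
```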
The implementation of the HOG paper in scikit-image differs from OpenCV's. The last time I went through the source code I noticed that, among other things, the normalization performed by scikit-image is not the one recommended by the paper.
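For reference, the normalization the paper recommends is L2-Hys: L2-normalize the block histogram vector, clip each component at 0.2, then renormalize. A minimal NumPy sketch of that scheme (the ε value here is an assumption, not taken from either library):

```python
import numpy as np

def l2_hys(block, eps=1e-5, clip=0.2):
    # L2-normalize the block histogram vector
    v = block / np.sqrt(np.sum(block ** 2) + eps ** 2)
    # clip components at 0.2, as recommended in the paper
    v = np.minimum(v, clip)
    # renormalize the clipped vector
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)

block = np.array([3.0, 0.0, 4.0, 10.0])
print(l2_hys(block))
```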
I would recommend using the OpenCV version, since it lets you change several parameters and is much closer to the implementation described in the HOG paper. Also, as you found out yourself, the OpenCV implementation is optimized and much faster.