Haar Cascade classifier does not detect faces in simple frontal pictures


I'm trying to do some simple face detection with OpenCV + Python using a Haar cascade classifier.

The code below detects faces correctly in image1 and image2, but fails to detect the face in image3.

Please help me understand why the face in image3 is not detected.

import numpy as np
import cv2

# Load the pre-trained Haar cascades for frontal faces and eyes
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('/home/swiftguy/computer-vision/image3.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces in the grayscale image (scale factor 1.3, min neighbours 3)
faces = face_cascade.detectMultiScale(gray, 1.3, 3)
for (x, y, w, h) in faces:
    # Draw the face rectangle, then search for eyes inside the face region only
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

There are 2 answers

Totoro (BEST ANSWER)

This is mainly due to the high brightness of the face, and the lack of sharp features on kids' faces (mainly around the nose bridge). Histogram equalisation before face detection can improve detection accuracy for images like this.
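
A minimal sketch of that preprocessing step, assuming the same cascade file as in the question and a hypothetical image path (adjust both to your setup):

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

img = cv2.imread('image3.jpg')  # hypothetical path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Spread the intensity histogram so a washed-out face regains contrast
equalised = cv2.equalizeHist(gray)

faces = face_cascade.detectMultiScale(equalised, 1.3, 3)
print(len(faces), 'face(s) found')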

If your detector needs to work well on such images, one possibility is to train the classifier with a set of similar images.

ZdaR

Haar cascades work on a binary principle. If you go through the documentation, it explains the whole face detection process; for reference, I am attaching sample images that give a brief introduction.

[Sample Haar feature images from the OpenCV face detection documentation]

As you can see from the images, the grayscale image is matched against pre-defined patterns. The black and white boxes represent regions of darker and lighter pixels respectively, so the whole process depends on the brightness of the pixels that make up a specific feature or pattern.

A threshold determines whether a region counts as black or white. Now consider the bottom-right image in the second snapshot: it uses the obvious fact that eyebrows are darker than the surrounding skin, so the area around the eyes can be simplified as BWB (Black White Black), where the first B is the darker left eyebrow, W is the skin tone between the eyebrows, and the last B is the right eyebrow. There are many such Haar features.
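
To make the BWB idea concrete, here is a rough illustrative sketch (not the classifier's actual internals) that evaluates one three-rectangle Haar-like feature over an eye region using an integral image; the patch coordinates are invented for the example:

import cv2

gray = cv2.imread('image3.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical path
ii = cv2.integral(gray)  # integral image of shape (h+1, w+1)

def rect_sum(x, y, w, h):
    # Sum of pixel values inside the rectangle, in constant time via the integral image
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# Invented eye-region rectangles: left eyebrow (B), skin between the brows (W), right eyebrow (B)
left = rect_sum(60, 80, 30, 15)
mid = rect_sum(90, 80, 30, 15)
right = rect_sum(120, 80, 30, 15)

# The feature compares light and dark areas; a real cascade compares this value to a learned threshold
feature = mid - (left + right)
print('feature value:', feature)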

Now coming to your image: its brightness is rather high, and the prominent dark features (eyebrows, lips, etc.) are washed out. So there is a chance that the brightness of the pixels that should form a Haar feature exceeds the threshold, and a BWB feature ends up looking like a WWW feature, failing the criteria for a face in the given image.
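
Besides equalising the histogram, it can also help to make detectMultiScale less strict on such bright, low-contrast images. A small sketch, with illustrative (not definitive) parameter values and a hypothetical image path:

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
gray = cv2.cvtColor(cv2.imread('image3.jpg'), cv2.COLOR_BGR2GRAY)  # hypothetical path
gray = cv2.equalizeHist(gray)

# A smaller scaleFactor scans more scales; fewer minNeighbors accepts weaker candidate detections
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=3, minSize=(30, 30))
print(faces)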