iOS face detection in portrait


I am having trouble with CoreImage's face detection. My app uses the front camera to take a portrait photo of a face. Here is my setup:

CIImage* image = [CIImage imageWithCGImage:self.photo.CGImage];
CIContext *context = [CIContext contextWithOptions:nil];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];


for (CIFaceFeature *f in features)
{
    NSLog(@"%@",NSStringFromCGRect(f.bounds));

    if (f.hasLeftEyePosition)
        NSLog(@"Left eye %.0f %.0f", f.leftEyePosition.x, f.leftEyePosition.y);

    if (f.hasRightEyePosition)
        NSLog(@"Right eye %.0f %.0f", f.rightEyePosition.x, f.rightEyePosition.y);

    if (f.hasMouthPosition)
        NSLog(@"Mouth %.0f %.0f", f.mouthPosition.x, f.mouthPosition.y);
}

Here is my console output:

{{437.5, 170}, {625, 625}}
Left eye 628 371
Right eye 639 594
Mouth 906 482

There are 2 problems with this:

  1. These coordinates are clearly not in the coordinate space of my app's view (320 × 568)
  2. They seem to have the wrong orientation. The eyes should have roughly the same y value, but instead they have roughly the same x value

How can I correct these issues?


1 Answer

Olotiar (Best Answer)

Core Image's face detector works in the image's coordinate space, not the view's. The coordinates returned to you are pixel positions within the image, not points in your view.

Here's a tutorial on what these coordinate spaces are and how to convert between them. This should clear things up for you.
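
As a rough illustration of that conversion (a minimal sketch, not taken from the tutorial; the helper name and the assumption that the image fills the view edge-to-edge are mine), you could map a face rectangle into view coordinates like this:

// Hypothetical helper (name is mine): map a CIFaceFeature bounds rect,
// which is in image pixels with a bottom-left origin, into a view's
// coordinate space (points, top-left origin). Assumes the image is drawn
// to fill the view exactly, with no aspect-fit letterboxing.
CGRect faceRectInView(CGRect faceBounds, CGSize imageSize, CGSize viewSize)
{
    // Flip the y axis: Core Image's origin is bottom-left, UIKit's is top-left.
    CGRect flipped = faceBounds;
    flipped.origin.y = imageSize.height - CGRectGetMaxY(faceBounds);

    // Scale from image pixels down to view points.
    CGFloat sx = viewSize.width  / imageSize.width;
    CGFloat sy = viewSize.height / imageSize.height;
    return CGRectMake(flipped.origin.x * sx,
                      flipped.origin.y * sy,
                      flipped.size.width  * sx,
                      flipped.size.height * sy);
}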

As for the orientation: you're right, it may well be reversed.

When the user takes a picture, whether in landscape or portrait, the actual image written to disk always has the same dimensions. The file only carries a flag saying which orientation it should be displayed in (Exif.Image.Orientation, to be precise). UIImageView respects that flag, but it is lost when you convert to CGImage and then to CIImage.

You can tell whether or not to swap the x and y values by looking at the original UIImage's imageOrientation property. If you want to learn more about what this flag is exactly, and how a surprisingly large number of people get it wrong, head over to here
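
For example (a minimal sketch using the commonly cited UIImageOrientation-to-EXIF mapping; double-check the mirrored cases against your own front-camera photos), you could derive the value for CIDetectorImageOrientation from the photo instead of hardcoding 6:

// Sketch: translate the UIImage's imageOrientation into the EXIF value
// that CIDetectorImageOrientation expects, instead of hardcoding 6.
int exifOrientation;
switch (self.photo.imageOrientation) {
    case UIImageOrientationUp:            exifOrientation = 1; break;
    case UIImageOrientationDown:          exifOrientation = 3; break;
    case UIImageOrientationLeft:          exifOrientation = 8; break;
    case UIImageOrientationRight:         exifOrientation = 6; break;
    case UIImageOrientationUpMirrored:    exifOrientation = 2; break;
    case UIImageOrientationDownMirrored:  exifOrientation = 4; break;
    case UIImageOrientationLeftMirrored:  exifOrientation = 5; break;
    case UIImageOrientationRightMirrored: exifOrientation = 7; break;
    default:                              exifOrientation = 1; break;
}
NSDictionary *imageOptions = @{CIDetectorImageOrientation: @(exifOrientation)};
NSArray *features = [detector featuresInImage:image options:imageOptions];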