I am using CIDetector to detect faces, then running OpenCV on the lower half of each face to detect the size of any smiles. I am using the code below to create the cv::Mat that OpenCV performs the detection on; as you can see, the image goes through the steps:
CIImage -> cropped CIImage -> NSBitmapImageRep -> CGImage -> cv::Mat
- (void)OpenCVdetectSmilesIn:(CIFaceFeature *)faceFeature usingImage:(CIImage *)ciFrameImage
{
CGRect lowerFaceRectFull = faceFeature.bounds;
lowerFaceRectFull.size.height *=0.5;
CIImage *lowerFaceImageFull = [ciFrameImage imageByCroppingToRect:lowerFaceRectFull];
NSBitmapImageRep* rep = [[[NSBitmapImageRep alloc] initWithCIImage:lowerFaceImageFull] autorelease];
CGImageRef lowerFaceImageFullCG = rep.CGImage;
//TODO: find alternative method of creating cv::mat from ciFrameImage.
std::vector<cv::Rect> smileObjects;
cv::Mat frame_gray;
CGColorSpaceRef colorSpace = CGImageGetColorSpace(lowerFaceImageFullCG);
CGFloat cols = lowerFaceRectFull.size.width;
CGFloat rows = lowerFaceRectFull.size.height;
cv::Mat cvMat((int)rows, (int)cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), lowerFaceImageFullCG);
CGContextRelease(contextRef);
cvtColor( cvMat, frame_gray, CV_RGBA2GRAY ); // mat is RGBX (alpha skipped last), not BGR
equalizeHist( frame_gray, frame_gray );
smileCascade.detectMultiScale( frame_gray, smileObjects, 1.1, 0, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(30, 30) );
}
It is working, but every now and then I get a crash on the NSBitmapImageRep line, and I cannot find a pattern to reproduce it. I am not sure about this, but some sort of "instinct" tells me that my method of creating the cv::Mat from the CIImage is messy and inefficient; in particular, I suspect the NSBitmapImageRep step forces an unnecessary render on the CPU, whereas all of the other steps can stay on the GPU. Am I right?
Does anyone know a better method of creating the cv::Mat from a cropped frame?
Note that ciFrameImage comes from the sample buffer delegate method:
CIImage *ciFrameImage = [CIImage imageWithCVImageBuffer:cvFrameBuffer options:settings];
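For context, the delegate callback would look roughly like the sketch below (the surrounding method and the dispatch of the detection call are assumptions, not taken from the original code):

```objectivec
// Hedged sketch: where ciFrameImage comes from in the
// AVCaptureVideoDataOutput sample buffer delegate. `settings` is whatever
// CIImage options dictionary you already pass.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // The image buffer is owned by the sample buffer; no extra retain needed
    // as long as we finish with it inside this callback.
    CVImageBufferRef cvFrameBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciFrameImage = [CIImage imageWithCVImageBuffer:cvFrameBuffer
                                                    options:settings];
    // ... run CIDetector face detection, then smile detection per face ...
}
```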
Found a solution to get rid of the crash: use createCGImage:fromRect: to skip the NSBitmapImageRep step; just make sure to call CGImageRelease on the returned CGImage after you're done with it.
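A minimal sketch of that crash-free path, assuming a long-lived CIContext named ciContext (creating a new context per frame is expensive); everything else mirrors the original method:

```objectivec
// Render the cropped CIImage straight to a CGImage with CIContext,
// then draw it into the cv::Mat's backing buffer — no NSBitmapImageRep.
CGRect lowerFaceRect = faceFeature.bounds;
lowerFaceRect.size.height *= 0.5;

CIImage *lowerFaceImage = [ciFrameImage imageByCroppingToRect:lowerFaceRect];

// createCGImage:fromRect: follows the Create rule, so we own the result
// and must release it ourselves.
CGImageRef lowerFaceCG = [ciContext createCGImage:lowerFaceImage
                                         fromRect:lowerFaceRect];

int cols = (int)CGRectGetWidth(lowerFaceRect);
int rows = (int)CGRectGetHeight(lowerFaceRect);

cv::Mat cvMat(rows, cols, CV_8UC4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows, 8,
                                                cvMat.step[0], colorSpace,
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), lowerFaceCG);

CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
CGImageRelease(lowerFaceCG); // the release the note above warns about
```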