I am currently working on an app that captures images at different exposurePointOfInterest settings. Basically, the steps are:
- Set focus on point A
- Capture
- Set focus on point B
- Capture
I had to put a redundant for loop between steps 1 & 2 and between steps 3 & 4 to allow some time for the lens to actually focus on the intended points; otherwise both captures at steps 2 & 4 would result in the same picture. This works, but I believe it is not the best way to solve the problem.
I have tried putting this code instead of the for loop:
[self performSelector:@selector(captureStillImage) withObject:@"Grand Central Dispatch" afterDelay:1.0];
But when I ran it, it behaved as if the selector captureStillImage was never executed. Did I do something wrong? Or is there a better solution anyone can suggest?
The function I call to capture multiple images looks like this:
-(void)captureMultipleImg
{
    //CAPTURE FIRST IMAGE WITH EXPOSURE POINT (0,0)
    [self continuousExposeAtPoint:CGPointMake(0.0f, 0.0f)];

    NSLog(@"Looping..");
    //Busy-wait to give the lens time to adjust (the workaround in question)
    for(int i = 0; i < 100000000; i++){
    }
    NSLog(@"Finish Looping");

    [self captureStillImage];

    //CAPTURE SECOND IMAGE WITH EXPOSURE POINT (0.5,0.5)
    [self continuousExposeAtPoint:CGPointMake(0.5f, 0.5f)];

    NSLog(@"Looping..");
    for(int i = 0; i < 100000000; i++){
    }
    NSLog(@"Finish Looping");

    [self captureStillImage];
}
And the code for captureStillImage looks like this:
-(void)captureStillImage
{
    AVCaptureConnection *connection = [stillImage connectionWithMediaType:AVMediaTypeVideo];

    typedef void(^MyBufBlock)(CMSampleBufferRef, NSError*);
    MyBufBlock h = ^(CMSampleBufferRef buf, NSError *err){
        NSData *data = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buf];
        [self setToSaveImage:[UIImage imageWithData:data]];

        NSLog(@"Saving to Camera Roll..");
        //Saving photo to camera roll
        UIImageWriteToSavedPhotosAlbum(toSaveImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
        toSaveImage = nil;
    };

    [stillImage captureStillImageAsynchronouslyFromConnection:connection completionHandler:h];
}
The code for continuousExposeAtPoint: function:
-(void)continuousExposeAtPoint:(CGPoint)point
{
    if([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]){
        if([device lockForConfiguration:NULL]){
            [device setExposurePointOfInterest:point];
            [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
            [device unlockForConfiguration];
            NSLog(@"Exposure point of interest has been set to (%f,%f)", point.x, point.y);
        }
    }
}
Thanks in advance!
I'm going out on a limb here, since I would like to suggest a different approach which completely avoids "busy waiting" or "run loop waiting".
If I understood the camera correctly, it may take a certain duration until the exposure point has actually been set by the camera. There is a property, adjustingFocus, which reflects this state of the camera. This property is KVO compliant, so we can use KVO to observe its value.

So, the idea is to set the exposure point and then observe the property adjustingFocus. When its value changes to NO, the camera has finished setting the exposure point. Now we can leverage KVO to call a completion handler immediately after the setting is complete. Your method to set up the exposure point becomes asynchronous with a completion handler:
Assuming you have properly implemented KVO in the method above, you can use it as follows:
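For example, something along these lines (untested; captureStillImageWithCompletion: is a hypothetical variant of your captureStillImage that invokes a block when its asynchronous capture has finished):

- (void)captureMultipleImg
{
    [self continuousExposeAtPoint:CGPointMake(0.0f, 0.0f) completion:^{
        [self captureStillImageWithCompletion:^{
            [self continuousExposeAtPoint:CGPointMake(0.5f, 0.5f) completion:^{
                [self captureStillImageWithCompletion:^{
                    NSLog(@"Captured both images");
                }];
            }];
        }];
    }];
}

Such a captureStillImageWithCompletion: would simply call its block at the end of the completionHandler you already pass to captureStillImageAsynchronouslyFromConnection:.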
Edit:
Now, the method captureMultipleImg has become asynchronous as well.

Note: in order to let the call site know when its underlying asynchronous task has finished, we may provide a completion handler:
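Again only a sketch; the method name captureMultipleImgWithCompletion: is chosen for illustration, and captureStillImageWithCompletion: is the same hypothetical helper as above:

- (void)captureMultipleImgWithCompletion:(void (^)(void))completion
{
    [self continuousExposeAtPoint:CGPointMake(0.0f, 0.0f) completion:^{
        [self captureStillImageWithCompletion:^{
            [self continuousExposeAtPoint:CGPointMake(0.5f, 0.5f) completion:^{
                [self captureStillImageWithCompletion:^{
                    if (completion) {
                        completion();
                    }
                }];
            }];
        }];
    }];
}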
A button action may be implemented as follows:
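For example (the outlet and action names are illustrative):

- (IBAction)captureButtonTapped:(id)sender
{
    self.captureButton.enabled = NO;   // prevent re-entry while the captures run
    [self captureMultipleImgWithCompletion:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            self.captureButton.enabled = YES;
            NSLog(@"Finished capturing both images");
        });
    }];
}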
Edit:
For a jump start, you may implement the KVO and your method as shown below. Caution: not tested!
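The following sketch is untested and makes a few assumptions: it uses your device ivar, it stores the block in an assumed property exposureCompletion, and it observes adjustingExposure (also KVO observable) rather than adjustingFocus, since it is the exposure point that is being changed here:

static void *ExposureObservationContext = &ExposureObservationContext;

// Assumed property:
// @property (nonatomic, copy) void (^exposureCompletion)(void);

- (void)continuousExposeAtPoint:(CGPoint)point
                     completion:(void (^)(void))completion
{
    if ([device isExposurePointOfInterestSupported] &&
        [device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
        if ([device lockForConfiguration:NULL]) {
            // Remember the handler and start observing before changing the setting,
            // so the adjustingExposure transition cannot be missed.
            self.exposureCompletion = completion;
            [device addObserver:self
                     forKeyPath:@"adjustingExposure"
                        options:NSKeyValueObservingOptionNew
                        context:ExposureObservationContext];
            [device setExposurePointOfInterest:point];
            [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
            [device unlockForConfiguration];
            return;
        }
    }
    // Unsupported device or failed lock: report completion right away.
    if (completion) {
        completion();
    }
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if (context == ExposureObservationContext) {
        if (![change[NSKeyValueChangeNewKey] boolValue]) {
            // The device has settled on the new exposure point.
            [device removeObserver:self
                        forKeyPath:@"adjustingExposure"
                           context:ExposureObservationContext];
            void (^handler)(void) = self.exposureCompletion;
            self.exposureCompletion = nil;
            if (handler) {
                dispatch_async(dispatch_get_main_queue(), handler);
            }
        }
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

Note that if the device is already settled and adjustingExposure never changes, the handler will not fire; in practice you may need a timeout or an initial check of device.adjustingExposure.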
The caveat here is that KVO is difficult to set up. But once you manage to wrap it into a method with a completion handler, it looks much nicer ;)