How to use the CIDepthOfField filter in Core Image?


I am trying to implement depth of field with CIFilter. In the Core Image Filter Reference, Apple's documentation says this about inputPoint0 and inputPoint1:

The focused region of the image stretches in a line between inputPoint0 and inputPoint1 of the image.
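
From that description, my understanding is that the two points are given in Core Image pixel coordinates (origin at the bottom left) and define the line that stays sharp, while inputRadius controls how strongly regions away from that line are blurred. A minimal sketch of what I believe the intended usage looks like (the point values are placeholders, not from my project):

CIFilter *dof = [CIFilter filterWithName:@"CIDepthOfField"];
//Assuming inputImage is the source CIImage
[dof setValue:inputImage forKey:kCIInputImageKey];
//Placeholder points: a horizontal focus line at y = 300 in image pixels
[dof setValue:[CIVector vectorWithX:0 Y:300] forKey:@"inputPoint0"];
[dof setValue:[CIVector vectorWithX:640 Y:300] forKey:@"inputPoint1"];
[dof setValue:@6.0 forKey:kCIInputRadiusKey];
CIImage *focused = dof.outputImage;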

Based on that, I converted the point from the UIKit coordinate system to the Core Image coordinate system and set the two points. But the output is always either entirely blurred or entirely in focus; there is no depth falloff.
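
The conversion itself should just be a flip of the y-axis, since UIKit's origin is at the top left and Core Image's is at the bottom left. This is the mapping I am trying to achieve (a sketch; the helper name is mine):

//Flip the y-axis from UIKit's top-left origin to Core Image's
//bottom-left origin, using the image extent in pixels
static CGPoint CIPointFromUIKitPoint(CGPoint uikitPoint, CGRect imageExtent) {
    return CGPointMake(uikitPoint.x, CGRectGetMaxY(imageExtent) - uikitPoint.y);
}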

Can somebody give me a code snippet that implements depth of field with CIFilter, and explain how it works, especially the part related to inputPoint0 and inputPoint1?

Below is the code I am working with.

Depth of Field Method

@implementation CIImage (myCustomExtension)
+ (CIImage *)CIFilterDepthOfField:(CIImage *)inputImage inputPoint0:(CIVector *)inputPoint0 inputPoint1:(CIVector *)inputPoint1 inputSaturation:(CGFloat)inputSaturation inputUnsharpMaskRadius:(CGFloat)inputUnsharpMaskRadius inputUnsharpMaskIntensity:(CGFloat)inputUnsharpMaskIntensity inputRadius:(CGFloat)inputRadius {
    //The two CIVectors are expected in Core Image (bottom-left origin) pixel coordinates
    CIFilter *depthOfField = [CIFilter filterWithName:@"CIDepthOfField"
                                  withInputParameters:@{kCIInputImageKey: inputImage,
                                                        @"inputPoint0": inputPoint0,
                                                        @"inputPoint1": inputPoint1,
                                                        kCIInputSaturationKey: @(inputSaturation),
                                                        @"inputUnsharpMaskRadius": @(inputUnsharpMaskRadius),
                                                        @"inputUnsharpMaskIntensity": @(inputUnsharpMaskIntensity),
                                                        kCIInputRadiusKey: @(inputRadius)}];
    return [depthOfField outputImage];
}
@end

My implementation in a custom class:

//Depth of field: the focus line runs vertically from the tapped point to 300 pixels above it
CIVector *point0 = [CIVector vectorWithX:self.point.x Y:self.point.y];
CIVector *point1 = [CIVector vectorWithX:self.point.x Y:self.point.y + 300];
outputCIImage = [CIImage CIFilterDepthOfField:outputCIImage inputPoint0:point0 inputPoint1:point1 inputSaturation:1.5 inputUnsharpMaskRadius:0.5 inputUnsharpMaskIntensity:2.5 inputRadius:6];
UIImage *outputUIImage = [UIImage renderCIImageToUIImage:outputCIImage withCIContext:self.ciContext];

self.imageView.image = outputUIImage;

The point comes from a tap gesture recognizer:

- (IBAction)touchPointSet:(UITapGestureRecognizer *)sender {
    //Get point in imageview
    CGPoint point = [sender locationInView:self.imageView];
    //Calculate image scale for image view image
    CGFloat imageScale = fminf(self.imageView.frame.size.width/self.imageView.image.size.width, self.imageView.frame.size.height/self.imageView.image.size.height);
    //Calculate point in image bound
    CGPoint pointInImage = CGPointMake(point.x - ((self.imageView.frame.size.width - self.imageView.image.size.width * imageScale) / 2), point.y - ((self.imageView.frame.size.height - self.imageView.image.size.height * imageScale) / 2));

    //Make transform
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -self.imageView.image.size.height * imageScale);

    //Transform point to Core image coordinate
    CGPoint transformedPoint = CGPointApplyAffineTransform(pointInImage, transform);

    //Scale the point from view points up to pixels
    CGFloat screenScale = [UIScreen mainScreen].scale;
    self.point = CGPointMake(transformedPoint.x * screenScale, transformedPoint.y * screenScale);
}

This is the rendering method from CIImage to UIImage:

@implementation UIImage (myCustomExtension)
+ (UIImage *)renderCIImageToUIImage:(CIImage *)ciImage withCIContext:(CIContext *)ciContext {
    //Render to a CGImage, wrap it in a UIImage, and release the
    //CGImage we own from createCGImage:fromRect:
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    return uiImage;
}
@end
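
For completeness, self.ciContext is an ordinary Core Image context; a minimal sketch of how it could be created (my actual setup code is not shown above):

//Any standard CIContext works here; this is the simplest possible setup
self.ciContext = [CIContext contextWithOptions:nil];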