I am using the following code to extract a depth map (following Apple's own example):
    - (nullable AVDepthData *)depthDataFromImageData:(nonnull NSData *)imageData orientation:(CGImagePropertyOrientation)orientation {
        AVDepthData *depthData = nil;
        CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
        if (imageSource) {
            // The Copy function returns +1, so transfer ownership to ARC.
            NSDictionary *auxDataDictionary = (__bridge_transfer NSDictionary *)CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);
            if (auxDataDictionary) {
                depthData = [[AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:NULL] depthDataByApplyingExifOrientation:orientation];
            }
            CFRelease(imageSource);
        }
        return depthData;
    }
And I call this from:
    [[PHAssetResourceManager defaultManager] requestDataForAssetResource:[PHAssetResource assetResourcesForAsset:asset].firstObject options:nil dataReceivedHandler:^(NSData * _Nonnull data) {
        AVDepthData *depthData = [self depthDataFromImageData:data orientation:[self CGImagePropertyOrientationForUIImageOrientation:pickedUiImageOrientation]];
        CIImage *image = [CIImage imageWithDepthData:depthData];
        UIImage *uiImage = [UIImage imageWithCIImage:image];
        UIGraphicsBeginImageContext(uiImage.size);
        [uiImage drawInRect:CGRectMake(0, 0, uiImage.size.width, uiImage.size.height)];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        NSData *pngData = UIImagePNGRepresentation(newImage);
        UIImage *pngImage = [UIImage imageWithData:pngData]; // rewrap
        UIImageWriteToSavedPhotosAlbum(pngImage, nil, nil, nil);
    } completionHandler:^(NSError * _Nullable error) {
    }];
Here is the result: a low-quality (and rotated, but let's put orientation aside for now) image.
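To quantify "low quality", the buffer that comes back can be inspected with the standard CoreVideo accessors. A small sketch, assuming the depthData obtained in the handler above (the NSLog is just for illustration):

    // Inspect the aux-data buffer to compare its size with the
    // depth map Photoshop shows for the same file.
    CVPixelBufferRef map = depthData.depthDataMap;
    NSLog(@"map: %zu x %zu, format: %u",
          CVPixelBufferGetWidth(map),
          CVPixelBufferGetHeight(map),
          (unsigned)depthData.depthDataType);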
Then I transferred the original HEIC file, opened it in Photoshop, went to Channels, and selected the depth map. Here is the result: a higher-resolution/quality, correctly oriented depth map. Why does the code (actually Apple's own code, from https://developer.apple.com/documentation/avfoundation/avdepthdata/2881221-depthdatafromdictionaryrepresent?language=objc) produce a lower-quality result?
I've found the issue. Actually, it was hiding in plain sight: what the

    +[AVDepthData depthDataFromDictionaryRepresentation:error:]

method returns is disparity data, not depth. I converted it to depth before creating the CIImage, and after the conversion the image is exactly the same as in Photoshop. (I haven't tried it, but 16-bit depth, kCVPixelFormatType_DepthFloat16, should also work well.)
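A minimal sketch of that conversion step, assuming the depthData instance returned by depthDataFromImageData: above:

    // Disparity -> depth: request a 32-bit float depth buffer before
    // building the CIImage. (Use kCVPixelFormatType_DepthFloat16 for 16-bit.)
    if (depthData.depthDataType != kCVPixelFormatType_DepthFloat32) {
        depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat32];
    }
    CIImage *image = [CIImage imageWithDepthData:depthData];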
I should have caught this earlier, as I was calling

    CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity);

(note the "Disparity" at the end) while Photoshop clearly said "depth map", converting disparity to depth on the fly (or just reading it as depth; I honestly don't know the physical encoding, and maybe iOS converted depth to disparity when I copied the aux data in the first place).
Side note: I've also solved the orientation issue, by creating the image source directly via the -[PHAsset requestContentEditingInputWithOptions:completionHandler:] method and passing contentEditingInput.fullSizeImageURL into CGImageSourceCreateWithURL. That took care of the orientation.
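A minimal sketch of that flow, assuming the same asset variable as above (the aux-data extraction inside the handler is the same as in depthDataFromImageData:):

    PHContentEditingInputRequestOptions *options = [PHContentEditingInputRequestOptions new];
    [asset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput * _Nullable contentEditingInput, NSDictionary * _Nonnull info) {
        // Build the image source straight from the full-size image URL;
        // ImageIO then reads the orientation from the file itself.
        CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)contentEditingInput.fullSizeImageURL, NULL);
        if (imageSource) {
            // ...extract the auxiliary disparity/depth data as above...
            CFRelease(imageSource);
        }
    }];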