I'm developing a 3D scanner that runs on an iPad Pro (6th generation, 12.9-inch).
I tried ObjectCaptureSession and ran Photogrammetry on the HEIC pictures it produced, which was successful.
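For reference, the working path looks roughly like this (a sketch only; the folder and output paths are placeholders):

```swift
import Foundation
import RealityKit

// Working path: folder of HEIC shots from ObjectCaptureSession in, USDZ out.
let inputFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder)
Task {
    for try await output in session.outputs {
        switch output {
        case .processingComplete:         print("done")
        case .requestError(_, let error): print("failed: \(error)")
        default:                          break
        }
    }
}
try session.process(requests: [.modelFile(url: outputURL)])
```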
But photogrammetry with manually captured pictures fails, and I don't know how to fix it.
I tried taking pictures with an AVCaptureSession and putting the data into PhotogrammetrySample structures, but it failed.
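Each sample is built roughly like this (simplified; makeSample is just an illustrative helper name, and I'm assuming an uncompressed 32BGRA photo with depth delivery enabled):

```swift
import AVFoundation
import RealityKit

// Sketch: build a PhotogrammetrySample from an AVCapturePhoto.
func makeSample(id: Int, photo: AVCapturePhoto) -> PhotogrammetrySample? {
    // pixelBuffer is only non-nil for uncompressed capture formats.
    guard let pixelBuffer = photo.pixelBuffer else { return nil }
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    // Convert disparity to depth if needed, then attach the depth map.
    if let depth = photo.depthData?
        .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) {
        sample.depthDataMap = depth.depthDataMap
    }
    sample.metadata = photo.metadata
    return sample
}
```

The samples then go to PhotogrammetrySession(input:configuration:) as a sequence.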
The settings are (a sketch of this configuration follows the list):
- camera: back LiDAR camera
- image format: kCVPixelFormatType_32BGRA (failed with a crash) or HEVC (just failed)
- depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
- photo settings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedsDepthDataInPhoto = true
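In code, the configuration looks roughly like this (session wiring trimmed and error handling omitted for brevity):

```swift
import AVFoundation

// Back LiDAR camera as the capture device.
let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                     for: .video, position: .back)!
let session = AVCaptureSession()
session.addInput(try! AVCaptureDeviceInput(device: device))

let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)
photoOutput.isDepthDataDeliveryEnabled = true  // must come after addOutput

// Uncompressed 32BGRA variant (this one crashed):
let settings = AVCapturePhotoSettings(format:
    [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
// HEVC variant (this one just failed):
// let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])

settings.isDepthDataDeliveryEnabled = true
settings.isDepthDataFiltered = false
settings.embedsDepthDataInPhoto = true
```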
So this time I tried with HEIC images, but that also fails.
It succeeds only when the depth data is replaced with nil (my AVCapturePhotoFileDataRepresentationCustomizer's replacementDepthData(for:) returns nil), but then no depth or gravity data seems to be used for the photogrammetry.
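The workaround looks roughly like this (simplified; StripDepth is just my name for the customizer):

```swift
import AVFoundation

// Photogrammetry only succeeds when the depth data is stripped from the
// HEIC file like this, which defeats the purpose of capturing depth.
final class StripDepth: NSObject, AVCapturePhotoFileDataRepresentationCustomizer {
    func replacementDepthData(for photo: AVCapturePhoto) -> AVDepthData? {
        return nil  // returning nil excludes depth data from the file
    }
}

// In photoOutput(_:didFinishProcessingPhoto:error:):
// let data = photo.fileDataRepresentation(with: StripDepth())
```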
I also tested some of the sample code provided by Apple:
- https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
- https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
- https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
What should I do to make the photogrammetry succeed with manually captured pictures?