While learning about Reality Composer, I found that it's possible to anchor an image: if I have an image in real life and a copy of it in Reality Composer, I can build a whole scene right on top of that image. I was wondering, how does the actual anchoring happen?
I have previously worked with SIFT keypoint matching, which could be used in this case as well; however, I cannot find how this works in Reality Composer.
The principle of operation is simple:
Reality Composer's scene element called `AnchorEntity`, contained in an `.rcproject` file in a RealityKit app, conforms to the `HasAnchoring` protocol. When the RealityKit app's vision system sees an image through the rear camera, it compares that image with the ones inside the reference image folder. If the images match, the app creates an image-based anchor, `AnchorEntity(.image)` (similar to `ARImageAnchor` in ARKit), that tethers its corresponding 3D model. The invisible anchor appears at the center of the picture.

When you're using image-based anchors in RealityKit apps, you're using RealityKit's analog of `ARImageTrackingConfiguration`, which is less processor-intensive than `ARWorldTrackingConfiguration`.
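Here's a minimal sketch of what that looks like in code. The resource group name `"AR Resources"` and the image name `"poster"` are assumptions; substitute the names from your own asset catalog:

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Create an image-based anchor. RealityKit searches for the
        // reference image and places the anchor at its center.
        // "AR Resources" and "poster" are hypothetical names.
        let imageAnchor = AnchorEntity(.image(group: "AR Resources",
                                              name: "poster"))

        // Attach a simple model; it sits on top of the detected image.
        let box = ModelEntity(mesh: .generateBox(size: 0.05),
                              materials: [SimpleMaterial(color: .systemBlue,
                                                         isMetallic: false)])
        imageAnchor.addChild(box)

        arView.scene.addAnchor(imageAnchor)
    }
}
```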
The difference between `AnchorEntity(.image)` and `ARImageAnchor` is that RealityKit automatically tracks all its anchors, while ARKit uses `renderer(...)` or `session(...)` delegate methods for updating.
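For contrast, here is a sketch of the ARKit side, where you run `ARImageTrackingConfiguration` yourself and respond to updates in a delegate callback. Again, the group name `"AR Resources"` is an assumption:

```swift
import UIKit
import ARKit
import SceneKit

class ImageTrackingViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Configure image tracking with reference images from the
        // asset catalog ("AR Resources" is a hypothetical group name).
        let config = ARImageTrackingConfiguration()
        if let refImages = ARReferenceImage.referenceImages(
                inGroupNamed: "AR Resources", bundle: nil) {
            config.trackingImages = refImages
        }
        sceneView.session.run(config)
    }

    // ARKit calls this delegate method when content attached to an
    // anchor needs updating — the step RealityKit performs for you.
    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode,
                  for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        // Reposition or refresh the attached geometry here.
    }
}
```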