How does image anchoring work in Reality Composer?


While learning about Reality Composer I found that it is possible to anchor an image: if I have an image in real life and a copy of it in Reality Composer, I can build a whole scene right on top of that image. I was wondering how the actual anchoring happens.

I have worked with SIFT keypoint matching before, which could be used in this case as well; however, I cannot find how this works in Reality Composer.

1 answer

Answered by Andy Jazz

The principle of operation is simple:

A Reality Composer scene element called AnchorEntity, contained in the .rcproject file of a RealityKit app, conforms to the HasAnchoring protocol. When the app's computer-vision subsystem sees an image through the rear camera, it compares that image with the reference images stored in the app's AR resource group. If the images match, the app creates an image-based AnchorEntity (similar to ARImageAnchor in ARKit) that tethers the corresponding 3D model. The invisible anchor appears at the center of the picture.

AnchorEntity(.image(group: "ARResourceGroup", name: "imageBasedAnchor"))
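For context, here is a minimal sketch of how that anchor is typically used, assuming a reference image named "imageBasedAnchor" has been added to an AR resource group called "ARResourceGroup" in the asset catalog:

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Anchor that activates when the reference image is detected
        let imageAnchor = AnchorEntity(.image(group: "ARResourceGroup",
                                              name: "imageBasedAnchor"))

        // Attach a simple model; it appears centered on the physical picture
        let box = ModelEntity(mesh: .generateBox(size: 0.05),
                              materials: [SimpleMaterial(color: .cyan,
                                                         isMetallic: false)])
        imageAnchor.addChild(box)
        arView.scene.addAnchor(imageAnchor)
    }
}
```

The model stays hidden until the camera actually sees the reference image, at which point RealityKit places and tracks it automatically.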

When you use image-based anchors in a RealityKit app, you are using RealityKit's analog of ARImageTrackingConfiguration, which is less processor-intensive than ARWorldTrackingConfiguration.
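For comparison, a sketch of the pure-ARKit counterpart (the function name is hypothetical; the resource group name is assumed to match the asset catalog):

```swift
import ARKit

// ARImageTrackingConfiguration tracks only reference images,
// so it is cheaper than full world tracking.
func runImageTracking(on session: ARSession) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "ARResourceGroup",
        bundle: nil) else { return }

    let config = ARImageTrackingConfiguration()
    config.trackingImages = referenceImages
    config.maximumNumberOfTrackedImages = 1
    session.run(config)
}
```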

The difference between AnchorEntity(.image) and ARImageAnchor is that RealityKit automatically tracks all of its anchors, while in ARKit you update content yourself in delegate methods such as renderer(_:didUpdate:for:) or session(_:didUpdate:).
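To illustrate the manual bookkeeping that plain ARKit requires (and that RealityKit does for you), a sketch of the session delegate callback:

```swift
import ARKit

// In plain ARKit you reposition your content yourself
// whenever a tracked image anchor is updated.
class SessionHandler: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let imageAnchor = anchor as? ARImageAnchor else { continue }
            // Move your node/entity to follow imageAnchor.transform here
            print("Image anchor updated:", imageAnchor.transform)
        }
    }
}
```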