My background is in Mechanical Engineering, and I have started studying Augmented Reality for use in assembly processes. My tools are an Intel RealSense L515 RGB-D LiDAR camera and a HoloLens 2. My current pipeline is: I generate datasets with the Linemod method (the model was given to me by a friend), capture images with the L515, use Linemod to get an initial pose and a point cloud, refine the alignment with ICP in MeshLab, obtain the rotation and translation between two captures in CloudCompare, then feed those matrices back into the code and repeat until I get the final pose.
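To make my understanding of the refinement step concrete: as far as I can tell, each ICP iteration in MeshLab/CloudCompare boils down to solving for the best-fit rotation and translation between matched points (the Kabsch/SVD solution). This is not my actual code, just a minimal NumPy sketch of that core step, with made-up synthetic data:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst,
    via the Kabsch/SVD solution (the core of each ICP iteration,
    assuming point correspondences are already known)."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: apply a known pose, then recover it
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

In a full ICP loop this solve is alternated with a nearest-neighbour correspondence search; I assume that is what the tools do internally.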

Now I want to improve the pose results, but I don't understand what should be changed: which filters, which segmentation methods, which other relevant algorithms. I have read many papers that discuss a particular algorithm, but I have no idea where in the pipeline the change should go. I have also looked on YouTube, but most of the videos are either very general or about end-to-end learning. Any advice? I need to understand how the main pipeline is constructed, and when I need to change something, where I should start. My test object is a plastic cylinder with some edges.
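For example, one filter I keep seeing mentioned is statistical outlier removal (the SOR filter in PCL/CloudCompare), applied before ICP to remove depth-sensor speckle. Here is a brute-force NumPy sketch of the idea as I understand it (the parameter values are my own guesses, not from any paper):

```python
import numpy as np

def statistical_outlier_removal(pts, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest
    neighbours is more than std_ratio standard deviations above
    the cloud-wide average (brute-force, fine for small clouds)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]   # skip the zero self-distance
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return pts[mean_d <= thresh]

# A dense cluster plus two far-away speckle points
rng = np.random.default_rng(1)
cloud = rng.normal(scale=0.01, size=(200, 3))
noisy = np.vstack([cloud,
                   np.array([[1.0, 1.0, 1.0],
                             [-1.0, 2.0, 0.5]])])
clean = statistical_outlier_removal(noisy)
```

My question is essentially whether a filter like this belongs before the Linemod pose step, before ICP, or both, and how I would know which stage is the one limiting my accuracy.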
