How can I define reconstruction validation in masked point cloud neural networks?


I have been using point cloud networks to mask portions of point clouds and then reconstruct the missing points. I have noticed, though, that these networks do not report a pure reconstruction metric, i.e., a direct comparison of the ground truth against the reconstructed point cloud on a validation dataset. I simply want to measure how well the model reconstructs a previously unseen dataset after pretraining. The models I've run across tend to use "testing" or "validation" datasets only for downstream tasks such as labeling or segmentation, but I have yet to see accuracy or validation loss reported purely for reconstruction.
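For context, the metric I have in mind is something like a symmetric Chamfer distance between the reconstruction and the ground truth. Here is a minimal PyTorch sketch (the function name and tensor shapes are my own choices, not taken from any particular repository):

```python
import torch

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two point cloud batches.

    pred: (B, N, 3) reconstructed points
    gt:   (B, M, 3) ground-truth points
    """
    # Pairwise squared Euclidean distances: (B, N, M)
    dists = torch.cdist(pred, gt, p=2.0) ** 2
    # For each predicted point, distance to its nearest ground-truth
    # point, and vice versa; average both directions.
    pred_to_gt = dists.min(dim=2).values.mean(dim=1)
    gt_to_pred = dists.min(dim=1).values.mean(dim=1)
    return (pred_to_gt + gt_to_pred).mean()
```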

Take this point cloud network for example (whose paper is here). The authors use the pretrained checkpoints for downstream tasks, but do not provide a way to use them to validate pure reconstruction on a previously unseen dataset. I've noticed the same trend with other point cloud networks.

I am interested in how well I can train the above point cloud model to reconstruct masked areas of point clouds: pretrain first, then validate. I.e., I want a loss curve obtained by applying a pretrained model (from the repository above) to a validation dataset to see how well it performs; I am less interested in downstream tasks. However, neither the paper nor the code provides a way to do this. Conceptually, what I'm after looks something like the sketch below. I am somewhat new to point cloud transformers, so my attempts at editing the code in the above repository to achieve this have not been fruitful. How would I go about doing this?
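This is roughly the evaluation loop I imagine: load the pretrained checkpoint, disable gradients, and average the reconstruction loss over the unseen set. The forward signature `model(points) -> (reconstructed, masked_ground_truth)` is a guess on my part, not the repository's actual API:

```python
import torch

@torch.no_grad()
def validate_reconstruction(model, val_loader, device="cuda"):
    """Average reconstruction loss of a pretrained masked autoencoder
    over a held-out dataset. Reuses chamfer_distance from the sketch
    above; the forward signature below is hypothetical."""
    model = model.to(device).eval()
    total, count = 0.0, 0
    for points in val_loader:            # points: (B, N, 3)
        points = points.to(device)
        rebuilt, gt = model(points)      # hypothetical: (pred, target)
        loss = chamfer_distance(rebuilt, gt)
        total += loss.item() * points.size(0)
        count += points.size(0)
    return total / count
```

Tracking this value after each pretraining epoch would give the validation loss curve I am after.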
