I am having trouble classifying ground points in data our collaborators collected over some mountainous areas. It seems that where the canopy is thick, or under certain other conditions, either no ground points exist or the ones that do are left unclassified.
Is there a way to use outside ground points or a terrain model to help the ground classification algorithm find the "true" ground in our data? We have access to pre-classified point clouds covering the area at a coarser point density. However, we're not sure this data will line up exactly with our own, whether because of differences in resolution or georeferencing, or because it was acquired at a different time. Still, is there a way to use this additional information to 1) remove points that are clearly below ground and/or 2) fill in or identify 'holes' where no ground points are present in our data?
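To make the question concrete, here is roughly what I was imagining with lidR (just a minimal sketch; the file names, the 2 m DTM resolution, the 1 m below-ground tolerance, and the 5 m hole-search resolution are all placeholders):

```r
library(lidR)

# Coarse, pre-classified cloud: keep only its ground returns and build a low-resolution DTM
coarse     <- readLAS("coarse_preclassified.laz", filter = "-keep_class 2")
dtm_coarse <- rasterize_terrain(coarse, res = 2, algorithm = tin())

las <- readLAS("our_tile.laz")

# Express every point as a height above the coarse DTM (the original Z is kept in Zref)
nlas <- normalize_height(las, dtm_coarse, na.rm = TRUE)

# 1) Points well below the coarse terrain are almost certainly noise
nlas$Classification[nlas$Z < -1] <- LASNOISE

# Back to absolute elevations
las_clean <- unnormalize_height(nlas)

# 2) Map where our own cloud has no ground-classified points at all;
#    cells without any ground points show up as NA (or 0) in the density raster
ground_density <- rasterize_density(filter_ground(las_clean), res = 5)
holes <- is.na(ground_density) | ground_density == 0
```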
The first picture below shows an example of the results. You can also see that in some cases, what appears to be noise below the ground is misclassified as ground (I have already run the noise classification algorithm ivf). These holes and misclassified points really throw off the normalization (second picture below).
Ground classification before normalization:
After normalization: Note that these cross sections don't cover exactly the same area, but they should be very close to each other.
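As far as I understand, normalize_height interpolates only from points classified as ground (class 2/9), so even a few spurious low "ground" points drag the surface down locally. What I would try before normalizing, continuing from las_clean in the first sketch (csf() is just one possible ground algorithm, not necessarily the right one here), is:

```r
# Drop everything flagged as noise so it cannot feed the ground TIN,
# re-classify ground on the cleaned cloud, then normalize against a TIN of those points
las_clean <- filter_poi(las_clean, Classification != LASNOISE)
las_clean <- classify_ground(las_clean, algorithm = csf())
nlas      <- normalize_height(las_clean, tin())
```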
One thought I had is to tag the subset of points in our data that lies within a threshold distance of the external ground surface and run the ground classification algorithm on just that subset (rough sketch below). Is this a feasible approach, or is there a more efficient/recommended way to go?
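Concretely, I was picturing something like the following (again only a sketch; the -1 m / +3 m slab, the file names, and the use of csf() are all placeholders I made up):

```r
library(lidR)

# Coarse DTM from the pre-classified cloud, as in the first sketch
coarse     <- readLAS("coarse_preclassified.laz", filter = "-keep_class 2")
dtm_coarse <- rasterize_terrain(coarse, res = 2, algorithm = tin())

# Remember each point's original position so labels can be copied back later
las <- readLAS("our_tile.laz")
las <- add_lasattribute(las, seq_len(npoints(las)), "pid", "original point index")

# Keep only the slab of points near the coarse ground surface
nlas <- normalize_height(las, dtm_coarse, na.rm = TRUE)
slab <- filter_poi(nlas, Z > -1, Z < 3)        # roughly -1 m to +3 m around the coarse ground
slab <- unnormalize_height(slab)               # classify on true elevations, not heights
slab <- classify_ground(slab, algorithm = csf())

# Transfer the new ground labels back to the full cloud via the saved index
ground_ids <- slab$pid[slab$Classification == LASGROUND]
las$Classification[ground_ids] <- LASGROUND
```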