I am using an implementation of the Ramer–Douglas–Peucker algorithm to reduce the number of points in a map route. For example, if I have more than 500 points, I want to run the algorithm with a tolerance that brings the point count below 500 while staying as close to it as possible. What I have tried so far, which is incredibly inefficient, is the following:
let i = 0;
let simp = coordSimplify(data.tableData, i);
while (simp.length > 500) {
    i += 0.0001;
    simp = coordSimplify(data.tableData, i);
}
But I realise that this will slow the whole process down hugely. How can I make it more efficient? I was thinking of some sort of binary chop, but I'm not sure how I'd calculate the upper and lower bounds each time.
TIA
Suggest trying something along the following lines, which is effectively a binary search seeking the `epsilon` value (using the terminology at https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm) that falls within the target point-count range of `simpTargetLo` to `simpTargetHi`. (Note that I have not tested this, as I don't have access to `coordSimplify()`, so there might be a few syntax errors, but the logic should be sound.)

Note that this method of narrowing to a solution depends on the proper choice of the initial `epsilonLo` and `epsilonHi`, in addition to assuming that `coordSimplify()` is fairly continuous in nature...
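For illustration, a sketch of that binary search (assuming `coordSimplify(points, epsilon)` returns fewer points as `epsilon` grows; the `findEpsilon` name and the `maxIter` safety cap are my additions, not part of any library):

```javascript
// Binary-search for an epsilon whose simplified point count falls within
// [simpTargetLo, simpTargetHi]. Assumes simplify(points, epsilon) returns
// fewer points as epsilon grows (point count roughly monotone in epsilon).
function findEpsilon(points, simplify, simpTargetLo, simpTargetHi,
                     epsilonLo, epsilonHi, maxIter = 40) {
  // Start from the coarsest result so the fallback is always under the cap.
  let best = simplify(points, epsilonHi);
  for (let iter = 0; iter < maxIter; iter++) {
    const epsilon = (epsilonLo + epsilonHi) / 2;
    const simp = simplify(points, epsilon);
    if (simp.length > simpTargetHi) {
      epsilonLo = epsilon;   // too many points: raise the tolerance
    } else if (simp.length < simpTargetLo) {
      epsilonHi = epsilon;   // too few points: lower the tolerance
      best = simp;           // still a valid "under the cap" candidate
    } else {
      return simp;           // point count landed in the target window
    }
  }
  return best; // window never hit (e.g. poor bounds): return best attempt
}
```

A call might look like `findEpsilon(data.tableData, coordSimplify, 400, 500, 0, 1)`, where the `0` and `1` bounds are placeholders that depend on your coordinate units; each iteration halves the epsilon interval, so ~40 iterations is far cheaper than stepping by 0.0001.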