Algorithm for reducing 1D function to polyline with small number of points


Problem

My input is a 1D function y = f(x), and I want to approximate this function with a polyline that has a small number of points on a given interval <x1, x2>.


What I have tried:

I solved this by building a polyline with many points (1000+) and then decimating it with the Ramer–Douglas–Peucker algorithm.


So in (pseudo)python it would be something like:

import numpy

def fn_to_low_poly(f, x1, x2):
    # Sample the function densely on <x1, x2> ...
    xs = numpy.linspace(x1, x2, 1000)
    ys = f(xs)
    # ... then decimate the dense polyline (any RDP implementation).
    return ramer_douglas_peucker_decimate(xs, ys, epsilon=0.001)
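
For reference, here is a minimal sketch of what that decimation step could look like (a plain recursive Ramer–Douglas–Peucker on the sampled points; rdp_decimate is my own placeholder name, not a library function):

import numpy

def rdp_decimate(xs, ys, epsilon):
    # Perpendicular distance of every sample to the chord between the two
    # endpoints (assumes distinct endpoints, which holds for linspace samples).
    x0, y0, x1, y1 = xs[0], ys[0], xs[-1], ys[-1]
    dx, dy = x1 - x0, y1 - y0
    dist = numpy.abs(dy * (xs - x0) - dx * (ys - y0)) / numpy.hypot(dx, dy)
    i = int(numpy.argmax(dist))
    if dist[i] <= epsilon:
        # The chord is already within tolerance: keep only the endpoints.
        return numpy.array([x0, x1]), numpy.array([y0, y1])
    # Otherwise split at the worst sample and recurse on both halves.
    lx, ly = rdp_decimate(xs[:i + 1], ys[:i + 1], epsilon)
    rx, ry = rdp_decimate(xs[i:], ys[i:], epsilon)
    # Merge the halves, dropping the duplicated split point.
    return numpy.concatenate([lx[:-1], rx]), numpy.concatenate([ly[:-1], ry])

Something like this can stand in for ramer_douglas_peucker_decimate above; epsilon is the same distance tolerance as in that call.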

This works, but it's too complicated and slow, and I'm pretty sure there must be a better way.

I'm doing this in Python/NumPy/SciPy now, but I may need to rewrite it in C, Rust, or something else, so I don't care too much about a Python-specific solution; I'd rather have a generic algorithm. That said, I'll be grateful for any help.

Question:

Is there an algorithm for directly reducing a 1D function (float -> float) to a polyline with a low number of points?
