Efficient rational resampling with lazy semantics


To change the sampling rate of a signal, one needs to upsample, filter, then downsample. Doing this naively means inserting zeros into the input signal, correlating it with the filter's impulse response, then discarding all but every nth sample of the convolution.

The problem with the naive approach is that there is a lot of useless computation. When convolving with the filter, most of the filter taps are multiplied by zero, and computing the value of samples that will be discarded in the downsampling phase is useless. That's why efficient rational resampling uses polyphase filter banks, where only the computations that are needed are performed.
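For concreteness, here is a minimal sketch of the explicit decomposition I would like to avoid constructing (the helper name polyphase is mine, not a library function): for interpolation by a factor up, branch k of the bank holds taps k, k+up, k+2*up, and so on.

-- Sketch: split a filter into `up` polyphase branches.
-- Branch k holds taps k, k+up, k+2*up, ...; each output sample
-- of the resampler then touches exactly one branch.
polyphase :: Int -> [a] -> [[a]]
polyphase up taps = [ everyNth (drop k taps) | k <- [0 .. up - 1] ]
  where everyNth []     = []
        everyNth (y:ys) = y : everyNth (drop (up - 1) ys)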

I wonder if it would be possible to use lazy computation to avoid the useless multiplications, while also avoiding explicitly constructing the polyphase filter banks. My ideal solution would be something that resembles the naive approach (upsample, then correlate, then downsample) but does the same computations as the explicit polyphase filter approach.

The downsampling is easy, since values that aren't needed won't be calculated. But I can't figure out how to avoid the multiplications-by-zero in the correlation part. The best I've come up with is to use the Maybe type and upsample with Nothings (instead of zeros):

import Data.List (tails)
import Data.Maybe (catMaybes)

-- Insert n-1 Nothings between consecutive input samples.
upsample :: Int -> [a] -> [Maybe a]
upsample n xs = upsample' xs 0
    where upsample' [] _        = []
          upsample' (y:ys) 0    = Just y : upsample' ys (n - 1)
          upsample' ys counter  = Nothing : upsample' ys (counter - 1)

-- Dot product that skips the Nothing entries.
correlate :: Num a => [a] -> [Maybe a] -> a
correlate xs ys = sum $ catMaybes $ zipWith (fmap . (*)) xs ys

firFilter :: Num a => [a] -> [Maybe a] -> [a]
firFilter taps signal = map (correlate taps) (tails signal)

-- Keep every nth sample.
downsample :: Int -> [a] -> [a]
downsample _ []     = []
downsample n (x:xs) = x : downsample n (drop (n - 1) xs)

upfirdn :: Num a => Int -> Int -> [a] -> [a] -> [a]
upfirdn up down taps = downsample down . firFilter taps . upsample up
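
For example, with the definitions above (toy, made-up taps, not a designed anti-aliasing filter):

-- Resample by a factor of 3/2 using a 5-tap toy filter.
example :: [Double]
example = upfirdn 3 2 [0.2, 0.5, 1.0, 0.5, 0.2] [1, 2, 3, 4, 5]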

The upfirdn function is indeed just the straightforward approach, and laziness in the downsampling does avoid computation, but I think the processor still needs to check whether each value is Nothing in the correlation step.

Is there a way to use laziness to get the same computational savings as the polyphase filter approach? If not, is there a fundamental reason it can't be done?


There are 2 answers

Rufflewind

I don't think laziness is helpful for this kind of problem for two reasons:

  • In Haskell, laziness is achieved by building unevaluated thunks in memory. This means laziness is not completely free: you still incur the cost of creating the thunk. This cost can be negligible if the evaluation of the thunk is expensive.

    However, in your case, for every thunk you are saving yourself a multiplication and an addition, which is only a few CPU instructions. The cost of creating a thunk is probably of the same order of magnitude.

  • Laziness is helpful when you don't know a priori which elements will be used -- often because the choice depends on the input/environment in some complicated or unknown way, so you would rather defer the decision until later.

    In your case, you know exactly which elements will be used: the elements must have indices divisible by n. Therefore, it's going to be more efficient to just iterate through [0, n, 2 * n, 3 * n, ...], as in the sketch after this list.
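
For instance, a direct-indexing upfirdn might look like the following sketch (upfirdnDirect is my own name; it uses Data.Array for O(1) indexing, and its boundary handling is simplified relative to the list pipeline in the question):

import Data.Array

upfirdnDirect :: Num a => Int -> Int -> [a] -> [a] -> [a]
upfirdnDirect up down taps xs = map out [0 .. nOut - 1]
  where
    nTaps = length taps
    nIn   = length xs
    h     = listArray (0, nTaps - 1) taps
    v     = listArray (0, nIn - 1) xs
    nOut  = (nIn * up) `div` down
    -- Output m sits at position m*down of the virtual upsampled stream;
    -- only taps aligned with real input samples contribute.
    out m = sum [ h ! j * v ! k
                | j <- [0 .. nTaps - 1]
                , let pos = m * down + j
                , pos `mod` up == 0
                , let k = pos `div` up
                , k < nIn ]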


A naive way to add laziness would be to define a lazy multiply-add operation:

(+*) :: (Eq a, Num a) => a -> (a, a) -> a
z +* (_, 0) = z
z +* (x, y) = z + x * y

The operation is biased so that if y is zero the calculation is skipped.

Now, when generating the mask via upsample, there is no need to use Maybe: just yield zero instead of Nothing. Then, to calculate the sum, simply use:

import Data.List (foldl')

correlate xs ys = foldl' (+*) 0 (zip xs ys)
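
A zero-yielding upsample to pair with this could be written as follows (a sketch mirroring the Maybe version from the question; upsampleZero is my name for it):

-- Like upsample from the question, but yielding literal zeros
-- instead of Nothing.
upsampleZero :: Num a => Int -> [a] -> [a]
upsampleZero n xs = go xs 0
    where go [] _     = []
          go (y:ys) 0 = y : go ys (n - 1)
          go ys c     = 0 : go ys (c - 1)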
hotpaw2

One does not need to upsample and downsample to resample.

If efficiency and performance are not important, you can resample by simple interpolation at each point of the new array of equally spaced samples, and just recompute the needed phases or values of a (low-pass/anti-alias) interpolation polynomial at every new interpolation point, instead of precomputing and caching them in a polyphase table.

This also allows "lazy resampling" by only computing the new samples as needed.

There is a "quick-and-dirty" example of how to do this, using a computed von-Hann-Windowed-Sinc interpolation kernel, in Basic, on my DSP blog here: http://www.nicholson.com/rhn/dsp.html#3
Since this is just an array-function computation at each new sample point, it should not be too difficult to convert the procedural Basic into functional Haskell.
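
A rough sketch of what such a conversion might look like (my own transliteration of the idea, not the blog's code; it assumes upsampling, where the kernel cutoff need not be rescaled, and uses a fixed kernel half-width of 8 input samples):

import Data.Array

-- On-demand windowed-sinc interpolation: no upsampled stream, no
-- precomputed polyphase table; each output sample evaluates the
-- kernel at its own fractional input position.
resample :: Double -> [Double] -> [Double]
resample ratio xs = map sampleAt [0 .. nOut - 1]
  where
    n    = length xs
    v    = listArray (0, n - 1) xs
    nOut = floor (fromIntegral n * ratio)
    half = 8  -- kernel half-width, in input samples (an arbitrary choice)
    sampleAt m =
        let t  = fromIntegral m / ratio            -- fractional input position
            k0 = max 0 (floor t - half)
            k1 = min (n - 1) (floor t + half)
        in sum [ v ! k * kernel (t - fromIntegral k) | k <- [k0 .. k1] ]
    kernel x
        | abs x >= fromIntegral half = 0
        | otherwise = sinc x * hann (x / fromIntegral half)
    sinc 0 = 1
    sinc x = sin (pi * x) / (pi * x)
    hann x = 0.5 * (1 + cos (pi * x))  -- von Hann window over [-1, 1]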