MATLAB weighted resampling


I'm writing a particle filter localization algorithm as part of an exercise to locate a plane flying over mountains.

From my understanding, the steps to this are:

  • make a bunch of random guesses
  • filter out unlikely guesses (using Gaussian hypothesis testing and some known information about the problem)
  • shift filtered points by how much the plane moved in that step
  • resample, weighted by shifted points

What I'm having trouble with is the resampling bit - how could I perform a weighted resampling in MATLAB?

Please let me know if there's anything I should clarify!! Thanks!

Answer by Chris (accepted):

Firstly you should look into the SIR (Sequential Importance Sampling Re-sampling) Particle Filter [PF], also known by the name Sequential Monte Carlo methods. I recommend the book by Arnaud Doucet & Neil Gordon, "Sequential Monte Carlo Methods in Practice". It contains practically the state of the art when it comes to Particle Filters and describes the implementations of the various flavours of the PF.

The SIR-PF has the following steps:

  • Prediction: based on your state equations and the previous particle population, propagate the particles to the next discrete time instant, i.e. x(t+1) = f(x(t),w(t)), where x is a vector of n states and for each state you have N realisations (particles), e.g. x ~ [N x n] (a minimal prediction sketch is given after this list).
  • Correction: based on your measurement equations, which should be of the form y(t+1) = g(x(t+1),v(t)), where x(t+1) is your particle population, calculate the error e(t+1) = y(t+1) - y_m(t+1) and weight the population according to a likelihood function, which can be, but does not have to be, a Normal distribution. You will then have a set of weights, e.g. if you have m "sensors" a weighting matrix W = [N x m], or in the simple case an [N x 1] vector of weights (remember to normalise the weights; a weighting sketch is given after this list).
  • Re-sampling (conditional): this step should be gated by a condition to avoid the pitfall of particle degeneracy (which you should look into). The common condition is to compute the "effective particle population size", the reciprocal of the sum of the squared (normalised) weights, i.e. Neff = 1/(w1^2 + w2^2 + ... + wN^2). If Neff < 0.85*N then resample.
  • Re-sampling: calculate the CDF of the (normalised) weights vector, i.e. P = cumsum(W), draw a sample r from a uniform distribution, and select the first particle for which P(i) >= r. Repeat this until you have N draws from the CDF; this samples the high-weight particles more frequently and the low-weight particles less frequently, effectively condensing your particle population. You then create a new set of weights that is uniform, i.e. w_i = 1/N for every particle:

    function [weights,X_update] = Standardised_Resample(P,X)
        % P : [N x 1] vector of particle weights, X : [N x n] particle states
        P = P./sum(P);               % ensure particle weights are normalised
        N = size(P,1);
        Neff = 1/sum(P.^2);          % effective particle population size
        if Neff < 0.85*N             % resample only if degeneracy is detected
            X_update = zeros(N,size(X,2));
            L = cumsum(P);           % CDF of the weights
            for i = 1:N
                % first particle whose cumulative weight exceeds a uniform draw
                X_update(i,:) = X(find(rand <= L,1),:);
            end
            weights = ones(N,1)/N;   % reset to uniform weights
        else
            weights = P;
            X_update = X;
        end
    end
    
  • Estimation: XEst = W(t+1)'*x(t+1), i.e. the weighted combination of the particles produces the estimate of the states at time t+1 (a worked single-iteration example is given below).
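
To make these steps concrete, here is a minimal prediction sketch in MATLAB. The constant-displacement motion model, the two-state layout (horizontal position and altitude) and the process noise covariance Q are assumptions for illustration only, not part of your problem statement:

    function X_new = Predict_Particles(X,u,Q)
        % X : [N x n] particle states, u : [1 x n] known motion increment,
        % Q : [n x n] process noise covariance (Gaussian noise assumed)
        N = size(X,1);
        w = randn(N,size(X,2))*chol(Q);  % one process-noise draw per particle
        X_new = X + repmat(u,N,1) + w;   % x(t+1) = f(x(t),w(t))
    end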
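
And a matching weighting (correction) sketch, assuming a single scalar measurement corrupted by zero-mean Gaussian noise of variance R; the measurement-model function handle g and the value of R are again illustrative assumptions:

    function W = Weight_Particles(X,y_meas,g,R)
        % X : [N x n] particles, y_meas : scalar measurement,
        % g : handle mapping one state row to a predicted measurement,
        % R : measurement noise variance (Gaussian likelihood assumed)
        N = size(X,1);
        y_pred = zeros(N,1);
        for i = 1:N
            y_pred(i) = g(X(i,:));       % predicted measurement per particle
        end
        e = y_meas - y_pred;             % innovation e(t+1)
        W = exp(-0.5*e.^2/R);            % unnormalised Gaussian likelihood
        W = W./sum(W);                   % normalise the weights
    end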

Rinse and Repeat for time t+2 etc.
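
Putting the pieces together, one pass of the loop using the helpers sketched above might look like the following; every numerical value (particle count, motion, noise levels, the altitude-only measurement model) is made up purely for illustration:

    N = 500;
    X = repmat([0 2000],N,1) + randn(N,2)*chol(diag([100^2 50^2]));  % initial particles
    W = ones(N,1)/N;                       % initial uniform weights
    g = @(x) x(2);                         % assumed: altitude is measured directly

    X = Predict_Particles(X,[50 0],diag([25 4]));  % prediction
    W = Weight_Particles(X,1980,g,16);             % correction / weighting
    [W,X] = Standardised_Resample(W,X);            % conditional re-sampling
    xEst = W'*X;                           % weighted estimate of the states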

Note: x(0/0) is a population of N samples drawn from ~N(x(0),Q(0/0)), where x(0) is an estimate of the initial conditions [IC] and Q(0/0) is an estimate of the variance (uncertainty) of your IC guess.
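
A minimal way to generate that initial population, assuming x(0) is a row vector and Q(0/0) a covariance matrix (the numbers below are placeholders):

    x0 = [0 2000];                  % assumed guess of the initial conditions
    Q0 = diag([100^2 50^2]);        % assumed uncertainty of that guess
    N  = 500;
    X  = repmat(x0,N,1) + randn(N,2)*chol(Q0);  % x(0/0) ~ N(x(0),Q(0/0))
    W  = ones(N,1)/N;               % start from uniform weights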