Matlab: Incorrect estimates returned by fminsearch


I am trying to maximize the log-likelihood function of the AR(1) process

ERS(t) = 1 + 0.3*ERS(t-1) + epsilon(t)

in order to estimate its parameters. Maximizing the log-likelihood analytically involves finding derivatives, so I thought of using fminsearch instead, which is derivative-free. Since fminsearch minimizes, I have changed the sign of the log-likelihood in the implementation. The actual log-likelihood is presented below.

The log-likelihood, up to an additive constant, is

L(theta) = -0.5*log(sigma2_epsilon/(1-rho1^2)) - 0.5*(1-rho1^2)/sigma2_epsilon*(Y(1) - rho0/(1-rho1))^2 - 0.5*(T-1)*log(sigma2_epsilon) - 0.5/sigma2_epsilon * sum over t=2..T of (Y(t) - rho0 - rho1*Y(t-1))^2

where the set of parameters is

theta = {rho0 = c = 1, rho1 = 0.3, sigma2_epsilon = variance of the noise = 1}

fminsearch returns

0.808518411146547
1.00012215100964
1.13507247075777

which are incorrect: the true parameters are rho0 = 1, rho1 = 0.3 and sigma2_epsilon = 1. As a check, I also estimated the model by least squares, and those estimates are close to the true parameters (a sketch of such a check is given after the code). In the implementation I have ignored the constant terms of the log-likelihood. Can somebody please help me rectify the code and point out where I am going wrong? Thank you. The code:

clc
clear all
var_eps = 1;
epsilon = sqrt(var_eps)*randn(5000,1); % Gaussian noise exciting the AR model
theta0 = ones(3,1);                    % initial guess for [rho0; rho1; sigma2_epsilon]
ERS = zeros(5000,1);                   % preallocate the series as a column vector
for t = 2:5000
    ERS(t) = 1 + 0.3*ERS(t-1) + epsilon(t-1); % AR(1) model: c = 1, rho1 = 0.3
end
[theta,opt] = fminsearch(@(theta) ll_AR1(theta,ERS),theta0);

function L2 = ll_AR1(theta,Y)
% Negated Gaussian AR(1) log-likelihood (constant terms dropped), so
% minimizing L2 with fminsearch maximizes the log-likelihood.
rho0 = theta(1);            % intercept c
rho1 = theta(2);            % AR coefficient
sigma2_epsilon = theta(3);  % innovation variance
T = size(Y,1);
% sign-flipped contribution of observations 2..T
L1 = 0.5/sigma2_epsilon*(Y(2:end) - rho0 - rho1*Y(1:end-1)).^2;
% add the contribution of the first observation and the variance terms
L2 = 0.5*log(sigma2_epsilon/(1-rho1^2)) ...
    + 0.5*(1-rho1^2)/sigma2_epsilon*(Y(1) - rho0/(1-rho1))^2 ...
    + 0.5*(T-1)*log(sigma2_epsilon) ...
    + sum(L1); % the negated log-likelihood
end
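
For reference, the least-squares check mentioned above could look like the following (a minimal sketch, not necessarily the exact check used; it reuses the simulated column vector ERS and regresses ERS(t) on a constant and ERS(t-1)):

X = [ones(numel(ERS)-1,1), ERS(1:end-1)];  % regressors: intercept and lagged series
b = X \ ERS(2:end);                        % OLS estimates of [rho0; rho1]
res = ERS(2:end) - X*b;                    % regression residuals
s2 = res'*res/(numel(res) - 2);            % residual variance estimate of sigma2_epsilon
disp([b; s2])                              % compare with the true values (1, 0.3, 1)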

1 Answer

Wouter Kuijsters (best answer):

Your problem does not seem to be convex. This means fminsearch will find a local minimum rather than the global minimum, so your result will depend on your initial estimate. You can check this by changing your theta0 value to, for example,

theta0 = 0.99*ones(3,1);

which yields completely different results.

One way of working around this problem is a Monte Carlo (multi-start) approach: you run the optimization over and over again from different initial conditions and select the run with the best objective value (along with its parameter set theta). Since you negated the log-likelihood, the best run is the one with the smallest value returned by fminsearch, i.e. the largest log-likelihood. The more starts you run, the more likely you are to find the 'true' optimum, although this is not a guaranteed outcome. A minimal sketch is shown below.
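
A minimal sketch of such a multi-start loop, assuming ll_AR1 and ERS are defined as in the question (nStarts and the sampling range of the starts are illustrative choices):

nStarts = 50;                   % illustrative number of restarts
bestVal = Inf;
bestTheta = [];
for k = 1:nStarts
    theta0 = rand(3,1);         % random start in (0,1): keeps rho1 in the
                                % stationary region and sigma2_epsilon > 0
    [theta,fval] = fminsearch(@(th) ll_AR1(th,ERS), theta0);
    if fval < bestVal           % smaller negated log-likelihood = better fit
        bestVal = fval;
        bestTheta = theta;
    end
end
disp(bestTheta)                 % best parameter estimates found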