Newton's Gradient Descent Linear Regression


I am trying to implement a function in MATLAB that fits a linear regression using Newton's method. However, I am stuck at one point: I don't know how to find the second derivative, so I cannot complete the implementation. Here is my code.

Thanks for your help.

function [costs,thetas] = mod_gd_linear_reg(x,y,numofit)

    theta = zeros(1,2);               % theta = [slope, intercept]
    o = ones(size(x));
    x = [x, o]';                      % 2 x m design matrix
    for i = 1:numofit

        err = (x.' * theta.') - y;            % m x 1 residuals
        delta = (x * err) / length(y);        %% first derivative (gradient)
        delta2 = ???;                         %% second derivative -- this is where I am stuck

        theta = theta - (delta ./ delta2).';

        costs(i)=cost(x,y,theta);
        thetas(i,:)=theta;


    end

end
function totCost = cost(x,y,theta)

    totCost = sum(((x.' * theta.') - y).^2) / (2*length(y));  % note the parentheses: /(2*m), not (.../2)*m

end

Edit:

I solved this with pen and paper; all it takes is some calculus and matrix operations. I found the second derivative and it is working now. I am sharing my working code for anyone interested.
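For reference, a sketch of the pen-and-paper derivation (with m = length(y) and theta = [theta_1, theta_2] = [slope, intercept]):

    J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} (\theta_1 x_i + \theta_2 - y_i)^2

    \nabla J(\theta) = \frac{1}{m} X (X^\top \theta^\top - y), \qquad
    \nabla^2 J(\theta) = \frac{1}{m} X X^\top
                       = \frac{1}{m} \begin{bmatrix} \sum_i x_i^2 & \sum_i x_i \\ \sum_i x_i & m \end{bmatrix}

where X = [x, 1]^\top is the 2 x m design matrix built in the code. Because the cost is quadratic, the Hessian is constant and the Newton step lands on the optimum in a single iteration.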

function [costs,thetas] = mod_gd_linear_reg(x,y,numofit)

theta = zeros(1,2);          % theta = [slope, intercept]
sos  = sum(x.^2);            % sum_i x_i^2 (vectorized)
sumx = sum(x);               % sum_i x_i
o = ones(size(x));
x = [x, o]';                 % 2 x m design matrix
costs  = zeros(numofit,1);   % preallocate outputs
thetas = zeros(numofit,2);
for i=1:numofit

    err = (x.' * theta.') - y;                % m x 1 residuals
    delta = (x * err) / length(y);            %% first derivative (gradient)
    delta2 = [sos, sumx; sumx, length(y)];    %% second derivative: the (unscaled) Hessian X*X'

    theta = theta - (delta.' * length(y) / delta2);   % Newton step: grad' * inv(Hessian)

    costs(i)=cost(x,y,theta);
    thetas(i,:)=theta;


end

end

function totCost = cost(x,y,theta)

    totCost = sum(((x.' * theta.') - y).^2) / (2*length(y));  % note the parentheses: /(2*m), not (.../2)*m

end
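A quick way to try it out (the synthetic data below is mine, not from the question; x and y are assumed to be column vectors):

    % generate noisy data from a known line, then fit it
    m = 100;
    x = linspace(0, 10, m)';              % m x 1 column vector
    y = 3*x + 7 + 0.5*randn(m, 1);        % true slope 3, intercept 7
    [costs, thetas] = mod_gd_linear_reg(x, y, 5);
    thetas(end, :)                        % should be close to [3, 7]

Since the cost is quadratic, the estimates should already be at the optimum after the first iteration.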
1 Answer

Answered by Ray:
It is known that the second derivative may be difficult to find in general.

This note (page 6) might be helpful in that regard.

If you find the full Newton's method difficult, you can use ready-made minimizers such as fminunc or fmincg instead.
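For example, a minimal sketch with fminunc (assuming the same column-vector x and y as above; the anonymous cost function is mine, not from the question):

    % fit [slope, intercept] by minimizing the mean squared error with fminunc
    X = [x, ones(size(x))];                           % m x 2 design matrix
    J = @(t) sum((X*t - y).^2) / (2*length(y));       % same quadratic cost as cost()
    t0 = zeros(2, 1);                                 % initial guess
    theta = fminunc(J, t0);                           % theta(1) = slope, theta(2) = intercept

Here fminunc approximates the derivatives numerically, so you never have to work out the Hessian by hand.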