I am trying to find a vector that, when multiplied by a matrix, minimizes the residual sum of squares against a target vector.
I know of scipy's optimize package (which has a minimize function). However, there are extra constraints in my problem: the sum of all entries of w (see the function below) must equal 1, and no entry of w can be less than 0. Is there a package that handles this for me? If not, how can I do it?
The function I am trying to minimize over w:
import numpy as np

def w_rss(w, x0, x1):
    # Residual sum of squares of the predictions x0.dot(w) against x1.
    predictions = np.dot(x0, w)
    errors = x1 - predictions
    rss = np.dot(errors.transpose(), errors).item(0)
    return rss

X0 = np.array([[3, 4, 5, 3],
               [1, 2, 2, 4],
               [6, 5, 3, 7],
               [1, 0, 5, 2]])

X1 = np.array([[4],
               [2],
               [4],
               [2]])

W = np.array([[.0],
              [.5],
              [.5],
              [.0]])

print w_rss(W, X0, X1)
So far this is my best attempt at looping through possible values of w, but it's not working properly.
def get_w(x0, x1):
    J = x0.shape[1]
    W0 = np.matrix([[1.0/J]*J]).transpose()
    rss0 = w_rss(W0, x0, x1)
    loop = range(J)
    for i in loop:
        W1 = W0
        rss1 = rss0
        while rss0 == rss1:
            den = len(loop) - 1
            W1[i][0] += 0.01
            for j in loop:
                if i == j:
                    continue
                W1[j][0] -= 0.01/den
                if W1[j][0] <= 0:
                    loop.remove(j)
            rss1 = w_rss(W1, x0, x1)
            if rss1 < rss0:
                #print W1
                W0 = W1
                rss0 = rss1
    print '--'
    print rss0
    print W0
    return W0, rss0
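(Two likely reasons this loop misbehaves, though there may be others: W1 = W0 only rebinds a name, so W1 and W0 are the same matrix and every trial change also overwrites the supposed best-so-far; and loop.remove(j) mutates the list while it is being iterated. A minimal sketch of the first fix:)

W1 = W0.copy()   # work on an independent copy so W0 keeps the best-so-far values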
The SLSQP code in scipy can do this. You can use scipy.optimize.minimize with method='SLSQP', or you can use the function fmin_slsqp directly. In the following, I use fmin_slsqp.

The scipy solvers generally pass a one-dimensional array to the objective function, so to be consistent, I'll change W and X1 to be 1-d arrays, and I'll write the objective function (now called w_rss1) to expect a 1-d argument w.

The condition that all the elements in w must be between 0 and 1 is specified using the bounds argument, and the condition that the sum must be 1 is specified using the f_eqcons argument. The constraint function returns np.sum(w) - 1, so it is 0 when the sum of the elements is 1.

Here's the code:
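import numpy as np
from scipy.optimize import fmin_slsqp


def w_rss1(w, x0, x1):
    # w is a 1-d array of weights; return the residual sum of squares.
    predictions = np.dot(x0, w)
    errors = x1 - predictions
    return (errors ** 2).sum()


def sum1_constraint(w, *args):
    # 0 exactly when the entries of w sum to 1 (the helper's name is just illustrative).
    return np.sum(w) - 1


X0 = np.array([[3, 4, 5, 3],
               [1, 2, 2, 4],
               [6, 5, 3, 7],
               [1, 0, 5, 2]])

# X1 and the initial guess W are now 1-d arrays.
X1 = np.array([4.0, 2.0, 4.0, 2.0])
W = np.array([0.0, 0.5, 0.5, 0.0])

Wopt = fmin_slsqp(w_rss1, W,
                  f_eqcons=sum1_constraint,   # sum of the entries must be 1
                  bounds=[(0, 1)] * len(W),   # each entry between 0 and 1
                  args=(X0, X1))

print(Wopt)
print(w_rss1(Wopt, X0, X1))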
When I run this, the output is the optimized weight vector followed by its residual sum of squares.
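For reference, the same problem can be set up through the scipy.optimize.minimize interface mentioned at the top; a sketch of the equivalent call, assuming the same w_rss1, X0, X1 and W as above:

from scipy.optimize import minimize

# minimize takes the equality constraint in dict form:
# 'fun' must return 0 when the constraint is satisfied.
res = minimize(w_rss1, W, args=(X0, X1), method='SLSQP',
               bounds=[(0, 1)] * len(W),
               constraints={'type': 'eq', 'fun': lambda w: np.sum(w) - 1})
print(res.x)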