What is the Haskell / hmatrix equivalent of the MATLAB pos function?


I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent will be.

The MATLAB code looks like this:

[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;

My Haskell translation so far:

(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v

This actually type checks ok, but of course, I'm missing the "pos" call, and it throws the error:

inconsistent dimensions in matrix product (3,3) x (4,4)

So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)

Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!

Looks like the pos function is defined in a different MATLAB file:

function P = pos(A)
P = A .* double( A > 0 );

I can't quite decipher what this is doing. Assuming that boolean values cast to doubles, where true == 1.0 and false == 0.0, it turns negative values to zero and leaves positive numbers unchanged?
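
That reading checks out on a small example. Here is a throwaway sketch in plain Haskell (no hmatrix needed; the sample values are made up purely for illustration):

-- Mimics the MATLAB definition P = A .* double(A > 0), one element at a time.
posElem :: Double -> Double
posElem x = x * (if x > 0 then 1 else 0)

-- map posElem [-1, 2, -3, 0.5]  ==  [0.0, 2.0, 0.0, 0.5]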


There are 2 answers

J. Abrahamson (BEST ANSWER)

It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix

pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go where
  -- keep positive entries, clamp everything else to zero
  go x | x > 0     = x
       | otherwise = 0

Note, though, that Matlab makes no distinction between Matrix and Vector, unlike Haskell.
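
As a quick sanity check, here is a sketch using the classic Data.Packed-style hmatrix API used in the rest of this answer, with a made-up 2x3 matrix:

example :: Matrix Double
example = pos ((2><3) [-1, 2, -3, 4, -5, 6])
-- expected: the same matrix with -1, -3 and -5 replaced by 0.0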

But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that

U * S * V' = Y

where, assuming Y is m x n (with m >= n), U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V have orthonormal columns. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and a central scaling component built from the singular values.
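
A small shape check in hmatrix makes those dimensions concrete (a sketch assuming the old Data.Packed API, where dim gives a Vector's length):

checkShapes :: Matrix Double -> ((Int, Int), Int, (Int, Int))
checkShapes y = ((rows u, cols u), dim s, (rows v, cols v))
  where (u, s, v) = thinSVD y
-- for an m x n input with m >= n this gives ((m, n), n, (n, n))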

Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract tau (a scalar threshold, or a vector of matching length). This might produce negative entries, which cannot be interpreted as singular values, so pos is there to clamp those negative entries to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
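
For instance, a toy version of that thresholding step (a sketch; the singular values and tau = 1 are invented, and cmap comes from Numeric.Container):

shrunk :: Vector Double
shrunk = cmap (\x -> max 0 (x - 1)) (fromList [3, 1.5, 0.2])
-- shrunk == fromList [2.0, 0.5, 0.0]: the last value goes negative and is clamped by pos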

Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) returns the singular values as a Vector instead of a mostly-zero diagonal matrix.

(u, s, v) = thinSVD y

-- note the trans here, that was the ' in Matlab;
-- cmap does the elementwise shrink-and-clamp, assuming tau is a scalar Double
a = u `multiply` diag (cmap (\x -> max 0 (x - tau)) s) `multiply` trans v

Above, cmap (from Numeric.Container; hmatrix's Vector has no Functor instance, so fmap won't do) maps the shrink-and-clamp function over the Vector of singular values s, and then diag reinflates the Vector into a Matrix prior to the multiplys. With a little thought it's easy to see that max 0 is just pos applied to a single element.

Sean

(A > 0) returns a matrix of ones and zeros marking which elements of A are larger than zero, so for example, if you have

A = [ -1 2 -3 4
       5 6 -7 -8 ]

then B = (A > 0) returns

B = [ 0 1 0 1
      1 1 0 0]

Note that we have ones corresponding to the elements of A which are larger than zero, and 0 otherwise.

Now if you multiply this elementwise with A using the .* notation, then you are multiplying each element of A that is larger than zero by 1, and every other element by zero. That is, A .* B means

[ -1*0   2*1   -3*0   4*1
   5*1   6*1   -7*0  -8*0 ]

giving finally,

[ 0 2 0 4
  5 6 0 0 ]

So you need to write your own function that returns positive values intact and sets negative values to zero.
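
In hmatrix that can be a one-liner (a sketch, assuming the same Numeric.Container API as the accepted answer):

-- clamp every negative entry of a matrix to zero
pos :: Matrix Double -> Matrix Double
pos = cmap (max 0)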

Also, u and v do not match in dimension for a general SVD decomposition, so you actually need to re-diagonalize pos(diagS - tau) with diag, so that u * diag(pos(diagS - tau)) * v' multiplies with consistent dimensions.