I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent would be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type-checks, but of course I'm missing the "pos" call, and at runtime it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles with "True" == 1.0 and "False" == 0.0, does it turn negative values to zero and leave positive values unchanged?
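That reading can be checked with a small base-only Haskell sketch (no hmatrix needed; the names posCast and pos here are my own): fromEnum on a Bool gives 0 or 1, mirroring MATLAB's double() cast, and the result coincides with simply taking max 0 elementwise.

```haskell
-- Element-wise positive part, mimicking MATLAB's  P = A .* double(A > 0).
-- fromEnum True == 1 and fromEnum False == 0, like MATLAB's double() cast.
posCast :: Double -> Double
posCast x = x * fromIntegral (fromEnum (x > 0))

-- The same operation written directly.
pos :: Double -> Double
pos = max 0

main :: IO ()
main = do
  let xs = [-2.5, 0.0, 1.5, 3.0]
  -- Both versions zero out negatives and pass positives through unchanged.
  print (map posCast xs == map pos xs)
```

(The two differ only in that the cast version can produce IEEE negative zero, which still compares equal to 0.0.)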
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix, though note that Matlab makes no distinction between Matrix and Vector, unlike Haskell.
But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that
Y = U * S * V'
where, assuming Y is m x n, U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V should be orthonormal. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and the central singular-value scaling component.
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract a term tau, which must also be a vector. This might produce a diagonal containing negative values, which cannot be properly interpreted as singular values, so pos is there to trim out the negative entries, setting them to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) returns a vector of singular values instead of a mostly-zeroed diagonal matrix:
(u, s, v) = thinSVD y
a = u `multiply` diag (fmap (max 0) s) `multiply` trans v
Above, fmap maps max 0 over the Vector of singular values s, and then diag (from Numeric.Container) reinflates the Vector into a Matrix prior to the multiplys. With a little thought it's easy to see that max 0 is just pos applied to a single element.
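The same max 0 trick covers the other use of pos in the original fragment: E = sign(Y) .* pos(abs(Y) - lambda*tau) is element-wise soft-thresholding (shrinkage). Here is a base-only sketch of the scalar operation (softThreshold is my own name), which you would then lift over a hmatrix Matrix with cmap or mapMatrix, depending on your hmatrix version:

```haskell
-- Scalar soft-threshold: shrink the magnitude of y toward zero by t,
-- zeroing anything whose magnitude is at most t. Matches MATLAB's
--   sign(y) .* pos(abs(y) - t)
-- applied one element at a time.
softThreshold :: Double -> Double -> Double
softThreshold t y = signum y * max 0 (abs y - t)

main :: IO ()
main = do
  print (softThreshold 1.0 3.0)     -- 2.0: shrunk toward zero by 1
  print (softThreshold 1.0 (-3.0))  -- -2.0: sign is preserved
  print (softThreshold 1.0 0.5)     -- 0.0: small values are zeroed out
```

This shrinkage operator is the proximal map of the L1 norm, which is why it shows up in sparse-recovery algorithms like TILT and Robust PCA.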