I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent would be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type checks OK, but of course I'm missing the "pos" call, and it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles, with true == 1.0 and false == 0.0, does it turn negative values to zero and leave positive values unchanged?
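If that reading is right, the elementwise rule is easy to check in GHCi (my own sketch, not from the TILT code):

ghci> let posElem x = x * (if x > 0 then 1 else 0)
ghci> map posElem [-1, 2, 3, -4]
[0,2,3,0]

So it seems to be an indicator-mask multiply, just like A .* double(A > 0).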
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix. Note, though, that Matlab makes no distinction between Matrix and Vector, unlike Haskell.
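For instance (my sketch, assuming the older hmatrix API, where mapMatrix comes via Numeric.LinearAlgebra):

import Numeric.LinearAlgebra

-- Elementwise positive part, mirroring MATLAB's pos: negatives become 0.
pos :: Matrix Double -> Matrix Double
pos = mapMatrix (max 0)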
But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that

Y = U * S * V'

where, assuming Y is m x n with m >= n, U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V have orthonormal columns. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and the central singular-value scaling component.
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract a term tau (a scalar threshold, or a vector of the same length; MATLAB accepts either). This can produce negative entries, which cannot be properly interpreted as singular values, so pos is there to trim out the negatives, setting them to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
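To see that thresholding step on concrete (made-up) numbers, using plain lists in GHCi:

ghci> map (max 0 . subtract 2) [5, 3, 1, 0.5]
[3.0,1.0,0.0,0.0]

Here 2 plays the role of tau: values at or below it collapse to 0, the rest shrink toward it.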
Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) return vectors of singular values directly instead of mostly-zeroed diagonal matrices.
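Concretely, a sketch of the full A = U * diag(pos(diagS - tau)) * V' line (assuming the pre-0.16 hmatrix API, a scalar tau, and cmap in place of fmap, since hmatrix's storable Vector has no Functor instance):

import Numeric.LinearAlgebra

-- Shrink the singular values of y by tau, clamp at zero, and rebuild.
shrink :: Double -> Matrix Double -> Matrix Double
shrink tau y = u `multiply` diag s' `multiply` trans v
  where
    (u, s, v) = thinSVD y
    s' = cmap (max 0 . subtract tau) s  -- pos(diagS - tau), elementwise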
Above, cmap maps max 0 over the Vector of singular values s (after the tau shift), and then diag (from Numeric.Container) reinflates the Vector into a Matrix prior to the multiply calls. With a little thought it's easy to see that max 0 is just pos applied to a single element.