I'm reading "Matrix decompositions and latent semantic indexing" (online edition © 2009 Cambridge UP).
I'm trying to understand how you reduce the number of dimensions in a matrix. There's an example on page 13 which I'm trying to replicate using Python's numpy.
Let's call the original occurrence matrix "a" and the three SVD (Singular Value Decomposition) decomposed matrices "U", "S" and "V".
The trouble I'm having is that after I zero out the smaller singular values in "S" and then multiply "U", "S" and "V" back together using numpy, the result is not what is given in the pdf: the bottom 3 rows are not all zeros. The funny thing is that when I just multiply "S" and "V" I get the right answer.
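Roughly, this is what I'm doing in numpy (the matrix here is just a stand-in for the occurrence matrix in the book's example; the values are illustrative only):

```python
import numpy as np

# Stand-in for the occurrence matrix "a" from page 13 (illustrative values only).
a = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1]])

U, S, V = np.linalg.svd(a, full_matrices=False)
S[2:] = 0  # zero out the smaller singular values

print(np.dot(np.dot(U, np.diag(S)), V))  # U*S*V: bottom 3 rows are NOT all zeros
print(np.dot(np.diag(S), V))             # S*V: bottom 3 rows ARE all zeros
```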
This is sort of surprising, but multiplying "S" and "V" is actually what Manning and Schütze's book Foundations of Statistical Natural Language Processing says you have to do. However, this is not what the pdf says to do on page 10.
So what's going on here?
Multiplying "S" and "V" is exactly what you have to do to perform dimensionality reduction with SVD/LSA. This gives a matrix where all but the last few rows are zeros, so they can be removed, and in practice this is the matrix you would use in applications:
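For example, a sketch reusing the "a" from your snippet above (any term-document matrix works; the names here are illustrative):

```python
import numpy as np

U, s, Vt = np.linalg.svd(a, full_matrices=False)

k = 2
s_k = s.copy()
s_k[k:] = 0                    # keep only the k largest singular values

SV = np.dot(np.diag(s_k), Vt)  # S*V: every row after the first k is all zeros
reduced = SV[:k]               # drop the zero rows -> k-dimensional document vectors
print(reduced)
```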
What the PDF describes on page 10 is the recipe to get a low-rank reconstruction of the input "C". Rank != dimensionality, and the sheer size and density of the reconstruction matrix make it impractical to use in LSA; its purpose is mostly mathematical. One thing you can do with it is check how good the reconstruction is for various values of "k":
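A sketch of that check, continuing from the snippet above (the Frobenius norm of the difference is one reasonable error measure):

```python
# Rank-k reconstruction C_k = U * S_k * V and its error, for each possible k.
for k in range(1, len(s) + 1):
    s_k = s.copy()
    s_k[k:] = 0                                # keep the k largest singular values
    C_k = np.dot(U, np.dot(np.diag(s_k), Vt))  # dense, same shape as "a"
    print(k, np.linalg.norm(a - C_k))          # Frobenius-norm reconstruction error
```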
Sanity check against scikit-learn's TruncatedSVD (full disclosure: I wrote that):
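A minimal comparison might look like this, continuing from the variables above (TruncatedSVD expects documents as rows, so "a" is transposed; the signs of the components may be flipped relative to the numpy result):

```python
from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=2)
docs = svd.fit_transform(a.T)   # one row per document, 2 latent dimensions

# Up to sign, each row of "docs" should match the corresponding column of SV[:2].
print(docs)
print(SV[:2].T)
```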