I am trying to learn how SVD works in order to use it in PCA (principal component analysis), but the problem is that I seem to get wrong results. I tried using np.linalg.svd, and this is my code:
import numpy as np

A = np.array([[2, 2],
              [1, 1]])
u, s, v = np.linalg.svd(A, full_matrices=False)
print(u)
print(s)
print(v)
and this is the result I got:
[[-0.89442719 -0.4472136 ]
 [-0.4472136   0.89442719]]
[3.16227766e+00 1.10062118e-17]
[[-0.70710678 -0.70710678]
 [ 0.70710678 -0.70710678]]
Then I tried to get the SVD decomposition on WolframAlpha and I got these results:
The magnitudes of the values seem correct but the signs do not match. I even followed along with a video by a professor on MIT OpenCourseWare on YouTube, and he gives these results:
which again have the same magnitudes but different signs. So what could possibly have gone wrong?


Different conventions
It is a matter of a different convention for returning the matrix v. From the documentation of numpy.linalg.svd (emphasis mine):

So to summarize: given the SVD decomposition of x,

x = u @ np.diag(s) @ vh

the matrices returned by numpy.linalg.svd(x) are u, s and vh, where vh is the Hermitian conjugate of v. Other libraries and software will instead return v, causing the apparent inconsistency. It is a shame that different libraries have different conventions; this also really tripped me up the first time I had to deal with it.
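You can verify this directly on the matrix from the question. For a real matrix the Hermitian conjugate is just the transpose, so transposing the vh that NumPy returns recovers the v that other tools report, and either form reconstructs A:

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 1.0]])
u, s, vh = np.linalg.svd(A, full_matrices=False)

# NumPy returns vh (the conjugate transpose of v); recover v itself:
v = vh.conj().T

# Both conventions describe the same decomposition of A:
assert np.allclose(u @ np.diag(s) @ vh, A)
assert np.allclose(u @ np.diag(s) @ v.conj().T, A)
```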
Intrinsic mathematical ambiguity in u and v
Furthermore, the mathematics of the problem means that the matrices u and v are not uniquely determined: flipping the sign of the i-th column of u together with the i-th column of v leaves the product unchanged, so different implementations can legitimately return different signs. In order to check that an SVD is correct, you need to check that the matrices u and v are indeed unitary and that x = u @ np.diag(s) @ vh. If both conditions hold, then the SVD is correct; otherwise it isn't.
Testing the numpy library
Here is some simple code to check that the implementation of the SVD in numpy is indeed correct (of course it is, consider this a pedagogical exercise):
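A minimal sketch of such a check could look like the following (the matrix size, number of trials, and random seed are arbitrary choices made here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

for _ in range(100):
    # Random complex test matrix
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    u, s, vh = np.linalg.svd(x)

    # Condition 1: u and vh must be unitary (U^H U = I)
    assert np.allclose(u.conj().T @ u, np.eye(n))
    assert np.allclose(vh @ vh.conj().T, np.eye(n))

    # Condition 2: the factorization must reconstruct x
    assert np.allclose(u @ np.diag(s) @ vh, x)

    # The sign ambiguity in action: flipping matching columns of u
    # and rows of vh yields another equally valid SVD.
    flip = np.diag([-1.0, 1.0, 1.0, 1.0])
    assert np.allclose((u @ flip) @ np.diag(s) @ (flip @ vh), x)

print("all checks passed")
```

Both conditions hold for every random matrix, confirming that the sign differences you saw are not errors, just a different (equally valid) choice of u and v.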