I would like to calculate the singular value decomposition of a matrix and the order of the singular values is important. By default, it seems numpy.linalg.svd (and scipy.linalg.svd) sort the singular values, which makes it impossible for me to tell which column corresponds to each singular value.

Example:

import numpy as np

X = np.array([[-74, 80, 18, -56, -112],
              [14, -69, 21, 52, 104],
              [66, -72, -5, 764, 1528],
              [-12, 66, -30, 4096, 8192],
              [3, 8, -7, -13276, -26552],
              [4, -12, 4, 8421, 16842]])

U, D, V = np.linalg.svd(X)
print(D)

Returns:

array([3.63684045e+04, 1.70701331e+02, 6.05331879e+01, 7.60190176e+00,
        1.17158094e-12])

When I need:

array([1.70701331e+02, 6.05331879e+01, 7.60190176e+00, 3.63684045e+04, 
        1.17158094e-12])

Is there a way to get the singular values (D) such that they are unsorted? The relation X = UDV^T must also be preserved.

Edit: Some context to clarify my misunderstanding: I was trying to reproduce Section 2.3 (Variance Decomposition Method) in this paper.

1 Answer

Answered by godot (best solution)

When you say:

By default, it seems numpy.linalg.svd (and scipy.linalg.svd) sort the singular values, which makes it impossible for me to tell which column corresponds to each singular value.

I think you're making a mistake: there is no unique order of the singular values in the singular value decomposition. All that matters is that the columns of U, the entries of D, and the rows of V are arranged consistently, so that U * diag(D) * V == X.

That's why, by convention, they are returned in descending order; the column vectors of the unitary basis U and the rows of V (which numpy already returns as the conjugate transpose) are ordered to match, so that the formula above holds.
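To make this concrete, here is a minimal sketch (my own illustration, using numpy's thin SVD) showing that any permutation applied consistently to the columns of U, the entries of D, and the rows of V leaves the reconstruction unchanged:

```python
import numpy as np

X = np.array([[-74, 80, 18, -56, -112],
              [14, -69, 21, 52, 104],
              [66, -72, -5, 764, 1528],
              [-12, 66, -30, 4096, 8192],
              [3, 8, -7, -13276, -26552],
              [4, -12, 4, 8421, 16842]], dtype=float)

U, D, Vh = np.linalg.svd(X, full_matrices=False)

# Pick any permutation of the singular values, e.g. ascending order
perm = np.argsort(D)
U_p = U[:, perm]    # permute the columns of U ...
D_p = D[perm]       # ... the singular values ...
Vh_p = Vh[perm, :]  # ... and the rows of Vh in the same way

# The product is unchanged, so the decomposition is equally valid
assert np.allclose(U_p @ np.diag(D_p) @ Vh_p, X)
```

In other words, you are free to reorder the singular values however you like, as long as you carry the corresponding columns of U and rows of V along with them.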

As a check, you can reconstruct X from U, D and V like this:

import numpy as np
from scipy import linalg

# decompose
U, D, V = np.linalg.svd(X)

# get the dimensions of X
M, N = X.shape

# Construct the Sigma matrix of the SVD (diagsvd pads diag(D) with
# zero rows to match the shape of X)
Sig = linalg.diagsvd(D, M, N)

# Now you can get X back:
assert np.allclose(np.dot(U, np.dot(Sig, V)), X)
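If you don't need the full square U, a sketch using the thin SVD (full_matrices=False) avoids constructing the padded Sigma matrix entirely, since D can then be used directly as a diagonal matrix:

```python
import numpy as np

X = np.array([[-74, 80, 18, -56, -112],
              [14, -69, 21, 52, 104],
              [66, -72, -5, 764, 1528],
              [-12, 66, -30, 4096, 8192],
              [3, 8, -7, -13276, -26552],
              [4, -12, 4, 8421, 16842]], dtype=float)

# thin SVD: U is (6, 5) instead of (6, 6), so diag(D) fits directly
U, D, Vh = np.linalg.svd(X, full_matrices=False)

assert np.allclose(U @ np.diag(D) @ Vh, X)
```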