I'm currently trying to implement the Householder based QR decomposition for rectangular matrices as described in http://eprints.ma.man.ac.uk/1192/1/qrupdating_12nov08.pdf (pages 3, 4, 5).
Apparently I got some of the pseudocode wrong, since (1) my results differ from those of numpy.linalg.qr() and (2) the matrix R produced by my routines is not upper triangular.
My code (also available at https://pyfiddle.io/fiddle/afcc2e0e-0857-4cb2-adb5-06ff9b80c9d3/?i=true):
```python
import math
import argparse
import numpy as np
from typing import Union


def householder(alpha: float, x: np.ndarray) -> Union[np.ndarray, int]:
    """
    Computes Householder vector for alpha and x.
    :param alpha:
    :param x:
    :return:
    """
    s = math.pow(np.linalg.norm(x, ord=2), 2)
    v = x

    if s == 0:
        tau = 0
    else:
        t = math.sqrt(alpha * alpha + s)
        v_one = alpha - t if alpha <= 0 else -s / (alpha + t)
        tau = 2 * v_one * v_one / (s + v_one * v_one)
        v /= v_one

    return v, tau


def qr_decomposition(A: np.ndarray, m: int, n: int) -> Union[np.ndarray, np.ndarray]:
    """
    Applies Householder-based QR decomposition on specified matrix A.
    :param A:
    :param m:
    :param n:
    :return:
    """
    H = []
    R = A
    Q = A
    I = np.eye(m, m)

    for j in range(0, n):
        # Apply Householder transformation.
        x = A[j + 1:m, j]
        v_householder, tau = householder(np.linalg.norm(x), x)
        v = np.zeros((1, m))
        v[0, j] = 1
        v[0, j + 1:m] = v_householder

        res = I - tau * v * np.transpose(v)
        R = np.matmul(res, R)
        H.append(res)

    return Q, R


m = 10
n = 8
A = np.random.rand(m, n)
q, r = np.linalg.qr(A)
Q, R = qr_decomposition(A, m, n)

print("*****")
print(Q)
print(q)
print("-----")
print(R)
print(r)
```
So I'm unclear about how to introduce zeros into my R matrix, i.e. about which part of my code is incorrect. I'd be happy about any pointers! Thanks a lot for your time.
There were a bunch of problems/missing details in the notes you linked to. After consulting a few other sources (including this very useful textbook), I was able to come up with a working implementation of something similar.
The working algorithm

Here's the code for a working version of `qr_decomposition`:
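The answer's original listing was stripped from this copy, so the following is a reconstruction along the lines the answer describes rather than its verbatim code: each reflector is built from the current column of `R` (not of `A`, and with the diagonal entry as `alpha`), embedded in an m-by-m identity, and applied via a proper outer product.

```python
import numpy as np


def householder(alpha, x):
    """Compute v and tau so that H = I - tau * [1; v][1; v]^T
    maps [alpha; x] onto [t; 0], where t = ||[alpha; x]||."""
    s = np.dot(x, x)
    v = x.copy()
    if s == 0:
        tau = 0.0
    else:
        t = np.sqrt(alpha * alpha + s)
        # Numerically stable computation of alpha - t.
        v_one = alpha - t if alpha <= 0 else -s / (alpha + t)
        tau = 2 * v_one * v_one / (s + v_one * v_one)
        v /= v_one
    return v, tau


def qr_decomposition(A):
    m, n = A.shape
    R = A.astype(float)  # work on a float copy of A
    Q = np.eye(m)
    for j in range(n):
        # The reflector comes from the *current* R, at and below the
        # diagonal of column j -- not from the original A.
        v, tau = householder(R[j, j], R[j + 1:, j])
        w = np.zeros(m)
        w[j] = 1.0
        w[j + 1:] = v
        H = np.eye(m) - tau * np.outer(w, w)  # outer product, not elementwise
        R = H @ R
        Q = Q @ H  # H is symmetric and orthogonal
    return Q, R
```

Note that this builds the full m-by-m `Q`, whereas `np.linalg.qr` returns the reduced factors by default.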
This version of `qr_decomposition` near-exactly reproduces the output of `np.linalg.qr`. The differences are commented on below.

Numerical precision of the output
The values in the outputs of `np.linalg.qr` and `qr_decomposition` match to high precision. However, the combination of computations that `qr_decomposition` uses to produce the zeros in `R` doesn't exactly cancel, so the zeros aren't actually quite equal to zero.

It turns out that `np.linalg.qr` isn't doing any fancy floating point tricks to ensure that the zeros in its output are `0.0`. It just calls `np.triu`, which forcibly sets those values to `0.0`. So to achieve the same results, just change the `return` line in `qr_decomposition` to apply `np.triu` to `R` (i.e. `return Q, np.triu(R)`).

Signs (+/-) in the output

Some of the
+/- signs in `Q` and `R` are different in the outputs of `np.linalg.qr` and `qr_decomposition`, but this isn't really an issue, as there are many valid choices for the signs (see this discussion of the uniqueness of Q and R). You can exactly match the sign convention that `np.linalg.qr` uses by using an alternative algorithm to generate `v` and `tau`:
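The alternative algorithm itself was lost from this copy of the answer. One way to match the convention, sketched here after LAPACK's `dlarfg` routine (which `np.linalg.qr` ultimately relies on), is to choose `beta = -sign(alpha) * ||a||`; the function name and exact formulation below are illustrative, not the answer's original code:

```python
import numpy as np


def householder_lapack(a):
    """Reflector following LAPACK's sign convention (as in dlarfg):
    (I - tau * w w^T) a = beta * e1, with beta = -sign(a[0]) * ||a||."""
    alpha = a[0]
    s = np.dot(a[1:], a[1:])
    t = np.sqrt(alpha * alpha + s)  # ||a||
    if t == 0.0:
        return np.zeros_like(a[1:]), 0.0
    beta = -t if alpha >= 0 else t
    tau = (beta - alpha) / beta
    v = a[1:] / (alpha - beta)  # trailing entries of w; w[0] is implicitly 1
    return v, tau
```

With this choice the diagonal entry produced in `R` is `beta`, which carries the same sign LAPACK would produce, instead of always being positive.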
Exactly matching the output of np.linalg.qr

Putting it all together, this version of `qr_decomposition` will exactly match the output of `np.linalg.qr`:
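The combined listing was also stripped from this copy; a reconstruction that folds in the LAPACK-style sign convention and the `np.triu` call might look like the following. It compares against `np.linalg.qr(A, mode='complete')`, since this routine builds the full m-by-m `Q`:

```python
import numpy as np


def householder(alpha, x):
    """LAPACK-convention reflector: beta = -sign(alpha) * ||[alpha; x]||."""
    s = np.dot(x, x)
    t = np.sqrt(alpha * alpha + s)
    if t == 0.0:
        return x, 0.0
    beta = -t if alpha >= 0 else t
    tau = (beta - alpha) / beta
    v = x / (alpha - beta)
    return v, tau


def qr_decomposition(A):
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for j in range(n):
        v, tau = householder(R[j, j], R[j + 1:, j])
        w = np.zeros(m)
        w[j] = 1.0
        w[j + 1:] = v
        H = np.eye(m) - tau * np.outer(w, w)
        R = H @ R
        Q = Q @ H
    # np.triu forces the below-diagonal entries to exactly 0.0,
    # just as np.linalg.qr does.
    return Q, np.triu(R)


A = np.random.rand(10, 8)
q, r = np.linalg.qr(A, mode='complete')  # full m-by-m Q to match our output
Q, R = qr_decomposition(A)
print(np.allclose(Q, q), np.allclose(R, r))
```

Both comparisons should come out True for a generic random matrix (degenerate columns aside).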
Aside from the inevitable rounding error in the trailing digits, the outputs now match.