Why is my Gaussian elimination algorithm failing?


In Gaussian elimination (more precisely, Gauss-Jordan elimination), we take our matrix A and an identity matrix I of the same size.

We then apply row operations to reduce A to the identity matrix, performing every operation on I as well. When A has become the identity, I has become the inverse A^{-1}.
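For reference, that procedure can be sketched as follows. This is a minimal version using only the standard library and exact Fraction arithmetic; it does no pivoting, so it assumes every pivot it encounters is nonzero (not the asker's code, just an illustration of the idea):

```python
from fractions import Fraction

def invert(M):
    """Invert a square matrix by Gauss-Jordan elimination (no pivoting)."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    I = [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        # scale row k so the pivot becomes 1 (assumes A[k][k] != 0)
        pivot = A[k][k]
        for j in range(n):
            A[k][j] /= pivot
            I[k][j] /= pivot
        # clear column k from every other row, applying the same
        # operation to I
        for i in range(n):
            if i != k:
                coeff = A[i][k]
                for j in range(n):
                    A[i][j] -= coeff * A[k][j]
                    I[i][j] -= coeff * I[k][j]
    return I

print(invert([[5, 10, 20], [2, 8, 12], [4, 8, 8]]))
```

Clearing the pivot column in all other rows in the same pass avoids a separate back-substitution phase.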

However, in my code A does become the identity, but the first few rows of the computed inverse are incorrect.

I have spent more than a couple of hours debugging. What I realized is that my method works as long as my matrix is relatively sparse. For instance, this works -

A = [[10., 3, 10.], [0., 9., 0.], [10.,12.,1.]]

And yes, my matrix A always becomes the identity matrix in the end. The problem is that I does not become A^{-1}.

A quick explanation of my approach: I first convert my matrix into an upper triangular matrix, then convert that into an identity matrix. The code is written in Python 2.7.

Here's my code -


from fractions import Fraction

A = [[10., 3, 10.], [0., 9., 0.], [10.,12.,1.]]
for i in range(len(A)):
    for j in range(len(A)):
        A[i][j] = Fraction(A[i][j])
row = len(A)

print "row = ", row

I = [[0 for i in range(len(A))] for j in range(len(A))]
for i in range(len(I)):
    for j in range(len(I)):
        if i == j:
            I[i][j] = 1
        I[i][j] = Fraction(I[i][j])

print I
foo = {}
for i in range(len(A)):
    for j in range(len(A)):
        if i==j:
            foo[i] = A[i][j]



print "A before = ", A
print "I before = ", I

i = 0
for k in range(len(A)):
    normalize = A[k][k]
    if normalize != Fraction(0,1):
        for l in range(len(A)):
            A[k][l] = A[k][l]/normalize
        for l in range(len(I)):
            I[k][l] = I[k][l]/normalize
    i = k+1
    while i<len(A):
        coeff = A[i][k]
        for j in range(len(A)):

            A[i][j] = A[k][j]*coeff - A[i][j]

            I[i][j] = I[k][j]*coeff - I[i][j]
        i = i + 1
print "A intermediate = ", A
print "I intermediate = ", I


for i in range(len(A)):
    for j in range(len(A)):
        k = i
        while k+1<len(A):
            coeff = A[i][k+1]

            A[i][j] = A[i][j] - A[k+1][j]*coeff
            print "A[i][j] final =  ", A[i][j]

            I[i][j] = I[i][j] - I[k+1][j]*coeff
            k = k+1

print "A final = ", A
print "I final = ", I

Here's the output -

A final =  [[Fraction(1, 1), Fraction(0, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(1, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]]
I final =  [[Fraction(-1, 90), Fraction(-13, 90), Fraction(1, 9)], [Fraction(0, 1), Fraction(1, 9), Fraction(0, 1)], [Fraction(1, 9), Fraction(1, 9), Fraction(-1, 9)]]

Edit 1 - I need to solve this using only standard Python libraries.

Edit 2 - For instance, if I give this input -

A = [[5,10,20], [2, 8, 12], [4, 8, 8]]

I get the following output, which is wrong -

A final =  [[Fraction(1, 1), Fraction(0, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(1, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(0, 1), Fraction(1, 1)]]
I final =  [[Fraction(0, 1), Fraction(-1, 2), Fraction(1, 2)], [Fraction(-1, 5), Fraction(1, 4), Fraction(1, 8)], [Fraction(1, 10), Fraction(0, 1), Fraction(-1, 8)]]

Here's the right answer -

A^{-1} = [[1/5, -1/2, 1/4], [-1/5, 1/4, 1/8], [1/10, 0, -1/8]]
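As a sanity check independent of the elimination code, a candidate inverse can be verified by multiplying it back against A with exact Fraction arithmetic; the product should be the identity. A small sketch, with the matrices hard-coded from this example:

```python
from fractions import Fraction

A = [[5, 10, 20], [2, 8, 12], [4, 8, 8]]
Ainv = [[Fraction(1, 5), Fraction(-1, 2), Fraction(1, 4)],
        [Fraction(-1, 5), Fraction(1, 4), Fraction(1, 8)],
        [Fraction(1, 10), Fraction(0), Fraction(-1, 8)]]

n = len(A)
# compute the matrix product A * Ainv entry by entry
product = [[sum(Fraction(A[i][k]) * Ainv[k][j] for k in range(n))
            for j in range(n)]
           for i in range(n)]

identity = [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
            for i in range(n)]
print(product == identity)  # → True
```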

1 answer

GabeDoubleU:

Hey, I rewrote the code so the steps the computer takes can be traced more easily, and I think the issue may lie in how rows are added in the identity matrix before the row multiplication. Here's the rewritten code:

from fractions import Fraction

def p_frac(x):
    if x.denominator == 0: return "NaN"
    if x.numerator == 0: return "0"
    if x.denominator == 1: return str(x.numerator)
    return str(x.numerator)+"/"+str(x.denominator)

A = [[10., 3, 10.], [0., 9., 0.], [10.,12.,1.]]
for i in range(len(A)):
    for j in range(len(A)):
        A[i][j] = Fraction(A[i][j])
row = len(A)

print ("row = ", row)

I = [[0 for i in range(len(A))] for j in range(len(A))]
for i in range(len(I)):
    for j in range(len(I)):
        if i == j:
            I[i][j] = 1
        I[i][j] = Fraction(I[i][j])

print (I)
foo = {}
for i in range(len(A)):
    for j in range(len(A)):
        if i==j:
            foo[i] = A[i][j]

print ("A before = ")
print('\n'.join(['\t'.join([p_frac(cell) for cell in row]) for row in A]))
print ("I before = ")
print('\n'.join(['\t'.join([p_frac(cell) for cell in row]) for row in I]))
i = 0
for k in range(len(A)):
    normalize = A[k][k]
    if normalize != Fraction(0,1):
        for l in range(len(A)):
            A[k][l] = A[k][l]/normalize
        for l in range(len(I)):
            I[k][l] = I[k][l]/normalize
    i = k+1
    while i<len(A):
        coeff = A[i][k]
        for j in range(len(A)):

            A[i][j] = A[k][j]*coeff - A[i][j]

            I[i][j] = I[k][j]*coeff - I[i][j]
        i = i + 1
    print("A step", str(k+1))
    print('\n'.join(['\t'.join([p_frac(cell) for cell in row]) for row in A]))
    print("I step", str(k + 1))
    print('\n'.join(['\t'.join([p_frac(cell) for cell in row]) for row in I]))

for i in range(len(A)):
    for j in range(len(A)):
        k = i
        while k+1<len(A):
            coeff = A[i][k+1]

            A[i][j] = A[i][j] - A[k+1][j]*coeff
            # print ("A[i][j] final =  ", A[i][j])

            I[i][j] = I[i][j] - I[k+1][j]*coeff
            k = k+1

print ("A final = ")
print('\n'.join(['\t'.join([p_frac(cell) for cell in row]) for row in A]))
print ("I final = ")
print('\n'.join(['\t'.join([p_frac(cell) for cell in row]) for row in I]))

And here's where I noticed a discrepancy in the algorithm:

You have this after k=0:

A step 1
1       3/10    1
0       -9      0
0       -9      9
I step 1
1/10    0       0
0       -1      0
1       0       -1

But after k=1, you have:

A step 2
1       3/10    1
0       1       0
0       0       -9
I step 2
1/10    0       0
0       1/9     0
-1      -1      1

If I'm not mistaken, the entry in the third row and second column of I step 2 should be +1 (under the usual update row_i := row_i - coeff*row_k, that entry is 0 - (-9)*(1/9) = +1), right?
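One more thing may be worth testing in the question's final loop: coeff = A[i][k+1] is re-read inside the column loop, so different columns of the same row can see different multipliers as entries of A are progressively zeroed, and some updates to I are then made with coeff = 0. Since the diagonal is already 1, A still ends up as the identity, which may be why only I looks wrong. Below is a sketch of the same two-phase process, with the forward pass mirroring the question's row operations and a back-substitution pass that reads each multiplier once, before it is overwritten, and applies it to a whole row of A and I together (this is a hypothesized fix, not the asker's original code):

```python
from fractions import Fraction

A = [[Fraction(x) for x in row]
     for row in [[5, 10, 20], [2, 8, 12], [4, 8, 8]]]
n = len(A)
I = [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
     for i in range(n)]

# forward pass: same row operations as in the question
for k in range(n):
    normalize = A[k][k]
    if normalize != 0:
        for l in range(n):
            A[k][l] /= normalize
            I[k][l] /= normalize
    for i in range(k + 1, n):
        coeff = A[i][k]
        for j in range(n):
            A[i][j] = A[k][j] * coeff - A[i][j]
            I[i][j] = I[k][j] * coeff - I[i][j]

# back substitution: capture each multiplier once, before it is
# overwritten, and apply it to the whole row (A and I in lockstep)
for i in range(n):
    for k in range(i + 1, n):
        coeff = A[i][k]
        for j in range(n):
            A[i][j] -= A[k][j] * coeff
            I[i][j] -= I[k][j] * coeff

print(I)
```

On the failing example from Edit 2, this prints the expected inverse [[1/5, -1/2, 1/4], [-1/5, 1/4, 1/8], [1/10, 0, -1/8]] as Fractions.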