Python - Google Colab - IndexError: index 2 is out of bounds for axis 0 with size 1


I am trying to solve the sparse matrix equation

u_derivative_1 = A * u 

(A is the sparse matrix)

But I'm getting the following error:

  IndexError                                Traceback (most recent call last)
  <ipython-input-24-f4af80e4ae52> in <cell line: 1>()
  ----> 1 trial1 = discretise_delta_u_v4(1000, 'implicit')
  <ipython-input-23-731d13e4ddf7> in discretise_delta_u_v4(N, method)
       61         for i in range (1 , N-1):
       62           for j in range (1 , N-1):
  ---> 63             A[i,j] = (u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1] - (4*u[i,j]))/(h**2)
       64 
  IndexError: index 2 is out of bounds for axis 0 with size 1

I am confused about why I am getting this error and how to resolve it. This is my code:

import numpy as np
import scipy
import scipy.sparse
from scipy.sparse import csr_matrix
from scipy.sparse import coo_matrix

def discretise_delta_u_v4(N, method):

    i = np.arange(0,N)
    j = np.arange(0,N)
    h = 2/N

    A = csr_matrix((N, N), dtype = np.float64).toarray()
    u = np.array([[(i*h), (j*h)]])

    #u[ih,jh] = 
    u[:,0] = 5    #Boundary
    u[:,-1] = 0   #Boundary
    u[0,:] = 0    #Boundary
    u[-1,:] = 0   #Boundary

    #Implicit
    if (method=='implicit') :
        A[0,:] = 0
        A[-1,:] = 0
        A[:,0] = 0
        A[:,-1] = 0
        for i in range (1 , N-1):
          for j in range (1 , N-1):
            A[i,j] = (u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1] - (4*u[i,j]))/(h**2)

    # u_der_1 = A * u
    for i in range (0 , N):
      for j in range (0 , N):
        u_der_1 = scipy.sparse.linalg.spsolve(A,u)


trial1 = discretise_delta_u_v4(1000, 'implicit')

1 Answer

Answered by ABizz:

The specific error you're seeing is caused by the way you're constructing the array u.

Doing u = np.array([[(i*h), (j*h)]]) results in u being an array of shape (1, 2, 10) (for N = 10), which after your boundary assignments looks like:

[[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]]

That is not the (N, N) grid you intended: when you index u[i+1, j], the only valid value of i+1 on axis 0 is 0, so everything else is out of bounds, which is exactly the IndexError you're getting.
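You can confirm this with a quick check (a minimal sketch, assuming N = 10 so h = 0.2):

import numpy as np

N = 10
h = 2 / N
i = np.arange(0, N)
j = np.arange(0, N)

u = np.array([[(i * h), (j * h)]])
print(u.shape)  # (1, 2, 10) -- only index 0 is valid on axis 0
# u[2, 3]       # would raise: IndexError: index 2 is out of bounds for axis 0 with size 1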

Given the problem you are trying to solve, there are a few more issues with the approach.

  1. You want A to be a sparse matrix, but you immediately turn it back into a dense array with A = csr_matrix((N, N), dtype = np.float64).toarray(). Remove the call to .toarray() and keep it sparse; better still, assemble it in LIL format and convert to CSR once it is built (see the short sketch after this list).

  2. You're also initialising u incorrectly: you want a 1-dimensional array of N**2 elements (one entry per grid point), but instead you're creating the 3-D array of shape (1, 2, N) shown above.

  3. You have u_der_1 = scipy.sparse.linalg.spsolve(A, u) inside a double loop, which is also not correct. You need to build the whole matrix A first and then solve the system once.
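On point 1, here's a minimal sketch (with a toy 5 x 5 matrix) of why the corrected code below assembles A in LIL format and only converts to CSR at the end:

import numpy as np
from scipy.sparse import csr_matrix, lil_matrix

n = 5

# Assigning into a CSR matrix works, but SciPy emits a SparseEfficiencyWarning
# because changing the sparsity structure of CSR storage is expensive
A_csr = csr_matrix((n, n), dtype=np.float64)
A_csr[2, 2] = 1.0  # SparseEfficiencyWarning

# LIL storage is built for incremental assignment; convert to CSR once the
# matrix is fully assembled
A_lil = lil_matrix((n, n), dtype=np.float64)
A_lil[2, 2] = 1.0
A = A_lil.tocsr()
print(A.toarray())

With those fixes, the full corrected function looks like this: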

import numpy as np
import scipy.sparse
import scipy.sparse.linalg  # spsolve lives in this subpackage
from scipy.sparse import lil_matrix

def discretise_delta_u_v4(N, method):
    
    # Assemble A in LIL ("list of lists") sparse format, which supports
    # efficient element-by-element assignment
    h = 2 / N
    A = lil_matrix((N ** 2, N ** 2), dtype=np.float64)

    if method == 'implicit':
        for i in range(N):
            for j in range(N):
                index = i * N + j
                if i == 0 or i == N - 1 or j == 0 or j == N - 1:
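                    # Boundary node: an identity row, so the solve simply
                    # returns the boundary value supplied in the right-hand
                    # side vector u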
                    A[index, index] = 1
                else:
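                    # Interior node: the 5-point Laplacian stencil. With the
                    # row-major flattening index = i*N + j, the neighbours
                    # (i, j-1) and (i, j+1) sit at index-1 and index+1, while
                    # (i-1, j) and (i+1, j) sit at index-N and index+N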
                    A[index, index] = -4 / (h ** 2)
                    A[index, index - 1] = 1 / (h ** 2)
                    A[index, index + 1] = 1 / (h ** 2)
                    A[index, index - N] = 1 / (h ** 2)
                    A[index, index + N] = 1 / (h ** 2)

    A = A.tocsr()  # Convert to CSR format for an efficient sparse solve

    u = np.zeros(N ** 2)
    u[0:N] = 5  # Boundary values of 5 along the first edge; adjust the slice for a different edge

    u_der_1 = scipy.sparse.linalg.spsolve(A, u)
    return u_der_1.reshape(N, N)

trial1 = discretise_delta_u_v4(10, 'implicit')
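
A quick sanity check on the result (reusing the call above): because the boundary rows of A are identity rows, the boundary values placed in u should come back unchanged in the solution.

print(trial1.shape)                # (10, 10)
print(np.allclose(trial1[0], 5))   # True: the u[0:N] = 5 edge is preserved
print(np.allclose(trial1[-1], 0))  # True: the opposite edge stays at 0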