I understand why this error is occurring. I am using finite differences to find solutions on a numpy array of size n = 400, and the stencil in question uses the index i + 2, which for i in range(1, n - 1) reaches 400, one past the last valid index n - 1 = 399. I am, however, struggling to find a solution.
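A minimal illustration of the indexing problem, stripped of the physics (not taken from my script, just to show the failure mode):
import numpy as np

n = 400
tempf = np.zeros(n)   # valid indices run from 0 to n - 1 = 399
i = n - 2             # the last value produced by range(1, n - 1)
print(tempf[i + 2])   # IndexError: index 400 is out of bounds for axis 0 with size 400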
Code:
import numpy as np
import matplotlib.pyplot as plt
import copy
import sys
# Parameter Values
n = 400
r_max = 10
dr = r_max / n
r_min = dr
r = np.linspace(r_min, r_max, num=n)
#print(dr, r[1]-r[0])
# Imaginary Time Method
# Initial Variable Arrays
g0 = np.zeros(r.size)
fDerR = np.zeros(r.size)
fDerRR = np.zeros(r.size)
gDerR = np.zeros(r.size)
gDerRR = np.zeros(r.size)
k1 = np.zeros(r.size)
k2 = np.zeros(r.size)
k3 = np.zeros(r.size)
k4 = np.zeros(r.size)
j1 = np.zeros(r.size)
j2 = np.zeros(r.size)
j3 = np.zeros(r.size)
j4 = np.zeros(r.size)
TimeSteps = 5000
dt = 0.0005 #0.0005#0.004
deltaPlot = 1000 #1000#1600#200
# Winding Number
s = 3
# Interaction Strength
U_0 = 1
V_0 = 1
# Impurity Parameters
E = 1
N_I=1.
# Initial Conditions
# Pade' Approximation
a1=11./32.
a2=11./384.
b1=1./3.
b2=a2
f = np.sqrt((np.square(r)*(a1+a2*np.square(r)))/(1.+b1*np.square(r)+b2*np.power(r, 4)))
# Renormalised Initial Condition for Impurity so the mass is M
g = np.exp(-np.square(r)/(2.*np.square(0.5)))
g=np.sqrt(N_I/(2.*np.pi*np.trapz(np.multiply(np.square(g), r))))*g
# Preprare the plots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
label='tau=%f' % (0*dt)
ax1.plot(r, f, label='$\u03C4$=0')
ax1.set_xlabel('r', fontsize=16)
ax1.set_ylabel('f', fontsize=16)
ax1.legend()
#title = 's=%d, V_0=%f' %(s, V_0)
#ax1.set_title(title)
ax2.plot(r, g, label='$\u03C4$=0')
#massTitle='N_I=%f' % (N_I)
#ax2.set_title(massTitle)
ax2.set_xlabel('r', fontsize=16)
ax2.set_ylabel('g', fontsize=16)
#title = 'U_0=%f, E=%f' %(U_0, E)
#ax2.set_title(title)
#ax2.legend()
#plt.show()
# Method Algorithm
for l in range(1, TimeSteps+1):
    # Compute k1
    tempf = np.copy(f)
    tempg = np.copy(g)
    fDerR[0] = -0.5 * (tempf[2] - 0.) / (2 * dr) + 2 * (tempf[1] - 0.) / (2 * dr) # as it must be f(r=0)=0
    gDerR[0] = -0.5 * (tempg[2] - 0. ) / (dr) + 2 * (tempg[1] - tempg[0]) / dr
    for i in range(1, n-1):
        fDerR[i] = 2 * (tempf[i+1] - tempf[i-1]) / (3 * dr) + tempf[i - 2] / 12 * dr - \
                   tempf[i + 2] / 12 * dr
        gDerR[i] = 2 * (tempg[i+1] - tempg[i-1]) / (3 * dr) + tempg[i - 2] / 12 * dr - \
                   tempg[i + 2] / 12 * dr
    fDerR[n-1] = 1.5 * tempf[n-1] / dr - 2 * tempf[n-2] / dr + 0.5 * tempf[n-3] / dr
    gDerR[n-1] = 1.5 * tempg[n-1] / dr - 2 * tempg[n-2] / dr + 0.5 * tempg[n-3] / dr
    fDerRR[0] = - fDerR[3] / dr + 4 * fDerR[2] / dr + (- 5 * fDerR[1] - 2 * fDerR[0]) / dr
    gDerRR[0] = - gDerR[3] / dr + 4 * gDerR[2] / dr + (- 5 * gDerR[1] - 2 * gDerR[0]) / dr
    for i in range(1, n-1):
        fDerRR[i] = 4 * (fDerR[i+1] + fDerR[i-1]) / (3 * dr) - 2.5 * fDerR[i] / dr - \
                    + fDerR[i-2] / 12 * dr - fDerR[i+2] / 12 * dr
        gDerRR[i] = 4 * (gDerR[i+1] + gDerR[i-1]) / (3 * dr) - 2.5 * gDerR[i] / dr - \
                    gDerR[i - 2] / 12 * dr - gDerR[i+2] / 12 * dr
    fDerRR[n-1] = (-5 * fDerR[n-1] + 4 * fDerR[n-2] - fDerR[n-3]) / dr
    gDerRR[n-1] = (-5 * gDerR[n-1] + 4 * gDerR[n-2] - gDerR[n-3]) / dr
    k1 = 0.5*(fDerRR+fDerR/r-np.square(s/r)*tempf)+(E-V_0*np.square(tempf)-U_0*np.square(tempg))*tempf
    j1 = 0.5*(gDerRR+gDerR/r)-(U_0*np.square(tempf)-E)*tempg
I chose to use higher-accuracy finite-difference coefficients (second order for the forward and backward stencils, fourth order for the central stencils; see https://en.wikipedia.org/wiki/Finite_difference_coefficient) because, when I run this code with s = 3, the standard finite-difference scheme produces overflow issues, most likely due to the 1/r terms.
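For reference, the fourth-order central first-derivative stencil from that table, written in the same variables as my code, should be (if I am reading the table correctly):
# Fourth-order central difference for the first derivative at an interior point i;
# the same form applies to gDerR:
fDerR[i] = (tempf[i-2] - 8*tempf[i-1] + 8*tempf[i+1] - tempf[i+2]) / (12 * dr)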
The error occurs, as expected, at
fDerR[i] = 2 * (tempf[i+1] - tempf[i-1]) / (3 * dr) + tempf[i - 2] / 12 * dr - \
           tempf[i + 2] / 12 * dr
My first attempt at a fix was to use np.append() for the tempf[i + 2] term; however, this produced ValueError: setting an array element with a sequence on the same line. I may have implemented it incorrectly, since I used
np.append(tempf, tempf[i + 2] / 12 * dr)
on that line, and I now suspect the problem is that np.append() returns a whole new array, so the right-hand side assigned to fDerR[i] was no longer a scalar.
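To check how np.append() behaves, here is a standalone test (separate from my script):
import numpy as np

a = np.zeros(3)
b = np.append(a, 5.0)    # returns a NEW array; a itself is not modified
print(a.shape, b.shape)  # prints (3,) (4,)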
My current thought is to define a new array of size 401, copy the 400 entries of the existing array into it, and then add an extra entry at the end so that the (i + 2) index always exists, though I am unsure how to implement this, or whether it is even the right approach.
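Something like the following is what I have in mind, though I am not sure it is correct, or whether padding the extra entry with zero is justified at the outer boundary:
# Rough sketch of the padding idea (illustrative only):
tempf_pad = np.zeros(n + 1)   # one extra entry beyond the original 400
tempf_pad[:n] = tempf         # copy the existing values
# the derivative loop could then read tempf_pad[i + 2], which exists even when i = n - 2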