Python - faster alternative to 'for' loops


I am trying to construct a binomial lattice model in Python. The idea is that there are multiple binomial lattices and, based on the value in a particular lattice, a series of operations is performed in the other lattices. These operations are similar to an option pricing model (compare the Black-Scholes model) in that the calculations start at the last column of the lattice and are iterated back one column at a time. For example, if I have a binomial lattice with n columns:

1. I calculate the values in the nth column for one or more lattices.
2. Based on these values, I update the values in the (n-1)th column of the same or other binomial lattices.
3. This process continues until I reach the first column.

So in short, I cannot process the calculations for all of the lattice columns simultaneously, since the value in each column depends on the values in the next column.
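To make the dependency concrete, here is a toy sketch (not my actual model): columns have to be processed one after another, but every node within a column can be updated at once with array slicing.

```python
import numpy as np

# Toy backward induction on a single lattice: column j depends only on
# column j+1, so columns are processed sequentially, but all nodes in a
# column are updated in one vectorised step.
n = 5
V = np.zeros((n, n))
V[:, -1] = np.arange(n)  # illustrative terminal payoffs

for j in range(n - 2, -1, -1):
    # each node is the average of its two successors in the next column
    V[:-1, j] = 0.5 * (V[:-1, j + 1] + V[1:, j + 1])

print(V[0, 0])  # → 2.0, the expected terminal payoff under p = 0.5
```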

From a coding perspective, I have written a function that does the calculations for a particular column of a lattice and outputs the numbers that are used as input for the next column in the process.

def column_calc(StockPrices_col, ConvertProb_col, y_col, ContinuationValue_col,
                ConversionValue_col, coupon_dates_index, convert_dates_index,
                call_dates_index, put_dates_index, ConvertProb_col_new,
                ContinuationValue_col_new, y_col_new, tau, r, cs, dt,
                call_trigger, putPrice, callPrice):
    # Note: n, Principal and c are taken from the enclosing scope.
    for k in range(1, n+1-tau):

        ConvertProb_col_new[n-k] = 0.5*(ConvertProb_col[n-1-k] + ConvertProb_col[n-k])

        y_col_new[n-k] = ConvertProb_col_new[n-k]*r + (1 - ConvertProb_col_new[n-k])*(r + cs)

        # Calculate the holding value
        ContinuationValue_col_new[n-k] = 0.5*(ContinuationValue_col[n-1-k]/(1 + y_col[n-1-k]*dt)
                                              + ContinuationValue_col[n-k]/(1 + y_col[n-k]*dt))

        # Coupon payment date
        if np.isin(n-1-tau, coupon_dates_index):
            ContinuationValue_col_new[n-k] += Principal*(c/2)

        # check put/call/conversion schedule
        callflag = np.isin(n-1-tau, call_dates_index) & (StockPrices_col[n-k] >= call_trigger)
        putflag = np.isin(n-1-tau, put_dates_index)
        convertflag = np.isin(n-1-tau, convert_dates_index)

        if callflag:
            # t is a call date and the call is triggered
            node_val = max(putPrice*putflag,
                           ConversionValue_col[n-k]*convertflag,
                           min(callPrice, ContinuationValue_col_new[n-k]))
        else:
            # t is not a call date
            node_val = max(putPrice*putflag,
                           ConversionValue_col[n-k]*convertflag,
                           ContinuationValue_col_new[n-k])

        # 1. if conversion happens
        if node_val == ConversionValue_col[n-k]*convertflag:
            ContinuationValue_col_new[n-k] = node_val
            ConvertProb_col_new[n-k] = 1

        # 2. if put happens
        elif node_val == putPrice*putflag:
            ContinuationValue_col_new[n-k] = node_val
            ConvertProb_col_new[n-k] = 0

        # 3. if call happens
        elif node_val == callPrice*callflag:
            ContinuationValue_col_new[n-k] = node_val
            ConvertProb_col_new[n-k] = 0

        else:
            ContinuationValue_col_new[n-k] = node_val

    return ConvertProb_col_new, ContinuationValue_col_new, y_col_new
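Because each iteration of the k loop reads only from the previous column's arrays and writes only to the new ones, at least the arithmetic part of the loop body collapses into slice operations. A hedged sketch (`column_calc_core` is a hypothetical helper covering only the first three recurrences, not the coupon or call/put/convert branches; it assumes 1-D arrays and the same index convention as above):

```python
import numpy as np

def column_calc_core(ConvertProb_col, y_col, ContinuationValue_col,
                     tau, n, r, cs, dt):
    """Vectorised sketch of the first three recurrences of column_calc."""
    ConvertProb_new = np.empty(n)
    y_new = np.empty(n)
    Continuation_new = np.empty(n)

    # For k in 1..n-tau the index n-k runs over tau..n-1, so each
    # recurrence becomes a single slice assignment.
    ConvertProb_new[tau:] = 0.5 * (ConvertProb_col[tau-1:n-1]
                                   + ConvertProb_col[tau:n])
    y_new[tau:] = (ConvertProb_new[tau:] * r
                   + (1 - ConvertProb_new[tau:]) * (r + cs))
    Continuation_new[tau:] = 0.5 * (
        ContinuationValue_col[tau-1:n-1] / (1 + y_col[tau-1:n-1] * dt)
        + ContinuationValue_col[tau:n] / (1 + y_col[tau:n] * dt))
    return ConvertProb_new, y_new, Continuation_new
```

The branch logic could in principle be vectorised too: `putflag`, `convertflag` and the coupon test are scalars for a given column, and the elementwise comparisons and maxima map onto `np.where` and `np.maximum`.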

I call this function for every column in the lattice through a for loop, so essentially I am running a nested for loop for all the calculations.

My issue is that this is very slow. A single call to the function doesn't take much time, but the outer for loop calls it roughly 1,000 to 1,500 times on average, and the complete model takes almost 2.5 minutes to run, which is very slow by standard modelling standards. Most of that time is spent in the nested for loop shown below:

temp_mat = np.full((n, 3), np.nan)
temp_mat[:, 0] = ConvertProb[:, n-1]
temp_mat[:, 1] = ContinuationValue[:, n-1]
temp_mat[:, 2] = y[:, n-1]

ConvertProb_col_new = np.full((n, 1), np.nan)
ContinuationValue_col_new = np.full((n, 1), np.nan)
y_col_new = np.full((n, 1), np.nan)


for tau in range(1,n):    

    ConvertProb_col = temp_mat[:,0]
    ContinuationValue_col = temp_mat[:,1]
    y_col = temp_mat[:,2]

    ConversionValue_col = ConversionValue[:, n-tau-1]
    StockPrices_col = StockPrices[:, n-tau-1]

    out = column_calc(StockPrices_col, ConvertProb_col, y_col, ContinuationValue_col,
                      ConversionValue_col, coupon_dates_index, convert_dates_index,
                      call_dates_index, put_dates_index, ConvertProb_col_new,
                      ContinuationValue_col_new, y_col_new, tau, r, cs, dt,
                      call_trigger, putPrice, callPrice)

    temp_mat[:, 0] = out[0].ravel()
    temp_mat[:, 1] = out[1].ravel()
    temp_mat[:, 2] = out[2].ravel()

#Final value
print(temp_mat[-1][1])

Is there any way I can reduce the time spent in the nested for loop, or is there an alternative I can use instead? Please let me know. Thanks a lot!
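(One alternative I have seen suggested, in case vectorising is awkward because of the branch logic, is JIT-compiling the loops with numba; this assumes numba is installed, and the sketch below falls back to plain Python if it is not. The function is a toy with the same nested-loop shape as the model, not the model itself.)

```python
import numpy as np

try:
    from numba import njit  # optional: JIT-compile the nested loops
except ImportError:
    def njit(f):            # fallback so the sketch still runs without numba
        return f

@njit
def backward_sweep(terminal):
    """Toy backward induction with the same nested-loop shape as the model:
    an outer loop over columns, an inner loop over nodes in a column."""
    n = terminal.shape[0]
    col = terminal.copy()
    for tau in range(1, n):
        for k in range(1, n + 1 - tau):
            # col[n-1-k] still holds its pre-update value here, so the
            # in-place sweep matches the two-array version above
            col[n - k] = 0.5 * (col[n - 1 - k] + col[n - k])
    return col[-1]

print(backward_sweep(np.arange(5.0)))  # → 2.0
```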
