I'm having trouble coming up with an efficient way to aggregate a high-resolution numpy array into a coarse array, based on row and column mapping keys that give, for each cell of the fine resolution, the corresponding row and column numbers in the coarse resolution. Below is a very simplified toy version of the problem. Note: there is no mathematical pattern in the keys that might simplify the transformation.

#Fine resolution value array (6 by 6):


 import numpy as np

 FIN=np.array([[44., 92., 65., 38., 44., 53.],
               [ 3., 24., 33., 60., 74., 55.],
               [89.,  1.,  9., 16., 79., 22.],
               [55., 69., 37., 97., 55., 89.],
               [ 4., 35., 81., 81.,  2., 20.],
               [63.,  6., 16., 59., 14., 37.]])

#Coarse resolution value array (3 by 3):

 COR=np.array([[0., 0., 0.],
               [0., 0., 0.],
               [0., 0., 0.]])

#ROWS (row number of the coarse resolution):
 ROWS=np.array([[0, 0, 0, 0, 1, 1],
                [0, 0, 0, 0, 1, 1],
                [1, 1, 1, 1, 1, 1],
                [1, 1, 1, 1, 1, 1],
                [2, 2, 2, 2, 2, 2],
                [2, 2, 2, 2, 2, 2]])

#COLS (col number of the coarse resolution):
 COLS=np.array([[0, 0, 1, 1, 2, 2],
                [0, 0, 1, 1, 2, 2],
                [0, 0, 1, 1, 2, 2],
                [0, 0, 1, 1, 2, 2],
                [1, 1, 1, 1, 2, 2],
                [1, 1, 1, 1, 2, 2]])

#My brute force solution is to loop over the rows and columns of the coarse array as follows:

    for i in range(COR.shape[0]):
        for j in range(COR.shape[1]):
            # Average the fine cells whose keys map to coarse cell (i, j).
            COR[i, j] = np.mean(FIN[np.logical_and(ROWS == i, COLS == j)])

This becomes very slow when the arrays get large, since the whole fine array is re-scanned for every coarse cell. Is there a more efficient way to aggregate based on the row/col mapping keys?
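
One vectorized alternative I can think of (a minimal sketch, assuming ROWS and COLS are non-negative integer arrays; aggregate_mean is just an illustrative helper name): flatten the (row, col) keys into a single linear index, then let np.bincount compute the per-cell sums and counts in one pass over the fine array. This relies only on the keys being valid indices, not on any pattern in them.

    import numpy as np

    def aggregate_mean(fin, rows, cols):
        # Hypothetical helper: mean-aggregate fin onto the coarse grid
        # addressed by the per-cell indices in rows/cols.
        nrow = rows.max() + 1  # number of coarse rows
        ncol = cols.max() + 1  # number of coarse columns
        # Combine the two keys into one linear index per fine cell.
        keys = (rows * ncol + cols).ravel()
        # One pass for the sums, one for the counts.
        sums = np.bincount(keys, weights=fin.ravel(), minlength=nrow * ncol)
        counts = np.bincount(keys, minlength=nrow * ncol)
        # Coarse cells that receive no fine cells come out as NaN,
        # matching np.mean over an empty selection.
        with np.errstate(invalid="ignore", divide="ignore"):
            means = sums / counts
        return means.reshape(nrow, ncol)

    COR = aggregate_mean(FIN, ROWS, COLS)

This does constant work per fine cell instead of re-scanning the whole fine array once per coarse cell, so it should scale roughly linearly with the number of fine cells. Note that in the toy keys above, some coarse cells (e.g. (0, 2)) receive no fine cells at all, so both the brute-force loop and this sketch produce NaN there. scipy.ndimage.mean with the combined keys as labels, or a pandas groupby over the raveled keys, should give equivalent results, though I have not benchmarked them.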
