Allocate bigmemory 3D array


I want to use the mclapply function from the parallel package to calculate values and assign them to a 3D array. The computation is long, so I need to parallelize it, and afterwards I want to collect the results in a 3D array. My first attempt was to use <<-, like this:

library(parallel)

result_array <- array(NA, c(10, 10, 3))

grid_df <- base::expand.grid(1:dim(result_array)[1], 1:dim(result_array)[2])

mclapply(1:nrow(grid_df), mc.cores = 5, function(i){

    x_position <- grid_df$Var1[i]
    y_position <- grid_df$Var2[i]

    # do calculation
    calculation_result <- 1:3

    # write the result slice into the array in the parent environment
    result_array[x_position, y_position, ] <<- calculation_result
})

This does not work :), because the individual workers can't access the parent's copy of result_array. I know it is possible to have each worker return its result, collect everything in a data frame, and build the array from that afterwards. However, I have so many cells that, without the array structure, such a flat data frame would hit the maximum number of rows a data frame can hold.

I looked into data structures that can be accessed from within the parallel workers and found big.matrix from the bigmemory package. However, that is only a 2D matrix, not a 3D array. Is there something similar for a 3D array, or any other suggestion for how I could solve my dilemma? Two rough sketches of what I mean are below.
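To make the first alternative concrete, here is a rough sketch of what I mean by collecting the results first, reusing result_array and grid_df from above: mclapply already returns a list, so the parent process can fill the array from that list afterwards (the 1:3 is just a placeholder for the real calculation):

res_list <- mclapply(1:nrow(grid_df), mc.cores = 5, function(i){
    # do calculation (placeholder)
    1:3
})

# fill the array in the parent process from the returned list
for (i in seq_len(nrow(grid_df))) {
    result_array[grid_df$Var1[i], grid_df$Var2[i], ] <- res_list[[i]]
}

And this is roughly the kind of workaround I was wondering about with bigmemory: flattening the first two dimensions so the 3D array fits into a 2D big.matrix, writing into it from the workers, and reshaping at the end. This is only a sketch; I am assuming here that a big.matrix created in the parent is allocated in shared memory by default and remains visible to the forked workers after re-attaching it via its descriptor:

library(bigmemory)

nx <- 10; ny <- 10; nz <- 3

# one row per (x, y) cell, one column per z slice, so that
# element [x, y, z] maps to flat_mat[(y - 1) * nx + x, z]
flat_mat <- big.matrix(nrow = nx * ny, ncol = nz, type = "double", init = NA_real_)
desc <- describe(flat_mat)

mclapply(1:(nx * ny), mc.cores = 5, function(i){
    m <- attach.big.matrix(desc)  # re-attach the shared matrix inside the worker
    m[i, ] <- 1:3                 # do calculation (placeholder)
    NULL
})

# convert back to an ordinary 3D array in the parent afterwards
result_array <- array(flat_mat[, ], c(nx, ny, nz))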

Thanks a lot!
