Julia parallel programming - Making an existing function available to all workers


I am faced with the following problem:

I have a function called TrainModel that runs for a very long time on a single thread. When it finishes computing, it returns a function as an output argument; let's call it f. When I enquire about the type of this f, Julia returns:

(generic function with 1 method)

(I am not sure if this last piece of information is useful to anyone reading this.)

Now, in a second step, I need to apply the function f to a very large array of values. This is a step that I would like to parallelise. Having started Julia with multiple processes, e.g.

julia -p 4

ideally, I would use:

pmap(f, my_values)

or perhaps:

aux = @parallel (hcat) for ii=1:100000000
        f(my_values[ii])
      end

Unfortunately, this doesn't work. Julia complains that the workers are not aware of the function f, i.e. I get the message:

ERROR: function f not defined on process 2

How can I make the function f available to all workers? Obviously, a "dirty" solution would be to run the time-consuming function TrainModel on all workers, perhaps like this:

@everywhere f = TrainModel( ... )

but this would be a waste of CPU when all I want is for the result f to be available to all workers.

Though I have searched for posts with similar problems, so far I have not found an answer.

Thanks in advance! Best,

N.


There are 2 answers

drpetermolnar (BEST ANSWER)

The approach of returning the function seems elegant, but unfortunately, unlike JavaScript, Julia does not resolve all the variables when creating the function. Technically, your training function could produce the source code of the function with literal values for all the trained parameters. You could then pass that source to each of the worker processes, which can parse and evaluate it in their environment to obtain a callable function.
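A minimal sketch of that idea, assuming the trained model boils down to a line a*x + b (the names a, b, fsrc, and f are illustrative, and the pid-first remotecall_fetch signature matches the old Julia of the question):

a, b = 2.0, 3.0                     # stand-ins for the trained parameters

fsrc = "f(x) = $a * x + $b"         # source code with literal parameter values baked in

eval(parse(fsrc))                   # define f on the master process
for p in workers()
    remotecall_fetch(p, eval, parse(fsrc))  # ... and on every worker
end

pmap(f, my_values)                  # now every worker knows f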

Instead, I suggest returning a data structure that contains all the information needed to produce the trained function: the weights of an ANN, support vectors, decision rules, and so on. Define the "trained" function on the worker processes so that it utilizes those trained parameters. You might want the ability to save the results of the training to disk anyway, so that you can easily reproduce your computations. A sketch of this approach follows below.
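For example, a minimal sketch assuming a simple linear model (predict, w, and b are illustrative names, not from the original post):

@everywhere predict(w, b, x) = w * x + b   # the "trained" function, defined on every process

w, b = 2.0, 3.0         # suppose TrainModel returns plain parameters like these
my_values = rand(1000)

# the anonymous function captures only plain data, which Julia can serialize
# to the workers, unlike the named function f from the question
results = pmap(x -> predict(w, b, x), my_values)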

konkam

There is a Unix-only solution based on the PTools.jl package (https://github.com/amitmurthy/PTools.jl).

It relies on parallelism via forking instead of Julia's built-in mechanism. Forked processes are spawned with the same workspace as the main process, so all functions and variables are directly available to the workers.

This is similar to the fork clusters in R's parallel package, so pfork can be used like R's mclapply function.

The function of interest is pfork(n::Integer, f::Function, args...), and one notable difference from mclapply in R is that the function f must take the index of the worker as its first argument.

An example:

Pkg.add("PTools")
Pkg.checkout("PTools") # to get the latest version; otherwise the package does not build at the time of writing

using PTools
f(workid, x) = x[workid] + 1
pfork(3, f, [1,2,3,4,5]) # Only the first three elements of the array will be computed

3-element Array{Any,1}:
 2
 3
 4

I expect that an interface to pfork will be built so that the first argument of the function will not need to be the index of the worker, but for the time being it can be used to solve the problem.
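For instance, a hypothetical wrapper along these lines (pforkmap is an invented name, not part of PTools.jl) would hide the worker index and give an mclapply-style interface:

using PTools

# map f over v by forking one process per element; the inner anonymous
# function translates the worker index into an element of v
pforkmap(f::Function, v::AbstractVector) =
    pfork(length(v), (workid, arr) -> f(arr[workid]), v)

pforkmap(x -> x + 1, [1, 2, 3])   # => Any[2, 3, 4], as in the example above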