Context:
- I'm developing an optimizer using SciPy's differential evolution routine. I get good results with workers=1, but I would like to speed up the runtime.
- I already checked this thread: How to enable parallel in scipy.optimize.differential_evolution? Even if I add
if __name__ == "__main__":
and set workers=-1, the runtime is exactly the same. I tested my code both on my local machine (2 physical / 4 logical processors) and on our server environment (16 cores). I also ran the use case from https://medium.com/@grvsinghal/speed-up-your-code-using-multiprocessing-in-python-36e4e703213e: there, changing the number of workers does impact the runtime, so parallel processing works on both my laptop and the server.
- Consequently, my hypothesis is that the way I defined my objective and constraint functions might be the problem.
Pseudo Code:
- The code is for work, so I can't share it. My objective and constraint functions need a lot of constants, so I wrapped them in a class. I know that the
args
parameter is meant for that, but the constraint function doesn't accept it. - The code structure looks like this:
class MyClass:
    def __init__(self, configuration, array1, array2, dataframe):
        # assign the constants as attributes, e.g.
        self.something = dataframe["column1"]
        ...

    def obj(self, x):
        # calculates the objective from the stored constants + the optimized parameters x

    def cons(self, x):
        # calculates the constraint violations from the stored constants + the optimized parameters x
Then I create a class instance, o = MyClass(), and call the differential evolution function with the bound method: differential_evolution(func=o.obj, ...).
Question:
- Has anyone faced this issue, i.e. the code running on a single core even though multiple workers are set?
- Do you have any suggestions on how to design the objective and constraint functions so that they are eligible for parallel processing?
Thank you!