How to handle a nondeterministic objective function with scipy.optimize.differential_evolution?

I'm using the differential evolution solver to calibrate/optimize the parameters of a simulation. Since I have lots of data ("scenarios") and the simulation is computationally expensive, I want to keep the number of simulation runs low. Therefore, in each iteration I randomly select a few scenarios from the database, simulate them with the current parameter vector, and evaluate the similarity between the simulation and the real-world recordings. This similarity value is the objective value of the optimization; it depends on the selected scenarios (in addition to the parameter vector, of course).
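For context, here is a minimal sketch of what such a noisy objective looks like; scenario_db, simulate, and similarity are hypothetical stand-ins for my actual scenario database, the expensive simulation, and the similarity metric:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real scenario database, the expensive
# simulation, and the similarity metric.
scenario_db = [rng.normal(size=10) for _ in range(100)]

def simulate(params, scenario):
    # placeholder for the computationally expensive simulation
    return params[0] * scenario + params[1]

def similarity(simulated, recorded):
    # placeholder metric (mean squared error; lower = more similar)
    return np.mean((simulated - recorded) ** 2)

def objective(params, n_scenarios=5):
    # A fresh random subset of scenarios is drawn on every call, so the
    # returned value depends on the sample as well as on params.
    idx = rng.choice(len(scenario_db), size=n_scenarios, replace=False)
    return np.mean([similarity(simulate(params, scenario_db[i]), scenario_db[i])
                    for i in idx])

result = differential_evolution(objective, bounds=[(-1, 1), (-1, 1)], maxiter=20)
```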

Thus, when switching to the next iteration, the objective value might become worse even though the parameter vector itself has been improved (and vice versa).

The problem is the _accept_trial method, which only accepts a trial vector if it lowers the energy. While this makes sense for deterministic objective functions, it doesn't work in the nondeterministic case.

So far, I have tested two approaches:

  1. In the callback function, recompute the population energies for the newly selected scenarios. This way, the comparison of whether the objective value has improved works correctly. However, it doubles the computational effort, as each scenario must be simulated twice (once with the old and once with the updated parameter vector).
  2. Replacing the comparison energy_trial <= energy_orig in the _accept_trial method (see above) with True, so that every trial is accepted (see the sketch after this list). Of course, this also slows down convergence.
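
For reference, approach 2 can be sketched roughly like this, by subclassing DifferentialEvolutionSolver and overriding the private _accept_trial hook instead of editing SciPy. This relies on the private module path scipy.optimize._differentialevolution and on the static-method signature found in recent SciPy versions, both of which may differ between releases:

```python
from scipy.optimize._differentialevolution import DifferentialEvolutionSolver

class AlwaysAcceptSolver(DifferentialEvolutionSolver):
    # Override the private acceptance test so that every trial vector
    # replaces its parent, regardless of its energy.
    @staticmethod
    def _accept_trial(energy_trial, feasible_trial, cv_trial,
                      energy_orig, feasible_orig, cv_orig):
        return True

# `objective` is the noisy objective from the sketch above.
solver = AlwaysAcceptSolver(objective, bounds=[(-1, 1), (-1, 1)],
                            maxiter=20, polish=False)
result = solver.solve()
```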

Is there another solution that would be better suited to this nondeterministic objective function problem? Is there a way to skip the _accept_trial method without forking SciPy?

Thanks and best regards!
