Same optimization code, different results on different computers


I am running nested optimization code.

sp.optimize.minimize(fun=A, x0=D, method="SLSQP", bounds=(E), constraints=({'type':'eq','fun':constrains}), options={'disp': True, 'maxiter':100, 'ftol':1e-05})

sp.optimize.minimize(fun=B, x0=C, method="Nelder-Mead", options={'disp': True})

The first minimization is part of the function B, so it effectively runs inside the second minimization.

The whole optimization is driven by the data; there are no random numbers involved.

I run exactly the same code on two different computers and get totally different results.

I have different versions of Anaconda installed, but scipy, numpy, and all the other packages used are the same versions on both machines.

I don't really think the OS should matter, but one machine runs Windows 10 (64-bit) and the other Windows 8.1 (64-bit).

I am trying to figure out what might be causing this.

Even though I did not specify every option, if two computers run the same code, shouldn't the results be the same?

Or are there any sp.optimize options whose default values differ from computer to computer?

PS. I was looking at the option "eps". Is it possible that the default value of "eps" is different on these two computers?
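One way to rule "eps" out is to pin the finite-difference step explicitly instead of relying on the library default. A minimal sketch, using a hypothetical stand-in objective since the real fun=A isn't shown:

```python
import scipy as sp
import scipy.optimize

# Hypothetical 1-D objective standing in for the real fun=A.
def A(x):
    return (x[0] - 2.0) ** 2

# Passing 'eps' explicitly guarantees both machines use the same
# finite-difference step for the SLSQP gradient approximation.
res = sp.optimize.minimize(
    fun=A, x0=[0.0], method="SLSQP",
    options={'ftol': 1e-05, 'maxiter': 100, 'eps': 1.4901161193847656e-08},
)
print(res.x)
```

If the two machines still disagree with "eps" pinned, the default step size was not the cause.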


There are 2 answers

zimmerrol (best answer)

You should never expect numerical methods to perform identically on different devices, or even across different runs of the same code on the same device. Because of the machine's finite precision you can never compute the "real" result, only numerical approximations, and over a long optimization run these tiny differences can accumulate.
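The root cause is easy to demonstrate: floating-point addition is not associative, so the same mathematical sum can differ in its last bits depending on evaluation order (which can vary with compiler, SIMD instructions, or library version):

```python
# The same three numbers summed in two orders give different
# floating-point results, because rounding happens after each step.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)        # False
print(abs(a - b))    # on the order of machine epsilon
```

A difference of one ulp per operation is harmless in isolation, but an iterative optimizer can amplify it into a different sequence of steps and, eventually, a different answer.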

Furthermore, some optimization methods use some kind of randomness internally to avoid getting stuck in local minima: they add a small, almost vanishing noise to the previously calculated solution so that the algorithm converges faster to the global minimum instead of getting trapped in a local minimum or a saddle point.

Can you try to plot the landscape of the function you want to minimize? This can help you analyze the problem: if both results (one from each machine) are local minima, then this behaviour is explained by the description above.
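A minimal sketch of such a landscape check, using a hypothetical two-variable objective B since the real one isn't shown (with matplotlib installed, the commented lines render it as a contour plot):

```python
import numpy as np

# Hypothetical stand-in for the objective B; replace with the real one.
def B(x, y):
    return np.sin(x) * np.cos(y) + 0.1 * (x**2 + y**2)

# Evaluate the objective on a grid around the region of interest.
xs = np.linspace(-3.0, 3.0, 200)
ys = np.linspace(-3.0, 3.0, 200)
X, Y = np.meshgrid(xs, ys)
Z = B(X, Y)

# To visualize:
#   import matplotlib.pyplot as plt
#   plt.contourf(X, Y, Z, levels=50); plt.colorbar(); plt.show()
print(Z.shape)
```

If the plot shows several basins and each machine's result sits at the bottom of a different one, both answers are legitimate local minima.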

If this is not the case, you should check the version of scipy installed on both machines. Could you also be implicitly using single-precision floats on one device and doubles on the other?

You see: there are many possible explanations for this (at first glance) strange numerical behaviour; you will have to give us more details to pin it down.

Eric Saund

I found that different versions of SciPy do or do not allow a parameter's minimum and maximum bounds to be equal. For example, in SciPy version 1.5.4, a parameter with equal min and max bounds drives that term's Jacobian to nan, which brings the minimization to a premature stop.
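A small probe, run on each machine, shows how the locally installed SciPy handles an equal-bounds parameter (the second variable below is pinned to 3.0); the objective here is a made-up quadratic just for illustration:

```python
import scipy
import scipy.optimize as opt

# Toy objective; the real behaviour of interest is in the bounds handling.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Second parameter has equal lower and upper bounds.
res = opt.minimize(f, x0=[0.0, 3.0], method="SLSQP",
                   bounds=[(-5.0, 5.0), (3.0, 3.0)])
print(scipy.__version__, res.success, res.x)
```

If one machine reports success and the other stops early (or reports nan), a bounds-handling difference between SciPy versions is the likely culprit.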