I'm trying to solve a model in Python and to fit its unknown parameters to experimental data. The model consists of 2 ODEs that I solve with scipy.integrate.solve_ivp. Since the parameters of the model are unknown, I want to fit them using a range of methods. My first choice is differential_evolution, as it can give really good results for much more complex models (I used it earlier as part of another package). The problem, however, is that when I give differential_evolution my model (I want it to find the global minimum of the least-squares difference between calculated and experimental points), it finds parameters at which the model becomes unstable (LSODA cannot integrate it because the internal step size shrinks to 0). I tried to catch the runtime warnings that LSODA throws in these circumstances, but that did not help. What would be the optimal way to solve this issue?

My model and my code are below:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution


def aggregation_model(time, z, k1, k2, k3, k_1, k_2):
    GPVI_0 = 55.5  # total GPVI; matches the initial condition used below
    GPVI, GPVI_Clust = z

    dGPVIdt = - k1 * GPVI_Clust * GPVI + k_1 * (GPVI_0 - GPVI) - 2 * k2 * GPVI * GPVI
    dGPVI_Clustdt = - (k_2 * GPVI_Clust + k3) * GPVI_Clust + k2 * GPVI * GPVI

    return [dGPVIdt, dGPVI_Clustdt]


def res_squares(parameters):

    time = np.linspace(0, 300, 1000)

    # experimental time points and the values measured at them
    timepoints = [0, 25, 50, 75, 100, 125, 150, 300]
    val = [0, 2, 5, 10, 15, 20, 20, 18]

    model_calc = solve_ivp(aggregation_model, [0, 300], [55.5, 0], args=parameters, max_step=100000,
                           dense_output=True, method='LSODA', rtol=1e-12, atol=1e-6)

    # evaluate the dense output on the fine time grid; row 0 is GPVI, row 1 is GPVI_Clust
    GPVI, GPVI_Clust = model_calc.sol(time)

    # size measure: GPVI consumed per cluster (the first point stays 0 to avoid 0/0 at t = 0)
    size = [0, ]
    for i in range(1, len(time)):
        size.append((55.5 - GPVI[i]) / GPVI_Clust[i])

    # squared residuals at the grid points closest to the experimental time points
    diff = []
    for i in range(len(timepoints)):
        indexval = min(range(len(time)), key=lambda j: abs(time[j] - timepoints[i]))
        diff.append((val[i] - size[indexval]) ** 2)

    # print(np.sqrt(np.sum(diff)))

    return np.sum(diff)

if __name__ == '__main__':

    parameterBounds = []
    parameterBounds.append([-1000000, 1000000])  # parameter bounds for k1
    parameterBounds.append([-1000000, 1000000])  # parameter bounds for k2
    parameterBounds.append([-1000000, 1000000])  # parameter bounds for k3
    parameterBounds.append([-1000000, 1000000])  # parameter bounds for k_1
    parameterBounds.append([-1000000, 1000000])  # parameter bounds for k_2

    result = differential_evolution(res_squares, parameterBounds, seed=3, disp=True, init='random')
    print(result.x)

    # re-solve with the fitted parameters
    sol = solve_ivp(aggregation_model, [0, 300], [55.5, 0], args=result.x,
                    dense_output=True, method='LSODA', rtol=1e-6, atol=1e-12)

1 Answer

Answer by astoeriko:

There are several ways to go about this and probably a combination of them is necessary to get a stable solution:

  1. Avoid parameter combinations that lead to unstable numerical behaviour. The ranges you currently set for the parameters are very broad. I am sure you have at least a rough idea of the order of magnitude of the parameters, so you can constrain the parameter space somewhat more. If you feel uncomfortable with setting hard bounds, I would suggest taking a Bayesian approach and setting prior distributions that do not exclude very large or small values completely but do make them less likely. (If you go the latter way, I can recommend the PyMC3 package for building a Bayesian model, in combination with sunode for the ODE integration.)
  2. Try to make the solution more numerically stable. What has helped me a lot in similar situations is a log-transform of the ODE. That means, instead of solving dy/dt = f(y), you solve dx/dt = exp(-x) * f(exp(x)) where x = log(y). (This can be derived by applying the chain rule of differentiation to d log(y)/dt.) A sketch of what this could look like for your model is shown after this list.
  3. One problem in your case seems to be that the solver does not stop and throw an error when it fails to integrate further. Instead it seems to keep trying the next integration step forever (?). Using the min_step argument of the LSODA solver might help to resolve that. Setting min_step=1e-14 at least gets the optimizer through the first 11 iterations of the differential_evolution algorithm. After that, neither the solver nor the optimizer print any messages, so I am not sure what is happening.
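
For point 2, here is a minimal sketch of what the log-transform could look like for your model. The function name, the placeholder rate constants and the tiny starting value for GPVI_Clust (its true initial value is 0, which has no logarithm) are my own assumptions, and the solve_ivp call also passes the min_step value from point 3:

import numpy as np
from scipy.integrate import solve_ivp

def log_aggregation_model(time, x, k1, k2, k3, k_1, k_2):
    # x holds the log-transformed states, x = log([GPVI, GPVI_Clust])
    GPVI, GPVI_Clust = np.exp(x)
    GPVI_0 = 55.5

    # original right-hand side f(y), unchanged from the question
    dGPVIdt = - k1 * GPVI_Clust * GPVI + k_1 * (GPVI_0 - GPVI) - 2 * k2 * GPVI * GPVI
    dGPVI_Clustdt = - (k_2 * GPVI_Clust + k3) * GPVI_Clust + k2 * GPVI * GPVI

    # chain rule: d(log y)/dt = (dy/dt) / y = exp(-x) * f(exp(x))
    return [dGPVIdt / GPVI, dGPVI_Clustdt / GPVI_Clust]

# GPVI_Clust cannot start at exactly 0 in log space, so a tiny positive value is used instead
x0 = np.log([55.5, 1e-10])
rates = (1e-3, 1e-3, 1e-3, 1e-3, 1e-3)  # placeholder values purely for illustration
sol = solve_ivp(log_aggregation_model, [0, 300], x0, args=rates,
                method='LSODA', min_step=1e-14, dense_output=True)
GPVI, GPVI_Clust = np.exp(sol.y)  # transform back to the original variables

Since the transformed states y = exp(x) are strictly positive by construction, this also keeps the solver from stepping into negative concentrations, which is often where the instability comes from.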

By the way, you might want to pass t_eval=timepoints to solve_ivp so that the solver already returns the solution at the timepoints you need. That will spare you the interpolation that you currently do.
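
For illustration, here is a rough sketch of what res_squares could look like with t_eval. It reuses aggregation_model from the question and still assumes that the integration succeeds (which is exactly the issue the points above address):

import numpy as np
from scipy.integrate import solve_ivp

def res_squares(parameters):
    timepoints = [0, 25, 50, 75, 100, 125, 150, 300]
    val = np.array([0, 2, 5, 10, 15, 20, 20, 18])

    # the solver returns the solution directly at the experimental time points
    model_calc = solve_ivp(aggregation_model, [0, 300], [55.5, 0], args=parameters,
                           t_eval=timepoints, method='LSODA', rtol=1e-12, atol=1e-6)

    GPVI, GPVI_Clust = model_calc.y  # one column per entry of timepoints

    # same size measure as in the question; the first point stays 0 to avoid 0/0 at t = 0
    size = np.zeros(len(timepoints))
    size[1:] = (55.5 - GPVI[1:]) / GPVI_Clust[1:]

    return np.sum((val - size) ** 2)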