# Why is there a difference in the ODE solution when using an event function in R?

I am solving a complicated system of nonlinear ODEs in R as part of my current project. In a testing phase, I implemented the solution with the lsoda method in two ways: (a) using an event function and (b) without an event function. The two solutions produce slightly different results. I don't understand why this happens or how to minimize the error.

I tried the Lotka–Volterra equations with these two methods and am still wondering why the solutions differ (see attached plot). Although the difference is very small, I want to minimize it, since the error might become large in my project. As you can see, the event function does not change the dependent variables, yet it still produces a difference.

```r
rm(list = ls())

library(deSolve)

## Lotka-Volterra predator-prey model with logistic prey growth
predpreyLV <- function(t, y, p) {
  N <- y[1]
  P <- y[2]
  with(as.list(p), {
    dNdt <- r * N * (1 - N / 1000) - a * P * N
    dPdt <- -b * P + f * P * N
    list(c(dNdt, dPdt))
  })
}

## Event function that returns the state unchanged
eventFun <- function(t, y, p) {
  y
}

r <- 0.5; a <- 0.01; f <- 0.01; b <- 0.2
p <- c(r = r, a = a, b = b, f = f)
y0 <- c(N = 25, P = 5)
times <- seq(0, 1000, 0.1)

## Simulation without the event function
out1 <- ode(y = y0, times = times, func = predpreyLV, parms = p,
            method = "lsoda", atol = 1e-6, rtol = 1e-6)

## Simulation with the (state-preserving) event function
out2 <- ode(y = y0, times = times, func = predpreyLV, parms = p,
            method = "lsoda",
            events = list(func = eventFun, time = times),
            atol = 1e-6, rtol = 1e-6)

plot(out1[, 1], abs(out1[, 2] - out2[, 2]), type = "l",
     xlab = "Time", ylab = "Difference in N")
```


You have to take into account that, unless specifically indicated otherwise, all solvers use an adaptive step size. This means the solver takes internal steps that are usually not visible to the user and evaluates specific polynomial interpolations between the internal nodes to produce the user-requested values.

However, as events can change the state, the solver cannot simply proceed to the next internal node; it has to restart the solution process at the event time, with the new state returned by the event function as the initial state.

This has several effects. The last step before the event is cut short. The restart likely uses smaller, non-optimal step sizes. The initialization of a multi-step method such as lsoda uses a different startup procedure, probably also with non-typical step sizes. lsoda additionally adapts the order of the multi-step method; this order has to be determined empirically again, advancing the solver with below-optimal step sizes in the process.

All of this, while not changing the accuracy of the solution much, still leads to a different integration path in terms of step sizes and methods/orders. If the system has stiff regions, as can be the case for polynomial nonlinear right-hand sides, these small changes can be magnified to the observed scale.
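The restart effect can be reproduced without any event machinery at all. As a minimal sketch in Python/scipy (the same stack used for the experiment below), compare one continuous LSODA integration against an integration that is restarted at every output time with the unchanged state, which is essentially what a state-preserving event forces lsoda to do:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same Lotka-Volterra parameters as in the question
r, a, b, f = 0.5, 0.01, 0.2, 0.01

def lotka(t, y):
    N, P = y
    return [r * N * (1 - N / 1000) - a * P * N, -b * P + f * P * N]

t_eval = np.linspace(0, 100, 1001)  # shorter span than the question, for speed
y0 = [25.0, 5.0]

# (a) One continuous integration
sol = solve_ivp(lotka, (0, 100), y0, method="LSODA",
                t_eval=t_eval, atol=1e-6, rtol=1e-6)

# (b) Restart the solver at every output time with the unchanged state,
# mimicking an event function that returns y unmodified
y = np.array(y0)
ys = [y.copy()]
for t0, t1 in zip(t_eval[:-1], t_eval[1:]):
    step = solve_ivp(lotka, (t0, t1), y, method="LSODA",
                     atol=1e-6, rtol=1e-6)
    y = step.y[:, -1]
    ys.append(y.copy())
ys = np.array(ys).T

# Small but nonzero difference, purely from the restarts
print(np.max(np.abs(sol.y - ys)))
```

The two runs solve the identical problem with identical tolerances; the only difference is the repeated re-initialization, and that alone produces a discrepancy of the same character as the one in the plot.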

Still, relative to the maximum size 1000 of N, an error of 0.3 has a relative size of 3e-4, well within expectation for the given tolerances over such a long integration interval.

To simulate the effect of different integration control processes, I ran the given system (with python-scipy's solve_ivp, method "LSODA") for tolerance inputs tol in 1e-6, 1e-8, 1e-10, using absolute and relative tolerance parameters atol=1e-2*tol, rtol=tol. These were then compared to a "gold" solution produced with tolerance tol=1e-14. This gives the overlaid solution paths

where there are no visible differences in the curves. For the components themselves and their error, divided by the tolerance value, I get the plots

Again there are no visible differences in the overlay of the three solutions in the first plot. The scaled error plot shows large oscillations, corresponding to rapid changes in the solutions.
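The tolerance experiment described above can be sketched as follows (a shorter time span is used here, and the "gold" reference uses rtol=1e-13 rather than 1e-14 to stay safely above LSODA's machine-precision limit):

```python
import numpy as np
from scipy.integrate import solve_ivp

r, a, b, f = 0.5, 0.01, 0.2, 0.01

def lotka(t, y):
    N, P = y
    return [r * N * (1 - N / 1000) - a * P * N, -b * P + f * P * N]

t_eval = np.linspace(0, 100, 401)
y0 = [25.0, 5.0]

# "Gold" reference solution with very tight tolerances
gold = solve_ivp(lotka, (0, 100), y0, method="LSODA",
                 t_eval=t_eval, atol=1e-15, rtol=1e-13)

# Solutions for the three tolerance inputs, errors scaled by tol
scaled_errs = []
for tol in (1e-6, 1e-8, 1e-10):
    sol = solve_ivp(lotka, (0, 100), y0, method="LSODA",
                    t_eval=t_eval, atol=1e-2 * tol, rtol=tol)
    scaled_errs.append(np.max(np.abs(sol.y - gold.y)) / tol)
    print(f"tol={tol:g}: max scaled error = {scaled_errs[-1]:.3g}")
```

The absolute errors shrink with the tolerance; dividing by tol puts the three error curves on a comparable scale, which is what makes the oscillations at the rapid solution changes visible in the second plot.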