I have a Python package that defines an ODE y'=f(y) corresponding to the semi-discretisation in space of a heat-like equation.
I want to test some of the integrators available in the DifferentialEquations package from Julia, without rewriting everything in Julia, a language I do not know very well. I therefore turned to the diffeqpy package, which makes these solvers usable directly from Python.
The PDE I solve is discretised in space with a few dozen to a few hundred cells, and I need to run many simulations for parametric studies, in particular with IMEX methods.
The issue is that I cannot find a way to give implicit algorithms a hint about the sparsity pattern of the Jacobian of f. In my current implementation, they assume it is dense and therefore waste a lot of time computing it by finite differences, without coloring or any similar sparsity-based optimisation.
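(To illustrate what coloring buys: in a banded Jacobian, columns far enough apart never share a row, so they can be perturbed simultaneously and the finite-difference Jacobian needs only a handful of evaluations of f instead of n. A toy sketch for a tridiagonal stencil, purely to show the idea, not how DifferentialEquations does it internally:)

```python
import numpy as np

n = 10

def f(u):
    # Toy tridiagonal right-hand side (1-D Laplacian-like stencil)
    du = -2.0 * u
    du[:-1] += u[1:]
    du[1:] += u[:-1]
    return du

u = np.linspace(0.1, 10, n)
eps = 1e-7
f0 = f(u)
J = np.zeros((n, n))

# Tridiagonal pattern: columns j, j+3, j+6, ... never touch the same row,
# so 3 colors (hence 3 perturbed evaluations of f) suffice instead of n.
ncolors = 3
for c in range(ncolors):
    d = np.zeros(n)
    d[c::ncolors] = eps                 # perturb all columns of one color at once
    col = (f(u + d) - f0) / eps
    for j in range(c, n, ncolors):
        for i in range(max(0, j - 1), min(n, j + 2)):  # band: rows j-1..j+1
            J[i, j] = col[i]

print(round(J[0, 0]), round(J[0, 1]))   # -2 1 for this stencil
```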
Therefore, I have two questions:
- If I have a Python function that generates a (possibly) sparse Jacobian for f, how can I plug it into the diffeqpy package?
- In most cases, my Jacobian is pentadiagonal, or at least its bandwidth is known. Is there an easy way to pass that information to Julia via Python?
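To be concrete, the sparsity pattern I would like to communicate looks like this in scipy (a sketch; only the nonzero structure matters, so the stored entries are just ones, and n and the bandwidth are placeholders):

```python
from scipy.sparse import diags

n = 10          # number of cells (placeholder)
bandwidth = 2   # pentadiagonal: two off-diagonals on each side of the main one

# Only the pattern matters, so every stored entry is 1.0.
offsets = list(range(-bandwidth, bandwidth + 1))
jac_pattern = diags([1.0] * len(offsets), offsets=offsets,
                    shape=(n, n), format='csc')
print(jac_pattern.nnz)  # 5*n - 6 = 44 stored entries for n = 10
```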
Here is a small example that can serve as a quick test platform for potential solutions:
from julia.api import Julia
jl = Julia(compiled_modules=False)
from diffeqpy import de
import numpy as np
from scipy.sparse import identity
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
n = 10 # dimension
atol = 1e-7
rtol = 1e-6
tspan = (0., 1.)
u0 = np.linspace(0.1, 10, n)
def f(u, p, t):
    return -2 * u

def df(u, p, t):
    # return -2. * np.eye(n)
    return -2. * identity(n)
prob = de.ODEProblem(f, u0, tspan) # works
# prob = de.ODEProblem(f, u0, tspan, jac=df) # fails
sol = de.solve(prob, alg=de.RadauIIA5(autodiff=False), abstol=atol, reltol=rtol)
sol2 = solve_ivp(fun=lambda t, y: f(y, None, t),
                 y0=u0, method='Radau', t_span=tspan,
                 atol=atol, rtol=rtol, jac=lambda t, y: df(y, None, t))
plt.figure()
plt.plot(sol.t, sol.u, color='tab:red')
plt.plot(sol2.t, sol2.y.T, linestyle='--', linewidth=3, color='tab:blue')
plt.xlabel('t')
plt.ylabel('u')
plt.grid()
plt.show()
EDIT:
Passing both jac and jac_prototype as suggested works, but only if jac is in dense format. Also, how can I give only jac_prototype, without jac? The latter is not always available in practice, and I was hoping DifferentialEquations would be able to compute it via finite differences on its own, based on the provided sparsity pattern.
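For what it's worth, the raw ingredients Julia's SparseMatrixCSC constructor needs are easy to extract from the scipy object (sketch below; note the shift to 1-based indexing). I have not verified whether rebuilding the prototype on the Julia side this way makes matrix_colors happy, so treat this as an untested idea:

```python
from scipy.sparse import diags

n = 10
proto = diags([1.0] * 5, offsets=[-2, -1, 0, 1, 2],
              shape=(n, n), format='csc')

# Julia's SparseMatrixCSC(m, n, colptr, rowval, nzval) expects 1-based
# indices, while scipy stores 0-based indptr/indices.
colptr = proto.indptr + 1
rowval = proto.indices + 1
nzval = proto.data
print(int(colptr[0]), int(rowval[0]))  # 1 1
```

These arrays could then be handed to Julia, e.g. through jl.eval or PyJulia's Main namespace, to build a native SparseMatrixCSC there, which (hypothetically) would sidestep the PyObject MethodError below.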
Here are the relevant code modifications, with some comments on what works and what does not:
from scipy.sparse import identity, diags

form = 'csc'  # sparse storage format used below (any scipy sparse format string)

def df(u, p, t):
    return -2. * identity(n, format=form).todense()  # fails without todense()

df_prototype = diags([1], offsets=[0], shape=(n, n), format=form)  # .todense()
func = de.ODEFunction(f, jac=df, jac_prototype=df_prototype)  # works
# func = de.ODEFunction(f, jac_prototype=df_prototype)  # fails
prob = de.ODEProblem(func, u0, tspan)
# prob = de.ODEProblem(f, u0, tspan)  # works with a finite-difference dense Jacobian
sol = de.solve(prob, alg=de.RadauIIA5(autodiff=False), abstol=atol, reltol=rtol, verbose=False)
If I omit the jac argument in the ODEFunction line, then the following error is thrown upon calling de.solve:

JULIA: MethodError: no method matching matrix_colors(::PyObject)

if I use a sparse Jacobian prototype, or

JULIA: MethodError: no method matching findstructralnz(::Matrix{Float64})

if I use todense() to give a dense Jacobian prototype.
where sparsejac is a sparse matrix matching the sparsity of the Jacobian, and jac(u,p,t) returns such a Jacobian.