I'm trying to solve the Lotka-Volterra problem that Chris Rackauckas explained at JuliaCon 2020, and I get an error when I try to calculate the loss between the prediction and the data.
I have a separate project environment for this exercise with its own Manifest.toml and Project.toml files. I have added all the packages and modules, and I do not get any errors when I run the "using ..." lines. I change to the project folder and activate the environment every time before I run the code. I have even removed all the packages and modules and reinstalled Julia. I always get the same error.
Here is my code, which is exactly the same code Chris uses in the YouTube video:
## Environment and packages
cd(@__DIR__)
using Pkg; Pkg.activate("."); Pkg.instantiate()
using OrdinaryDiffEq
using ModelingToolkit
using DataDrivenDiffEq
using LinearAlgebra, DiffEqSensitivity, Optim
using DiffEqFlux, Flux
using Plots
gr()
## Data generation
function lotka!(du, u, p, t)
    α, β, γ, δ = p
    du[1] = α*u[1] - β*u[2]*u[1]
    du[2] = γ*u[1]*u[2] - δ*u[2]
end
# Define the experimental parameter
tspan = (0.0,3.0)
u0 = [0.44249296,4.6280594]
p_ = [1.3, 0.9, 0.8, 1.8]
prob = ODEProblem(lotka!, u0,tspan, p_)
solution = solve(prob, Vern7(), abstol=1e-12, reltol=1e-12, saveat = 0.1)
scatter(solution, alpha = 0.25)
plot!(solution, alpha = 0.5)
# Ideal data
tsdata = Array(solution)
# Add noise in terms of the mean
noisy_data = tsdata + Float32(1e-5)*randn(eltype(tsdata), size(tsdata))
plot(abs.(tsdata-noisy_data)')
# Define the neural_network
ann = FastChain(FastDense(2,32,tanh), FastDense(32,32,tanh), FastDense(32,2))
p = initial_params(ann)
# Now we define the function with the parameters that we know (p_[1] and p_[4]) and use the neural
# network for the interaction terms that we do not know (z[1] and z[2])
function dudt_(u,p,t)
    x, y = u
    z = ann(u, p)
    [p_[1]*x + z[1],
     -p_[4]*y + z[2]]
end
prob_nn = ODEProblem(dudt_, u0, tspan, p)
s = concrete_solve(prob_nn, Tsit5(), u0, p, saveat = solution.t)
plot(solution)
plot!(s)
# The predictions are not perfect, so we build a predict function and try to optimize
function predict(θ)
    Array(concrete_solve(prob_nn, Vern7(), u0, θ, saveat = solution.t,
                         abstol = 1e-6, reltol = 1e-6,
                         sensealg = InterpolatingAdjoint(autojacvec = true)))
end
# No regularization right now
function loss(θ)
    pred = predict(θ)
    sum(abs2, noisy_data .- pred), pred
end
loss(p)
The error that I get is:
WARNING: both DiffEqFlux and DiffEqSensitivity export "InterpolatingAdjoint"; uses of it in module Main must be qualified
ERROR: UndefVarError: InterpolatingAdjoint not defined
I'm using Julia v1.6.7 on Windows 10 and I code in VS Code.
While the OP should probably follow Mr. Rackauckas' advice, his answer doesn't address the error that the OP is getting. The error Julia reports comes from the keyword argument
sensealg = InterpolatingAdjoint(autojacvec = true)
and it basically tells us that two of the imported packages, DiffEqFlux and DiffEqSensitivity, both define an object/function called InterpolatingAdjoint, so any use of it in the Main module must be qualified; in other words, we must tell Julia which package's InterpolatingAdjoint we want it to use. This can be done as DiffEqFlux.InterpolatingAdjoint or DiffEqSensitivity.InterpolatingAdjoint. Now, as Mr. Rackauckas pointed out, the video is pretty old, and this qualified use of InterpolatingAdjoint may or may not fix other issues that can exist with a deprecated package.
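As a minimal sketch (assuming the DiffEqSensitivity version is the one intended, since that package is where the adjoint sensitivity algorithms live), the predict function from the question could qualify the name like this:
function predict(θ)
    Array(concrete_solve(prob_nn, Vern7(), u0, θ, saveat = solution.t,
                         abstol = 1e-6, reltol = 1e-6,
                         # fully qualified name resolves the DiffEqFlux/DiffEqSensitivity clash
                         sensealg = DiffEqSensitivity.InterpolatingAdjoint(autojacvec = true)))
end
Alternatively, a selective import such as using DiffEqSensitivity: InterpolatingAdjoint near the top of the script pins the unqualified name to a single package.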