```
using Turing
using DifferentialEquations
# Load StatsPlots for visualizations and diagnostics.
using StatsPlots
using LinearAlgebra
# Set a seed for reproducibility.
using Random
Random.seed!(14);
```

# Bayesian Estimation of Differential Equations

Much of the scientific community faces the basic problem of mathematically modeling the reality around them, and this often involves dynamical systems. These complex dynamical systems are typically modeled with differential equations. Differential equation models often have non-measurable parameters. The popular “forward problem” of simulation consists of solving the differential equations for a given set of parameters; the “inverse problem”, known as parameter estimation, is the process of utilizing data to determine these model parameters. Bayesian inference provides a robust approach to parameter estimation with quantified uncertainty.

## The Lotka-Volterra Model

The Lotka–Volterra equations, also known as the predator–prey equations, are a pair of first-order nonlinear differential equations. These differential equations are frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. The populations change through time according to the pair of equations

\[ \begin{aligned} \frac{\mathrm{d}x}{\mathrm{d}t} &= (\alpha - \beta y(t))x(t), \\ \frac{\mathrm{d}y}{\mathrm{d}t} &= (\delta x(t) - \gamma)y(t) \end{aligned} \]

where \(x(t)\) and \(y(t)\) denote the populations of prey and predator at time \(t\), respectively, and \(\alpha, \beta, \gamma, \delta\) are positive parameters.
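
Setting both derivatives to zero shows that, besides the trivial point \((0, 0)\), the system has a single non-trivial equilibrium,

\[ (x^\ast, y^\ast) = \left(\frac{\gamma}{\delta}, \frac{\alpha}{\beta}\right), \]

around which the prey and predator populations oscillate.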

We implement the Lotka-Volterra model and simulate it with parameters \(\alpha = 1.5\), \(\beta = 1\), \(\gamma = 3\), and \(\delta = 1\) and initial conditions \(x(0) = y(0) = 1\).

```
# Define Lotka-Volterra model.
function lotka_volterra(du, u, p, t)
    # Model parameters.
    α, β, γ, δ = p
    # Current state.
    x, y = u

    # Evaluate differential equations.
    du[1] = (α - β * y) * x # prey
    du[2] = (δ * x - γ) * y # predator

    return nothing
end

# Define initial-value problem.
u0 = [1.0, 1.0]
p = [1.5, 1.0, 3.0, 1.0]
tspan = (0.0, 10.0)
prob = ODEProblem(lotka_volterra, u0, tspan, p)

# Plot simulation.
plot(solve(prob, Tsit5()))
```

We generate noisy observations to use for the parameter estimation tasks in this tutorial. With the `saveat` argument we specify that the solution is stored only at intervals of `0.1` time units. To make the example more realistic we add random normally distributed noise to the simulation.

```
sol = solve(prob, Tsit5(); saveat=0.1)
odedata = Array(sol) + 0.8 * randn(size(Array(sol)))

# Plot simulation and noisy observations.
plot(sol; alpha=0.3)
scatter!(sol.t, odedata'; color=[1 2], label="")
```

Alternatively, we can use real-world data from Hudson’s Bay Company records (a Stan implementation with slightly different priors can be found here: https://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html).

## Direct Handling of Bayesian Estimation with Turing

Previously, functions in Turing and DifferentialEquations were not inter-composable, so Bayesian inference of differential equations needed to be handled by another package called DiffEqBayes.jl (note that DiffEqBayes also works with CmdStan.jl, Turing.jl, DynamicHMC.jl, and ApproxBayes.jl; see the DiffEqBayes docs for more info).

Nowadays, however, Turing and DifferentialEquations are completely composable and we can just simulate differential equations inside a Turing `@model`. Therefore, we write the Lotka-Volterra parameter estimation problem using the Turing `@model` macro as below:

```
@model function fitlv(data, prob)
    # Prior distributions.
    σ ~ InverseGamma(2, 3)
    α ~ truncated(Normal(1.5, 0.5); lower=0.5, upper=2.5)
    β ~ truncated(Normal(1.2, 0.5); lower=0, upper=2)
    γ ~ truncated(Normal(3.0, 0.5); lower=1, upper=4)
    δ ~ truncated(Normal(1.0, 0.5); lower=0, upper=2)

    # Simulate Lotka-Volterra model.
    p = [α, β, γ, δ]
    predicted = solve(prob, Tsit5(); p=p, saveat=0.1)

    # Observations.
    for i in 1:length(predicted)
        data[:, i] ~ MvNormal(predicted[i], σ^2 * I)
    end

    return nothing
end

model = fitlv(odedata, prob)

# Sample 3 independent chains with forward-mode automatic differentiation (the default).
chain = sample(model, NUTS(), MCMCSerial(), 1000, 3; progress=false)
```

```
┌ Info: Found initial step size
└ ϵ = 0.025
┌ Info: Found initial step size
└ ϵ = 0.00625
┌ Info: Found initial step size
└ ϵ = 0.05
```

```
Chains MCMC chain (1000×17×3 Array{Float64, 3}):
Iterations = 501:1:1500
Number of chains = 3
Samples per chain = 1000
Wall duration = 52.38 seconds
Compute duration = 50.0 seconds
parameters = σ, α, β, γ, δ
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
σ 0.8133 0.0407 0.0009 1931.2355 1698.3816 1.0007 ⋯
α 1.4914 0.0502 0.0018 814.9939 1168.5862 1.0027 ⋯
β 0.9617 0.0439 0.0013 1129.5565 1525.3517 1.0016 ⋯
γ 2.9748 0.1435 0.0050 829.3005 1182.3234 1.0027 ⋯
δ 1.0243 0.0544 0.0019 825.4290 1154.9963 1.0027 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
σ 0.7399 0.7849 0.8119 0.8405 0.8972
α 1.3968 1.4567 1.4901 1.5249 1.5949
β 0.8819 0.9312 0.9602 0.9909 1.0540
γ 2.7056 2.8739 2.9725 3.0719 3.2658
δ 0.9220 0.9874 1.0221 1.0597 1.1361
```

The estimated parameters are close to the parameter values the observations were generated with. We can also check visually that the chains have converged.
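As a quick numerical check, we can compare the posterior means against the values used to generate the data. This is a sketch that assumes the `chain` object from above; the tuple of true values is taken from the simulation setup:

```
# True values from the simulation setup above; compare with posterior means.
# `mean` (from Statistics) is re-exported by Turing.
for (name, truth) in pairs((σ=0.8, α=1.5, β=1.0, γ=3.0, δ=1.0))
    println(name, ": posterior mean ", round(mean(chain[name]); digits=2), ", true value ", truth)
end
```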

```
plot(chain)
```
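Beyond visual inspection, the same diagnostics can be checked numerically. The following is a sketch assuming the `chain` object from above and the `summarystats` function from MCMCChains (re-exported by Turing); the thresholds are common rules of thumb, not hard guarantees:

```
# Numerically check rhat and bulk ESS for all parameters (rule-of-thumb
# thresholds: rhat within 0.01 of 1, bulk ESS above 400).
stats = summarystats(chain)
rhat_ok = all(abs.(stats[:, :rhat] .- 1) .< 0.01)
ess_ok = all(stats[:, :ess_bulk] .> 400)
println(rhat_ok && ess_ok ? "No evidence against convergence." : "Inspect the chains further.")
```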

### Data retrodiction

In Bayesian analysis it is often useful to retrodict the data, i.e. generate simulated data using samples from the posterior distribution, and compare to the original data (see, for instance, section 3.3.2, model checking, of McElreath’s book “Statistical Rethinking”). Here, we solve the ODE for 300 randomly picked posterior samples in the `chain`. We plot the ensemble of solutions to check if the solution resembles the data. The 300 retrodicted time courses from the posterior are plotted in gray, the noisy observations are shown as blue and red dots, and the green and purple lines are the ODE solution that was used to generate the data.

```
plot(; legend=false)
posterior_samples = sample(chain[[:α, :β, :γ, :δ]], 300; replace=false)
for p in eachrow(Array(posterior_samples))
    sol_p = solve(prob, Tsit5(); p=p, saveat=0.1)
    plot!(sol_p; alpha=0.1, color="#BBBBBB")
end

# Plot simulation and noisy observations.
plot!(sol; color=[1 2], linewidth=1)
scatter!(sol.t, odedata'; color=[1 2])
```