Index
Turing.BinomialLogit
Turing.Flat
Turing.FlatPos
Turing.OrderedLogistic
Turing.Inference.Gibbs
Turing.Inference.HMC
Turing.Inference.HMCDA
Turing.Inference.IS
Turing.Inference.MH
Turing.Inference.NUTS
Turing.Inference.PG
Turing.Inference.SMC
Modelling
# DynamicPPL.@model
— Macro.
@model(expr[, warn = false])
Macro to specify a probabilistic model.
If `warn` is `true`, a warning is displayed if internal variable names are used in the model definition.
Examples
Model definition:
@model function model(x, y = 42)
...
end
To generate a `Model`, call `model(xvalue)` or `model(xvalue, yvalue)`.
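For instance, instantiating the model defined above might look like this (the argument values are illustrative):
# Construct Model objects from the definition above.
m1 = model(1.0)          # y falls back to its default value of 42
m2 = model(1.0, 2.0)     # both x and y supplied explicitly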
Samplers
# DynamicPPL.Sampler
— Type.
Sampler{T}
Generic sampler type for inference algorithms of type T
in DynamicPPL.
`Sampler` should implement the AbstractMCMC interface, and in particular `AbstractMCMC.step`. A default implementation of the initial sampling step is provided that supports resuming sampling from a previous state and setting initial parameter values. It requires overloading `loadstate` and `initialstep` for loading previous states and actually performing the initial sampling step, respectively. Additionally, one might sometimes want to implement `initialsampler`, which specifies how the initial parameter values are sampled if they are not provided. By default, values are sampled from the prior.
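As a rough, hypothetical sketch (the exact method signatures should be checked against the DynamicPPL source; the algorithm name is made up), a new algorithm could opt into the default prior-based initialisation like this:
using DynamicPPL

# Hypothetical algorithm type; purely illustrative.
struct MyAlg end

# Assumption: initialsampler receives the Sampler and returns the sampler used to draw
# initial parameter values; SampleFromPrior() reproduces the documented default behaviour.
DynamicPPL.initialsampler(::DynamicPPL.Sampler{<:MyAlg}) = DynamicPPL.SampleFromPrior()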
# Turing.Inference.Gibbs
— Type.
Gibbs(algs...)
Compositional MCMC interface. Gibbs sampling combines one or more sampling algorithms, each of which samples from a different set of variables in a model.
Example:
@model function gibbs_example(x)
v1 ~ Normal(0,1)
v2 ~ Categorical(5)
end
# Use PG for a 'v2' variable, and use HMC for the 'v1' variable.
# Note that v2 is discrete, so the PG sampler is more appropriate
# than is HMC.
alg = Gibbs(HMC(0.2, 3, :v1), PG(20, :v2))
One can also pass the number of iterations for each Gibbs component using the following syntax:
alg = Gibbs((HMC(0.2, 3, :v1), n_hmc), (PG(20, :v2), n_pg))
where `n_hmc` and `n_pg` are the number of HMC and PG iterations for each Gibbs iteration.
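Either composed algorithm is then passed to sample like any single sampler; a minimal sketch using the first alg above with placeholder data:
# Run the combined HMC-within-Gibbs and PG-within-Gibbs sampler defined above.
chain = sample(gibbs_example(randn(10)), alg, 1000)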
Tips:
`HMC` and `NUTS` are fast samplers and can throw off particle-based methods like Particle Gibbs. You can increase the effectiveness of particle sampling by including more particles in the particle sampler.
# Turing.Inference.HMC
— Type.
HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff(; chunksize=0))
Hamiltonian Monte Carlo sampler with static trajectory.
Arguments
- `ϵ`: The leapfrog step size to use.
- `n_leapfrog`: The number of leapfrog steps to use.
- `adtype`: The automatic differentiation (AD) backend. If not specified, `ForwardDiff` is used, with its `chunksize` automatically determined.
Usage
HMC(0.05, 10)
Tips
If you are receiving gradient errors when using `HMC`, try reducing the leapfrog step size `ϵ`, e.g.
# Original step size
sample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)
# Reduced step size
sample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)
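The `adtype` keyword described above can also be set explicitly. A minimal sketch, assuming the ReverseDiff package is installed so that the `AutoReverseDiff` backend (defined in ADTypes) can be used:
using ReverseDiff  # assumption: the ReverseDiff package is installed

# Switch the AD backend from the ForwardDiff default to ReverseDiff.
sample(gdemo([1.5, 2]), HMC(0.05, 10; adtype=AutoReverseDiff()), 1000)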
# Turing.Inference.HMCDA
— Type.
HMCDA(
n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0,
adtype::ADTypes.AbstractADType = AutoForwardDiff(; chunksize=0),
)
Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.
Usage
HMCDA(200, 0.65, 0.3)
Arguments
- `n_adapts`: Number of samples to use for adaptation.
- `δ`: Target acceptance rate. 65% is often recommended.
- `λ`: Target leapfrog length.
- `ϵ`: Initial step size; 0 means it is automatically searched for by Turing.
- `adtype`: The automatic differentiation (AD) backend. If not specified, `ForwardDiff` is used, with its `chunksize` automatically determined.
Reference
For more information, please see the following paper:
Hoffman, Matthew D., and Andrew Gelman. "The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.
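A full sampling call, as a sketch reusing the gdemo model from the IS example below:
# 200 adaptation steps, target acceptance rate 0.65, target leapfrog length 0.3.
sample(gdemo([1.5, 2]), HMCDA(200, 0.65, 0.3), 1000)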
# Turing.Inference.IS
— Type.
IS()
Importance sampling algorithm.
Usage:
IS()
Example:
# Define a simple Normal model with unknown mean and variance.
@model function gdemo(x)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x[1] ~ Normal(m, sqrt(s²))
x[2] ~ Normal(m, sqrt(s²))
return s², m
end
sample(gdemo([1.5, 2]), IS(), 1000)
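As a hedged follow-up, assuming the returned chain object exposes the importance-sampling estimate of the log evidence through its logevidence field:
chn = sample(gdemo([1.5, 2]), IS(), 1000)
chn.logevidence   # estimated log marginal likelihood (assumption: populated by IS)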
# Turing.Inference.MH
— Type.
MH(space...)
Construct a Metropolis-Hastings algorithm.
The arguments `space` can be:
- Blank (i.e. `MH()`), in which case `MH` defaults to using the prior for each parameter as the proposal distribution.
- A set of one or more symbols to sample with `MH` in conjunction with `Gibbs`, i.e. `Gibbs(MH(:m), PG(10, :s))`.
- An iterable of pairs or tuples mapping a `Symbol` to an `AdvancedMH.Proposal`, `Distribution`, or `Function` that returns a conditional proposal distribution.
- A covariance matrix to use for mean-zero multivariate normal proposals.
Examples
The default `MH` will propose samples from the prior distribution using `AdvancedMH.StaticProposal`.
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
y ~ Normal(m, sqrt(s²))
end
chain = sample(gdemo(1.5, 2.0), MH(), 1_000)
mean(chain)
Alternatively, you can specify particular parameters to sample if you want to combine sampling from multiple samplers:
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
y ~ Normal(m, sqrt(s²))
end
# Samples s² with MH and m with PG
chain = sample(gdemo(1.5, 2.0), Gibbs(MH(:s²), PG(10, :m)), 1_000)
mean(chain)
Using custom distributions defaults to using static MH:
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
y ~ Normal(m, sqrt(s²))
end
# Use a static proposal for s² and a static Normal(0, 1) proposal for m.
chain = sample(
gdemo(1.5, 2.0),
MH(
:s² => InverseGamma(2, 3),
:m => Normal(0, 1)
),
1_000
)
mean(chain)
Specifying explicit proposals using the `AdvancedMH` interface:
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
y ~ Normal(m, sqrt(s²))
end
# Use a static proposal for s² and a random walk with proposal
# standard deviation of 0.25 for m.
chain = sample(
gdemo(1.5, 2.0),
MH(
:s² => AdvancedMH.StaticProposal(InverseGamma(2, 3)),
:m => AdvancedMH.RandomWalkProposal(Normal(0, 0.25))
),
1_000
)
mean(chain)
Using a custom function to specify a conditional distribution:
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
y ~ Normal(m, sqrt(s²))
end
# Use a static proposal for s² and a conditional proposal for m,
# where the proposal is centered around the current sample.
chain = sample(
gdemo(1.5, 2.0),
MH(
:s² => InverseGamma(2, 3),
:m => x -> Normal(x, 1)
),
1_000
)
mean(chain)
Providing a covariance matrix will cause `MH` to perform random-walk sampling in the transformed space with proposals drawn from a multivariate normal distribution. The provided matrix must be positive semi-definite and square. Usage:
@model function gdemo(x, y)
s² ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s²))
x ~ Normal(m, sqrt(s²))
y ~ Normal(m, sqrt(s²))
end
# Providing a custom variance-covariance matrix
chain = sample(
gdemo(1.5, 2.0),
MH(
[0.25 0.05;
0.05 0.50]
),
1_000
)
mean(chain)
# Turing.Inference.NUTS
— Type.
NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff(; chunksize=0))
No-U-Turn Sampler (NUTS).
Usage:
NUTS() # Use default NUTS configuration.
NUTS(1000, 0.65) # Use 1000 adaption steps, and target accept ratio 0.65.
Arguments:
- `n_adapts::Int`: The number of samples to use with adaptation.
- `δ::Float64`: Target acceptance rate for dual averaging.
- `max_depth::Int`: Maximum doubling tree depth.
- `Δ_max::Float64`: Maximum divergence during doubling tree.
- `init_ϵ::Float64`: Initial step size; 0 means automatically searching using a heuristic procedure.
- `adtype::ADTypes.AbstractADType`: The automatic differentiation (AD) backend. If not specified, `ForwardDiff` is used, with its `chunksize` automatically determined.
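Like the other Hamiltonian samplers, the constructed algorithm is passed to sample; a minimal sketch reusing the gdemo model from the IS example above:
# 1000 adaptation steps with a target acceptance ratio of 0.65.
sample(gdemo([1.5, 2]), NUTS(1000, 0.65), 2000)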
# Turing.Inference.PG
— Type.
struct PG{space, R} <: Turing.Inference.ParticleInference
Particle Gibbs sampler.
Fields
- `nparticles::Int64`: Number of particles.
- `resampler::Any`: Resampling algorithm.
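As used in the Gibbs example above, PG is constructed from the number of particles and, when composed with Gibbs, the variables it should target; a minimal sketch:
PG(20)                                  # 20 particles
Gibbs(HMC(0.2, 3, :v1), PG(20, :v2))    # restrict PG to the discrete variable v2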
# Turing.Inference.SMC
— Type.
struct SMC{space, R} <: Turing.Inference.ParticleInference
Sequential Monte Carlo sampler.
Fields
- `resampler::Any`: Resampling algorithm.
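A minimal usage sketch, reusing the gdemo model from the IS example above:
# Sequential Monte Carlo with the default resampler.
sample(gdemo([1.5, 2]), SMC(), 1000)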
Distributions
# Turing.Flat
— Type.
Flat()
The flat distribution is the improper distribution of real numbers that has the improper probability density function
$$ f(x) = 1. $$
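Because its density is constant, Flat can serve as an improper, uninformative prior inside a model; a minimal illustrative sketch (the model and names are hypothetical):
@model function flat_demo(y)
    μ ~ Flat()           # improper flat prior over the whole real line
    y ~ Normal(μ, 1)
end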
# Turing.FlatPos
— Type.
FlatPos(l::Real)
The positive flat distribution with real-valued parameter `l` is the improper distribution of real numbers that has the improper probability density function
$$ f(x) = \begin{cases} 0 & \text{if } x \leq l, \\ 1 & \text{otherwise}. \end{cases} $$
# Turing.BinomialLogit
— Type.
BinomialLogit(n, logitp)
The Binomial distribution with logit parameterization characterizes the number of successes in a sequence of independent trials.
It has two parameters: `n`, the number of trials, and `logitp`, the logit of the probability of success in an individual trial, with the distribution
$$ P(X = k) = {n \choose k}{(\text{logistic}(logitp))}^k (1 - \text{logistic}(logitp))^{n-k}, \quad \text{ for } k = 0,1,2, \ldots, n. $$
See also: Binomial
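A minimal illustrative sketch of its typical use as an observation distribution, with an unconstrained prior placed directly on the log-odds (the model and names are hypothetical):
@model function logit_binomial_demo(k, n)
    η ~ Normal(0, 1)             # prior on the log-odds of success
    k ~ BinomialLogit(n, η)      # k successes out of n trials
end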
# Turing.OrderedLogistic
— Type.
OrderedLogistic(η, c::AbstractVector)
The ordered logistic distribution with real-valued parameter `η` and cutpoints `c` has the probability mass function
$$ P(X = k) = \begin{cases} 1 - \text{logistic}(\eta - c_1) & \text{if } k = 1, \\ \text{logistic}(\eta - c_{k-1}) - \text{logistic}(\eta - c_k) & \text{if } 1 < k < K, \\ \text{logistic}(\eta - c_{K-1}) & \text{if } k = K, \end{cases} $$
where `K = length(c) + 1`.
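A minimal illustrative sketch (the model and values are hypothetical): with cutpoints c = [-1.0, 1.0] there are K = 3 categories, and an ordinal outcome can be modelled directly:
@model function ordinal_demo(k)
    η ~ Normal(0, 1)                         # latent score
    k ~ OrderedLogistic(η, [-1.0, 1.0])      # ordinal outcome in 1:3
end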