Guide

Basics

Introduction

A probabilistic program is Julia code wrapped in a @model macro. It can use arbitrary Julia code, but to ensure correctness of inference it should not have external effects or modify global state. Stack-allocated variables are safe, but mutable heap-allocated objects may lead to subtle bugs when using task copying. By default, Libtask deep-copies Array and Dict objects when copying a task, to avoid bugs with data stored in mutable structures in Turing models.

To specify distributions of random variables, Turing programs should use the ~ notation:

x ~ distr, where x is a symbol and distr is a distribution. If x is undefined in the model function, this introduces a random variable named x, distributed according to distr, into the current scope. distr can be a value of any type that implements rand(distr), which samples a value from the distribution distr. If x is defined, the statement is used for conditioning, in a style similar to Anglican (another PPL). In this case, x is an observed value, assumed to have been drawn from the distribution distr, and the likelihood is computed using logpdf(distr, x). Observe statements should be arranged so that every possible run traverses all of them in exactly the same order; this is equivalent to demanding that they are not placed inside stochastic control flow.
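
For example, a minimal sketch of the two cases (the coinflip model and its data are illustrative and not part of the demo that follows):

@model function coinflip(y)
    # `p` has no observed value, so this line introduces a random variable
    # with a Beta prior (an assume statement).
    p ~ Beta(1, 1)
    # `y` is passed in as data, so this line conditions on the observation
    # (an observe statement); its likelihood is logpdf(Bernoulli(p), y).
    return y ~ Bernoulli(p)
end

chn = sample(coinflip(1), SMC(), 1000)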

Available inference methods include Importance Sampling (IS), Sequential Monte Carlo (SMC), Particle Gibbs (PG), Hamiltonian Monte Carlo (HMC), Hamiltonian Monte Carlo with Dual Averaging (HMCDA) and The No-U-Turn Sampler (NUTS).

Simple Gaussian Demo

Below is a simple Gaussian demo illustrating the basic usage of Turing.jl.

# Import packages.
using Turing
using StatsPlots

# Define a simple Normal model with unknown mean and variance.
@model function gdemo(x, y)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    return y ~ Normal(m, sqrt(s²))
end
gdemo (generic function with 2 methods)

Note: As a sanity check, the prior expectation of s² is mean(InverseGamma(2, 3)) = 3 / (2 - 1) = 3 and the prior expectation of m is 0. This can be easily checked using Prior:

setprogress!(false)
p1 = sample(gdemo(missing, missing), Prior(), 100000)
Chains MCMC chain (100000×5×1 Array{Float64, 3}):

Iterations        = 1:1:100000
Number of chains  = 1
Samples per chain = 100000
Wall duration     = 1.9 seconds
Compute duration  = 1.9 seconds
parameters        = s², m, x, y
internals         = lp

Summary Statistics
  parameters      mean       std      mcse      ess_bulk      ess_tail      rh ⋯
      Symbol   Float64   Float64   Float64       Float64       Float64   Float ⋯

          s²    2.9876    6.7113    0.0213   100858.2737    98843.6463    1.00 ⋯
           m    0.0012    1.7309    0.0055    99185.9451   100007.2614    1.00 ⋯
           x    0.0006    2.4615    0.0079    97954.7139    98402.8375    1.00 ⋯
           y    0.0021    2.4322    0.0078    96984.8853    98348.0065    1.00 ⋯
                                                               2 columns omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

          s²    0.5407    1.1148    1.7905    3.1299   12.4836
           m   -3.4339   -0.9059    0.0026    0.9164    3.4232
           x   -4.8672   -1.2820    0.0020    1.2890    4.8602
           y   -4.7980   -1.2803    0.0008    1.2873    4.8048

We can perform inference by using the sample function, the first argument of which is our probabilistic program and the second of which is a sampler. More information on each sampler is located in the API.

#  Run sampler, collect results.
c1 = sample(gdemo(1.5, 2), SMC(), 1000)
c2 = sample(gdemo(1.5, 2), PG(10), 1000)
c3 = sample(gdemo(1.5, 2), HMC(0.1, 5), 1000)
c4 = sample(gdemo(1.5, 2), Gibbs(PG(10, :m), HMC(0.1, 5, :s²)), 1000)
c5 = sample(gdemo(1.5, 2), HMCDA(0.15, 0.65), 1000)
c6 = sample(gdemo(1.5, 2), NUTS(0.65), 1000)
┌ Info: Found initial step size
└   ϵ = 0.8
┌ Info: Found initial step size
└   ϵ = 0.8
Chains MCMC chain (1000×14×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 2.07 seconds
Compute duration  = 2.07 seconds
parameters        = s², m
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

          s²    2.1176    1.8027    0.1024   399.3615   442.7275    1.0096     ⋯
           m    1.1080    0.8467    0.0450   433.3786   254.6344    1.0010     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

          s²    0.5629    1.0177    1.5094    2.5184    7.0535
           m   -0.9336    0.6534    1.1363    1.6474    2.6661

The MCMCChains module (which is re-exported by Turing) provides plotting tools for the Chains objects returned by the sample function. See the MCMCChains repository for more information on the suite of tools available for diagnosing MCMC chains.

# Summarise results
describe(c3)

# Plot results
plot(c3)
savefig("gdemo-plot.png")

The arguments for each sampler are:

  • SMC: number of particles.
  • PG: number of particles, number of iterations.
  • HMC: leapfrog step size, leapfrog step numbers.
  • Gibbs: component sampler 1, component sampler 2, …
  • HMCDA: total leapfrog length, target accept ratio.
  • NUTS: number of adaptation steps (optional), target accept ratio.

For detailed information on the samplers, please review Turing.jl’s API documentation.

Modelling Syntax Explained

Using this syntax, a probabilistic model is defined in Turing. The model function generated by Turing can then be used to condition the model on data. Subsequently, the sample function can be used to generate samples from the posterior distribution.

In the following example, the defined model is conditioned on the data (arg_1 = 1, arg_2 = 2) by passing (1, 2) to the model function.

@model function model_name(arg_1, arg_2)
    return ...
end

The conditioned model can then be passed onto the sample function to run posterior inference.

model_func = model_name(1, 2)
chn = sample(model_func, HMC(..)) # Perform inference by sampling using HMC.

The returned chain contains samples of the variables in the model.

var_1 = mean(chn[:var_1]) # Taking the mean of a variable named var_1.

The key (:var_1) can be a Symbol or a String. For example, to fetch x[1], one can use chn[Symbol("x[1]")] or chn["x[1]"]. If you want to retrieve all parameters associated with a specific symbol, you can use group. As an example, if you have the parameters "x[1]", "x[2]", and "x[3]", calling group(chn, :x) or group(chn, "x") will return a new chain with only "x[1]", "x[2]", and "x[3]".
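
For instance, a short sketch (assuming a chain chn that contains parameters m, "x[1]", "x[2]", and "x[3]"):

m_mean = mean(chn[:m])        # index by Symbol
x1 = chn[Symbol("x[1]")]      # equivalent to chn["x[1]"]
x_chain = group(chn, :x)      # new chain containing only x[1], x[2], and x[3]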

Turing does not have a declarative form. More generally, the order in which you place the lines of a @model macro matters. For example, the following example works:

# Define a simple Normal model with unknown mean and variance.
@model function model_function(y)
    s ~ Poisson(1)
    y ~ Normal(s, 1)
    return y
end

sample(model_function(10), SMC(), 100)
Chains MCMC chain (100×3×1 Array{Float64, 3}):

Log evidence      = -23.51361988290209
Iterations        = 1:1:100
Number of chains  = 1
Samples per chain = 100
Wall duration     = 1.96 seconds
Compute duration  = 1.96 seconds
parameters        = s
internals         = lp, weight

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

           s    3.9900    0.1000    0.0100   100.0801        NaN    1.0000     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           s    4.0000    4.0000    4.0000    4.0000    4.0000

But if we switch the s ~ Poisson(1) and y ~ Normal(s, 1) lines, the model will no longer sample correctly:

# Define a simple Normal model with unknown mean and variance.
@model function model_function(y)
    y ~ Normal(s, 1)
    s ~ Poisson(1)
    return y
end

sample(model_function(10), SMC(), 100)

Sampling Multiple Chains

Turing supports distributed and threaded parallel sampling. To do so, call sample(model, sampler, parallel_type, n, n_chains), where parallel_type can be either MCMCThreads() or MCMCDistributed() for threaded and distributed (multi-process) sampling, respectively.

Having multiple chains in the same object is valuable for evaluating convergence. Some diagnostic functions like gelmandiag require multiple chains.

If you do not want parallelism or are on an older version of Julia, you can sample multiple chains with the mapreduce function:

# Replace num_chains below with however many chains you wish to sample.
chains = mapreduce(c -> sample(model_fun, sampler, 1000), chainscat, 1:num_chains)

The chains variable now contains a Chains object which can be indexed by chain. To pull out the first chain from the chains object, use chains[:,:,1]. The method is the same if you use either of the below parallel sampling methods.
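
For example, a brief sketch (assuming chains holds more than one chain):

# Pull out the first chain from the multi-chain Chains object.
first_chain = chains[:, :, 1]

# Diagnostics such as gelmandiag require at least two chains.
gelmandiag(chains)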

Multithreaded sampling

If you wish to perform multithreaded sampling and are running Julia 1.3 or greater, you can call sample with the following signature:

using Turing

@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))

    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s²))
    end
end

model = gdemo([1.5, 2.0])

# Sample four chains using multiple threads, each with 1000 samples.
sample(model, NUTS(), MCMCThreads(), 1000, 4)

Be aware that Turing cannot add threads for you – you must have started your Julia instance with multiple threads to experience any kind of parallelism. See the Julia documentation for details on how to achieve this.

Distributed sampling

To perform distributed sampling (using multiple processes), you must first import Distributed.

Process parallel sampling can be done like so:

# Load Distributed to add processes and the @everywhere macro.
using Distributed

# Load Turing.
using Turing

# Add four processes to use for sampling.
addprocs(4; exeflags="--project=$(Base.active_project())")

# Initialize everything on all the processes.
# Note: Make sure to do this after you've already loaded Turing,
#       so each process does not have to precompile.
#       Parallel sampling may fail silently if you do not do this.
@everywhere using Turing

# Define a model on all processes.
@everywhere @model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))

    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s²))
    end
end

# Declare the model instance everywhere.
@everywhere model = gdemo([1.5, 2.0])

# Sample four chains using multiple processes, each with 1000 samples.
sample(model, NUTS(), MCMCDistributed(), 1000, 4)

Sampling from an Unconditional Distribution (The Prior)

Turing allows you to sample from a declared model’s prior. If you wish to draw a chain from the prior to inspect your prior distributions, you can simply run

chain = sample(model, Prior(), n_samples)

You can also run your model (as if it were a function) from the prior distribution by calling the model without specifying inputs or a sampler. In the example below, we specify a gdemo model which returns two variables, x and y. The model includes x and y as arguments, but calling the function without passing in x or y means that Turing’s compiler will assume they are missing values to draw from the relevant distribution. The return statement is necessary to retrieve the sampled x and y values.

@model function gdemo(x, y)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    y ~ Normal(m, sqrt(s²))
    return x, y
end
gdemo (generic function with 2 methods)

Assign the function with missing inputs to a variable, and Turing will produce a sample from the prior distribution.

# Samples from p(x,y)
g_prior_sample = gdemo(missing, missing)
g_prior_sample()
(0.21689032393768354, 0.9106552490733402)

Sampling from a Conditional Distribution (The Posterior)

Treating observations as random variables

Inputs to the model that have a value of missing are treated as parameters (random variables) to be estimated or sampled. This can be useful if you want to simulate draws for that parameter, or if you are sampling from a conditional distribution. Turing supports the following syntax:

@model function gdemo(x, ::Type{T}=Float64) where {T}
    if x === missing
        # Initialize `x` if missing
        x = Vector{T}(undef, 2)
    end
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s²))
    end
end

# Construct a model with x = missing
model = gdemo(missing)
c = sample(model, HMC(0.01, 5), 500)
Chains MCMC chain (500×14×1 Array{Float64, 3}):

Iterations        = 1:1:500
Number of chains  = 1
Samples per chain = 500
Wall duration     = 3.59 seconds
Compute duration  = 3.59 seconds
parameters        = s², m, x[1], x[2]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

          s²    1.7646    0.5202    0.3034     3.0832    34.7560    1.2417     ⋯
           m   -2.4500    0.4293    0.3606     1.4941    12.9960    1.8168     ⋯
        x[1]   -3.1079    1.1218    1.0478     1.3386    20.6245    2.1019     ⋯
        x[2]   -3.3455    0.2068    0.0635    11.1403    33.2579    1.1103     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

          s²    0.9549    1.3504    1.7067    2.1618    2.7703
           m   -3.0589   -2.7882   -2.5014   -2.1619   -1.5329
        x[1]   -4.6647   -4.3937   -3.1063   -1.9567   -1.6020
        x[2]   -3.7159   -3.5014   -3.3424   -3.1970   -2.9207

Note the need to initialize x when missing since we are iterating over its elements later in the model. The generated values for x can be extracted from the Chains object using c[:x].

Turing also supports mixed missing and non-missing values in x, where the missing ones will be treated as random variables to be sampled while the others get treated as observations. For example:

@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s²))
    end
end

# x[1] is a parameter, but x[2] is an observation
model = gdemo([missing, 2.4])
c = sample(model, HMC(0.01, 5), 500)
Chains MCMC chain (500×13×1 Array{Float64, 3}):

Iterations        = 1:1:500
Number of chains  = 1
Samples per chain = 500
Wall duration     = 2.16 seconds
Compute duration  = 2.16 seconds
parameters        = s², m, x[1]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

          s²    1.2083    0.3091    0.1111     8.4989    10.8746    1.1981     ⋯
           m    0.0466    0.3026    0.2095     2.0851    19.5833    1.4602     ⋯
        x[1]   -0.2732    0.2358    0.1172     3.8890    24.3829    1.2772     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

          s²    0.4840    1.0602    1.2192    1.4398    1.7315
           m   -0.4513   -0.1515   -0.0349    0.2175    0.7494
        x[1]   -0.8260   -0.3963   -0.2528   -0.1141    0.2008

Default Values

Arguments to Turing models can have default values much like how default values work in normal Julia functions. For instance, the following will assign missing to x and treat it as a random variable. If the default value is not missing, x will be assigned that value and will be treated as an observation instead.

using Turing

@model function generative(x=missing, ::Type{T}=Float64) where {T<:Real}
    if x === missing
        # Initialize x when missing
        x = Vector{T}(undef, 10)
    end
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    for i in 1:length(x)
        x[i] ~ Normal(m, sqrt(s²))
    end
    return s², m
end

m = generative()
chain = sample(m, HMC(0.01, 5), 1000)
Chains MCMC chain (1000×22×1 Array{Float64, 3}):

Iterations        = 1:1:1000
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 2.94 seconds
Compute duration  = 2.94 seconds
parameters        = s², m, x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9], x[10]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

          s²    1.2907    0.3501    0.1574     4.7562    20.3852    1.3325     ⋯
           m   -0.4801    0.5103    0.3296     2.6225    12.3885    1.9807     ⋯
        x[1]   -1.9923    0.4293    0.1629     7.4875    20.4989    1.0935     ⋯
        x[2]    0.0744    0.6247    0.3122     4.1295    12.3680    1.1989     ⋯
        x[3]    1.0869    0.3417    0.1564     4.5587    19.5343    1.3805     ⋯
        x[4]   -0.1130    0.8696    0.5535     2.6546    18.7190    1.8228     ⋯
        x[5]   -0.6417    0.6785    0.4081     2.6274    23.6014    1.9741     ⋯
        x[6]   -1.3194    0.3583    0.1331     7.5210    40.2108    1.0804     ⋯
        x[7]   -1.6365    0.8520    0.5783     2.5372    19.0320    2.1070     ⋯
        x[8]   -0.4192    0.7423    0.4173     3.5757    19.7061    1.4328     ⋯
        x[9]   -0.2921    1.1718    0.7777     2.4237    12.3402    2.0959     ⋯
       x[10]   -0.6824    0.4871    0.2616     4.7560    14.9318    1.3413     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

          s²    0.7811    1.0394    1.2259    1.4739    2.1606
           m   -1.4594   -0.8881   -0.3716   -0.0798    0.3999
        x[1]   -2.8478   -2.2833   -1.8753   -1.6629   -1.3168
        x[2]   -0.8857   -0.4591    0.0732    0.6089    1.3058
        x[3]    0.4892    0.8533    1.0687    1.2654    1.7603
        x[4]   -1.5938   -1.0537    0.1362    0.5889    1.2048
        x[5]   -1.5492   -1.0815   -0.8980   -0.5376    0.7234
        x[6]   -1.9322   -1.5897   -1.3589   -0.9923   -0.6584
        x[7]   -2.7654   -2.4656   -1.7761   -0.8106   -0.3208
        x[8]   -2.0858   -0.5765   -0.1491    0.0755    0.4498
        x[9]   -1.9952   -1.5061   -0.1701    0.5789    1.7933
       x[10]   -1.3226   -1.0339   -0.8514   -0.2214    0.3030

Access Values inside Chain

You can access the values inside a chain several ways:

  1. Turn them into a DataFrame object
  2. Use their raw AxisArray form
  3. Create a three-dimensional Array object

For example, let c be a Chain:

  1. DataFrame(c) converts c to a DataFrame,
  2. c.value retrieves the values inside c as an AxisArray, and
  3. c.value.data retrieves the values inside c as a 3D Array.
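
Putting these together, a short sketch (assuming DataFrames is installed and c is a Chains object):

using DataFrames

df = DataFrame(c)       # 1. one column per parameter and internal variable
ax = c.value            # 2. AxisArray indexed by (iteration, variable, chain)
arr = c.value.data      # 3. plain three-dimensional Array of the same values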

Variable Types and Type Parameters

The element type of a vector (or matrix) of random variables should match the eltype of its prior distribution, <: Integer for discrete distributions and <: AbstractFloat for continuous distributions. Moreover, if the continuous random variable is to be sampled using a Hamiltonian sampler, the vector’s element type needs to either be:

  1. Real to enable auto-differentiation through the model which uses special number types that are sub-types of Real, or

  2. Some type parameter T defined in the model header using the type parameter syntax, e.g. function gdemo(x, ::Type{T} = Float64) where {T}.

Similarly, when using a particle sampler, the Julia variable used should either be:

  1. An Array, or

  2. An instance of some type parameter T defined in the model header using the type parameter syntax, e.g. function gdemo(x, ::Type{T} = Vector{Float64}) where {T}.
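
As an illustrative sketch of both recommendations (the model names here are made up for this example):

# Continuous variables stored in a vector and sampled with HMC/NUTS:
# use a type parameter so autodiff number types can flow through the container.
@model function cont_demo(::Type{T}=Float64) where {T}
    m = Vector{T}(undef, 3)
    for i in eachindex(m)
        m[i] ~ Normal(0, 1)
    end
end

# Discrete variables sampled with a particle sampler (e.g. PG):
# a plain Array with an Integer element type is sufficient.
@model function disc_demo()
    z = Vector{Int}(undef, 3)
    for i in eachindex(z)
        z[i] ~ Poisson(1)
    end
end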

Querying Probabilities from Model or Chain

Turing offers three functions: loglikelihood, logprior, and logjoint to query the log-likelihood, log-prior, and log-joint probabilities of a model, respectively.

Let’s look at a simple model called gdemo0:

@model function gdemo0()
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
    return x ~ Normal(m, sqrt(s))
end
gdemo0 (generic function with 2 methods)

If we observe x to be 1.0, we can condition the model on this datum using the condition syntax:

model = gdemo0() | (x=1.0,)
DynamicPPL.Model{typeof(gdemo0), (), (), (), Tuple{}, Tuple{}, DynamicPPL.ConditionContext{@NamedTuple{x::Float64}, DynamicPPL.DefaultContext}}(Main.Notebook.gdemo0, NamedTuple(), NamedTuple(), ConditionContext((x = 1.0,), DynamicPPL.DefaultContext()))

Now, let’s compute the log-likelihood of the observation given specific values of the model parameters, s and m:

loglikelihood(model, (s=1.0, m=1.0))
-0.9189385332046728

We can easily verify that value in this case:

logpdf(Normal(1.0, 1.0), 1.0)
-0.9189385332046728

We can also compute the log-prior probability of the model for the same values of s and m:

logprior(model, (s=1.0, m=1.0))
-2.221713955868453
logpdf(InverseGamma(2, 3), 1.0) + logpdf(Normal(0, sqrt(1.0)), 1.0)
-2.221713955868453

Finally, we can compute the log-joint probability of the model parameters and data:

logjoint(model, (s=1.0, m=1.0))
-3.1406524890731258
logpdf(Normal(1.0, 1.0), 1.0) +
logpdf(InverseGamma(2, 3), 1.0) +
logpdf(Normal(0, sqrt(1.0)), 1.0)
-3.1406524890731258

Querying with a Chains object is easy as well:

chn = sample(model, Prior(), 10)
Chains MCMC chain (10×3×1 Array{Float64, 3}):

Iterations        = 1:1:10
Number of chains  = 1
Samples per chain = 10
Wall duration     = 0.02 seconds
Compute duration  = 0.02 seconds
parameters        = s, m
internals         = lp

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

           s    3.2231    3.2510    1.0281    10.0000    10.0000    1.1211     ⋯
           m    0.1841    1.2450    0.4010    10.0000    10.0000    0.9450     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           s    0.4803    0.8402    1.6592    4.9897    9.0019
           m   -1.9520   -0.5182    0.6912    1.1088    1.3676
loglikelihood(model, chn)
10×1 Matrix{Float64}:
 -2.3983210001692368
 -1.3283502999348715
 -0.7436599535487899
 -1.6762479677814528
 -0.9018564706885747
 -2.397120912293726
 -2.6077059338004043
 -0.987345738446884
 -2.038569785746701
 -0.6660040185429477

Maximum likelihood and maximum a posteriori estimates

Turing also has functions for estimating the maximum a posteriori and maximum likelihood parameters of a model. This can be done with

mle_estimate = maximum_likelihood(model)
map_estimate = maximum_a_posteriori(model)

For more details see the mode estimation page.

Beyond the Basics

Compositional Sampling Using Gibbs

Turing.jl provides a Gibbs interface to combine different samplers. For example, one can combine an HMC sampler with a PG sampler to run inference for different parameters in a single model as below.

@model function simple_choice(xs)
    p ~ Beta(2, 2)
    z ~ Bernoulli(p)
    for i in 1:length(xs)
        if z == 1
            xs[i] ~ Normal(0, 1)
        else
            xs[i] ~ Normal(2, 1)
        end
    end
end

simple_choice_f = simple_choice([1.5, 2.0, 0.3])

chn = sample(simple_choice_f, Gibbs(HMC(0.2, 3, :p), PG(20, :z)), 1000)
Chains MCMC chain (1000×3×1 Array{Float64, 3}):

Iterations        = 1:1:1000
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 13.76 seconds
Compute duration  = 13.76 seconds
parameters        = p, z
internals         = lp

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

           p    0.4631    0.2064    0.0192   112.9486   181.5519    1.0065     ⋯
           z    0.1790    0.3835    0.0171   504.8452        NaN    0.9992     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           p    0.0917    0.3092    0.4577    0.6109    0.8565
           z    0.0000    0.0000    0.0000    0.0000    1.0000

The Gibbs sampler can be used to specify unique automatic differentiation backends for different variable spaces. Please see the Automatic Differentiation article for more.
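
For instance, a sketch of what this can look like, using the gdemo model from the Simple Gaussian Demo above and assuming a recent Turing version in which sampler constructors accept an adtype keyword (see the Automatic Differentiation article for the exact interface):

using ReverseDiff  # optional backends must be loaded before use

chn = sample(
    gdemo(1.5, 2),
    Gibbs(
        HMC(0.1, 5, :m; adtype=AutoForwardDiff()),
        HMC(0.1, 5, :s²; adtype=AutoReverseDiff()),
    ),
    1000,
)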

For more details of compositional sampling in Turing.jl, please check the corresponding paper.

Working with filldist and arraydist

Turing provides filldist(dist::Distribution, n::Int) and arraydist(dists::AbstractVector{<:Distribution}) as a simplified interface to construct product distributions, e.g., to model a set of variables that share the same structure but vary by group.

Constructing product distributions with filldist

The function filldist provides a general interface to construct product distributions over distributions of the same type and parameterisation. Note that, in contrast to the product distribution interface provided by Distributions.jl (Product), filldist supports product distributions over univariate or multivariate distributions.

Example usage:

@model function demo(x, g)
    k = length(unique(g))
    a ~ filldist(Exponential(), k) # = Product(fill(Exponential(), k))
    mu = a[g]
    return x .~ Normal.(mu)
end
demo (generic function with 2 methods)

Constructing product distributions with arraydist

The function arraydist provides a general interface to construct product distributions over distributions of varying type and parameterisation. Note that in contrast to the product distribution interface provided by Distributions.jl (Product), arraydist supports product distributions over univariate or multivariate distributions.

Example usage:

@model function demo(x, g)
    k = length(unique(g))
    a ~ arraydist([Exponential(i) for i in 1:k])
    mu = a[g]
    return x .~ Normal.(mu)
end
demo (generic function with 2 methods)

Working with MCMCChains.jl

Turing.jl wraps its samples using MCMCChains.Chain so that all the functions working for MCMCChains.Chain can be re-used in Turing.jl. Two typical functions are MCMCChains.describe and MCMCChains.plot, which can be used as follows for an obtained chain chn. For more information on MCMCChains, please see the GitHub repository.

describe(chn) # Lists statistics of the samples.
plot(chn) # Plots statistics of the samples.

There are numerous functions in addition to describe and plot in the MCMCChains package, such as those used in convergence diagnostics. For more information on the package, please see the GitHub repository.

Changing Default Settings

Some of Turing.jl’s default settings can be changed for better usage.

AD Chunk Size

ForwardDiff (Turing’s default AD backend) uses forward-mode chunk-wise AD. The chunk size can be set manually by AutoForwardDiff(;chunksize=new_chunk_size).
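
For example, a sketch assuming a recent Turing version where the AD type is passed to the sampler via its adtype keyword (model is any conditioned model, such as those defined above):

# Use forward-mode AD with a chunk size of 8 for this sampler.
chn = sample(model, HMC(0.1, 5; adtype=AutoForwardDiff(; chunksize=8)), 1000)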

AD Backend

Turing supports four automatic differentiation (AD) packages in the back end during sampling. The default AD backend is ForwardDiff for forward-mode AD. Three reverse-mode AD backends are also supported, namely Tracker, Zygote and ReverseDiff. Zygote and ReverseDiff are supported optionally if explicitly loaded by the user with using Zygote or using ReverseDiff next to using Turing.

For more information on Turing’s automatic differentiation backend, please see the Automatic Differentiation article.

Progress Logging

Turing.jl uses ProgressLogging.jl to log the sampling progress. Progress logging is enabled by default but might slow down inference. It can be turned on or off by setting the keyword argument progress of sample to true or false. Moreover, you can enable or disable progress logging globally by calling setprogress!(true) or setprogress!(false), respectively.
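
For example (model is any conditioned model, such as those defined above):

# Disable progress logging globally.
setprogress!(false)

# Or control it for a single call via the `progress` keyword argument.
chn = sample(model, NUTS(0.65), 1000; progress=false)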

Turing uses heuristics to select an appropriate visualization backend. If you use Jupyter notebooks, the default backend is ConsoleProgressMonitor.jl. In all other cases, progress logs are displayed with TerminalLoggers.jl. Alternatively, if you provide a custom visualization backend, Turing uses it instead of the default backend.
