Markov chain Monte Carlo sampling in Turing.jl is performed using the sample() function. As described on the Core Functionality page, single-chain and multiple-chain sampling can be done using, respectively, sample(model, sampler, niters) and sample(model, sampler, MCMCThreads(), niters, nchains).

On top of this, both methods also accept a number of keyword arguments that allow you to control the sampling process. This page will detail these options.

To begin, let’s create a simple model:

```julia
using Turing

@model function demo_model()
    x ~ Normal()
    y ~ Normal(x)
    4.0 ~ Normal(y)
    return nothing
end
```

demo_model (generic function with 2 methods)
Progress bars can be controlled with the progress
keyword argument. The exact values that can be used depend on whether you are using single-chain or multi-chain sampling.
For single-chain sampling, progress=true
and progress=false
enable and disable the progress bar, respectively.
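For example, the following sketch disables the progress bar for a single-chain run (MH() is used here purely as an inexpensive sampler):

```julia
chn = sample(demo_model(), MH(), 100; progress=false)
```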
For multi-chain sampling, progress
can take the following values:
- :none or false: no progress bar
- :overall or true: creates one overall progress bar for all chains
- :perchain: creates one overall progress bar, plus one extra progress bar per chain (note that this can lead to visual clutter if you have many chains)

If you want to globally enable or disable the progress bar, you can use:
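```julia
# A sketch, assuming the Turing.setprogress! utility: this globally disables
# progress bars for subsequent sample() calls.
Turing.setprogress!(false)
```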
(This handily also disables progress logging for the rest of this document.)
For NUTS in particular, you can also specify verbose=false
to disable the “Found initial step size” info message.
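For example (a sketch):

```julia
chn = sample(demo_model(), NUTS(), 100; verbose=false)
```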
Like many other Julia functions, a Random.AbstractRNG
object can be passed as the first argument to sample()
to ensure reproducibility of results.
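For example, sampling twice with the same seed should give identical draws. This is a sketch; Xoshiro is the standard-library generator and the seed 468 is arbitrary:

```julia
using Random

chn1 = sample(Xoshiro(468), demo_model(), MH(), 100)
chn2 = sample(Xoshiro(468), demo_model(), MH(), 100)
chn1[:x] == chn2[:x], chn1[:y] == chn2[:y]
```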
(true, true)
Alternatively, you can set the global RNG using Random.seed!()
, although we recommend this less as it modifies global state.
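A sketch of the equivalent using a global seed:

```julia
Random.seed!(468)
chn1 = sample(demo_model(), MH(), 100)
Random.seed!(468)
chn2 = sample(demo_model(), MH(), 100)
chn1[:x] == chn2[:x], chn1[:y] == chn2[:y]
```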
(true, true)
The outputs of pseudorandom number generators in the standard Random
library are not guaranteed to be the same across different Julia versions or platforms. If you require absolute reproducibility, you should use the StableRNGs.jl package.
By default, the results of MCMC sampling are bundled up in an MCMCChains.Chains
object.
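For example (a sketch), inspecting the type of the returned object:

```julia
chn = sample(demo_model(), MH(), 100)
typeof(chn)
```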
Chains{Float64, AxisArrays.AxisArray{Float64, 3, Array{Float64, 3}, Tuple{AxisArrays.Axis{:iter, StepRange{Int64, Int64}}, AxisArrays.Axis{:var, Vector{Symbol}}, AxisArrays.Axis{:chain, UnitRange{Int64}}}}, Missing, @NamedTuple{parameters::Vector{Symbol}, internals::Vector{Symbol}}, @NamedTuple{varname_to_symbol::OrderedDict{AbstractPPL.VarName, Symbol}, start_time::Float64, stop_time::Float64}}
If you wish to use a different chain format provided in another package, you can specify the chain_type
keyword argument. You should refer to the documentation of the respective package for exact details.
Another situation where specifying chain_type
can be useful is when you want to obtain the raw MCMC outputs as a vector of transitions. This can be used for profiling or debugging purposes (often, chain construction can take a surprising amount of time compared to sampling, especially for very simple models). To do so, you can use chain_type=Any
(i.e., do not convert the output to any specific chain format):
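```julia
# A sketch: keep the raw vector of sampler transitions instead of building an
# MCMCChains.Chains object.
transitions = sample(demo_model(), MH(), 100; chain_type=Any)
typeof(transitions)
```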
In Turing.jl, initial parameters for MCMC sampling can be specified using the initial_params
keyword argument.
For single-chain sampling, the AbstractMCMC interface generally expects that you provide a completely flattened vector of parameters.
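For example (a sketch; the flattened vector is ordered x, then y, and the values are arbitrary):

```julia
chn = sample(demo_model(), MH(), 10; initial_params=[-6.0, 2.0])
chn[:x][1], chn[:y][1]
```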
Note that a number of samplers use warm-up steps by default (see the Thinning and Warmup section below), so chn[:param][1]
may not correspond to the exact initial parameters you provided. MH()
does not do this, which is why we use it here.
Note that for Turing models, the use of Vector
can be extremely error-prone as the order of parameters in the flattened vector is not always obvious (especially if there are parameters with non-trivial types). In general, parameters should be provided in the order they are defined in the model. A relatively ‘safe’ way of obtaining parameters in the correct order is to first generate a VarInfo
, and then linearise that:
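```julia
# A sketch: construct a VarInfo for the model (this draws initial values from
# the prior) and flatten it into a vector, in the order the model defines them.
using DynamicPPL
vi = DynamicPPL.VarInfo(demo_model())
vi[:]
```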
2-element Vector{Float64}:
-0.3483855030130556
-0.5612681142085717
To avoid this situation, you can also use NamedTuple
to specify initial parameters.
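For example (a sketch; the values are arbitrary):

```julia
chn = sample(demo_model(), MH(), 10; initial_params=(x = -6.0, y = 2.0))
chn[:x][1], chn[:y][1]
```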
(-6.0, 2.0)
This works even for parameters with more complex types.
For multiple-chain sampling, the initial_params
keyword argument should be a vector with length equal to the number of chains being sampled. Each element of this vector should be the initial parameters for the corresponding chain, as described above. Thus, for example, a vector of vectors, or a vector of NamedTuple
s, can be used. If you want to use the same initial parameters for all chains, you can use fill
:
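```julia
# A sketch: three chains, all starting from the same (arbitrary) initial values.
# MCMCThreads() is one of the multi-chain ensembles; MCMCSerial() would also work.
chns = sample(demo_model(), MH(), MCMCThreads(), 10, 3;
              initial_params=fill((x = 1.0, y = -5.0), 3))
chns[:x][1, :], chns[:y][1, :]
```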
┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC ~/.julia/packages/AbstractMCMC/nQLlh/src/sample.jl:410
([1.0, 1.0, 1.0], [-5.0, -5.0, -5.0])
In Turing v0.41, instead of providing initial parameters, users will have to provide what is conceptually an initialisation strategy. The keyword argument is still initial_params
, but the permitted values (for single-chain sampling) will either be:
- InitFromPrior(): generate initial parameters by sampling from the prior
- InitFromUniform(lower, upper): generate initial parameters by sampling uniformly from the given bounds in linked space
- InitFromParams(namedtuple_or_dict): use the provided initial parameters, supplied either as a NamedTuple or a Dict{<:VarName}
Initialisation with Vector
will be fully removed due to its inherent ambiguity. Initialisation with a raw NamedTuple
will still be supported (it will simply be wrapped in InitFromParams()
); but we expect to remove this eventually, so it will be more future-proof to use InitFromParams()
directly.
For multiple chains, the same as above applies: the initial_params
keyword argument should be a vector of initialisation strategies, one per chain.
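Based on the names described above, a sketch of what this might look like in v0.41 (not applicable to earlier versions):

```julia
# Single chain, with explicit initial values:
sample(demo_model(), MH(), 10; initial_params=InitFromParams((x = -6.0, y = 2.0)))

# Multiple chains, one initialisation strategy per chain:
sample(demo_model(), MH(), MCMCThreads(), 10, 3;
       initial_params=fill(InitFromPrior(), 3))
```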
By default, MCMC sampling starts from scratch, using the initial parameters provided. You can, however, resume sampling from a previous chain. This is useful to, for example, perform sampling in batches, or to inspect intermediate results.
Firstly, the previous chain must have been run using the save_state=true
argument.
For MCMCChains.Chains
, this results in the final sampler state being stored inside the chain metadata. You can access it using DynamicPPL.loadstate
:
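```julia
# A sketch: run a chain with save_state=true, then extract the saved sampler state.
chn1 = sample(demo_model(), MH(), 100; save_state=true)
state = DynamicPPL.loadstate(chn1)
typeof(state)
```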
Turing.Inference.MHState{VarInfo{@NamedTuple{x::DynamicPPL.Metadata{Dict{VarName{:x, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{VarName{:x, typeof(identity)}}, Vector{Float64}}, y::DynamicPPL.Metadata{Dict{VarName{:y, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{VarName{:y, typeof(identity)}}, Vector{Float64}}}, DynamicPPL.AccumulatorTuple{3, @NamedTuple{LogPrior::LogPriorAccumulator{Float64}, LogJacobian::LogJacobianAccumulator{Float64}, LogLikelihood::LogLikelihoodAccumulator{Float64}}}}, Float64}
You can also directly access the saved sampler state with chn1.info.samplerstate
, but we recommend not using this as it relies on the internal structure of MCMCChains.Chains
.
Sampling can then be resumed from this state by providing it as the initial_state
keyword argument.
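For example (a sketch, reusing the chain sampled with save_state=true above):

```julia
chn2 = sample(demo_model(), MH(), 5; initial_state=DynamicPPL.loadstate(chn1))
```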
Chains MCMC chain (5×5×1 Array{Float64, 3}):
Iterations = 1:1:5
Number of chains = 1
Samples per chain = 5
Wall duration = 0.07 seconds
Compute duration = 0.07 seconds
parameters = x, y
internals = lp, logprior, loglikelihood
Use `describe(chains)` for summary statistics and quantiles.
Note that the exact format saved in chn.info.samplerstate
, and that expected by initial_state
, depends on the invocation of sample
used. For single-chain sampling, the saved state, and the required initial state, is just a single sampler state. For multiple-chain sampling, it is a vector of states, one per chain.
This means that, for example, after sampling a single chain, you could sample three chains that branch off from that final state:
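```julia
# A sketch: reuse the single saved state as the initial state for each of three chains.
initial_states = fill(DynamicPPL.loadstate(chn1), 3)
chn3 = sample(demo_model(), MH(), MCMCThreads(), 5, 3; initial_state=initial_states)
```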
┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC ~/.julia/packages/AbstractMCMC/nQLlh/src/sample.jl:410
Chains MCMC chain (5×5×3 Array{Float64, 3}):
Iterations = 1:1:5
Number of chains = 3
Samples per chain = 5
Wall duration = 0.04 seconds
Compute duration = 0.02 seconds
parameters = x, y
internals = lp, logprior, loglikelihood
Use `describe(chains)` for summary statistics and quantiles.
The initial_state
and initial_params
keyword arguments are mutually exclusive. If both are provided, initial_params
will be silently ignored.
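A sketch illustrating this: the initial_params below have no effect, because initial_state takes precedence.

```julia
chn = sample(demo_model(), MH(), 5;
             initial_state=DynamicPPL.loadstate(chn1),
             initial_params=(x = -6.0, y = 2.0))
chn[:x][1], chn[:y][1]   # not (-6.0, 2.0)
```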
(-0.29864819524237707, 2.2292859419961677)
In general, the saved state will contain a set of parameters (which will be the last parameters in the previous chain). However, the saved state not only specifies parameters but also other internal variables required by the sampler. For example, the MH state contains a cached log-density of the current parameters, which is later used for calculating the acceptance ratio.
Finally, note that the first sample in the resumed chain will not be the same as the last sample in the previous chain; it will be the sample immediately after that.
The num_warmup
and discard_initial
keyword arguments can be used to control MCMC warmup. Both of these are integers, and respectively specify the number of warmup steps to perform, and the number of iterations at the start of the chain to discard. Note that the value of discard_initial
should also include the num_warmup
steps if you want the warmup steps to be discarded.
Here are some examples of how these two keyword arguments interact:
| num_warmup= | discard_initial= | Description |
|---|---|---|
| 10 | 10 | Perform 10 warmup steps, discard them; the chain starts from the first non-warmup step |
| 10 | 15 | Perform 10 warmup steps, discard them and the next 5 steps; the chain starts from the 6th non-warmup step |
| 10 | 5 | Perform 10 warmup steps, discard the first 5; the chain will contain 5 warmup steps followed by the rest of the chain |
| 0 | 10 | No warmup steps, discard the first 10 steps; the chain starts from the 11th step |
| 0 | 0 | No warmup steps, do not discard any steps; the chain starts from the 1st step (corresponding to the initial parameters) |
Each sampler has its own default value for num_warmup
, but discard_initial
always defaults to num_warmup
.
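For instance, the following sketch performs 10 warmup steps and then discards those plus the next 5 regular draws:

```julia
chn = sample(demo_model(), MH(), 100; num_warmup=10, discard_initial=15)
```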
Warmup steps and ‘regular’ non-warmup steps differ in that warmup steps call AbstractMCMC.step_warmup
, whereas regular steps call AbstractMCMC.step
. For all the samplers defined in Turing, these two functions are identical; however, they may in general differ for other samplers. Please consult the documentation of the respective sampler for details.
A thinning factor can be specified using the thinning
keyword argument. For example, thinning=10
will keep every tenth sample, discarding the other nine.
Note that thinning is not applied to the first discard_initial
samples; it is only applied to the remaining samples. Thus, for example, if you use discard_initial=50
and thinning=10
, the chain will contain samples 51, 61, 71, and so on.
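For example (a sketch):

```julia
# Discard the first 50 draws, then keep every 10th draw after that
# (i.e. draws 51, 61, 71, ...).
chn = sample(demo_model(), MH(), 500; discard_initial=50, thinning=10)
```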
DynamicPPL by default performs a number of checks on the model before any sampling is done. This catches a number of potential errors in a model, such as having repeated variables (see the DynamicPPL documentation for details).
If you wish to disable this you can pass check_model=false
to sample()
.
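For example (a sketch):

```julia
chn = sample(demo_model(), MH(), 100; check_model=false)
```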
The callback
keyword argument can be used to specify a function that is called at the end of each sampler iteration. This function should have the signature callback(rng, model, sampler, sample, iteration::Int; kwargs...)
.
If you are performing multi-chain sampling, kwargs
will additionally contain chain_number::Int
, which ranges from 1 to the number of chains.
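A sketch of a simple callback, following the signature described above (the positional arguments are captured loosely here so that the iteration number can be read off as the last one):

```julia
function my_callback(args...; kwargs...)
    iteration = args[end]   # the iteration number is the last positional argument
    iteration % 50 == 0 && @info "Reached iteration $iteration"
    return nothing
end

chn = sample(demo_model(), MH(), 200; callback=my_callback)
```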
The TuringCallbacks.jl package contains a TensorBoardCallback
, which can be used to obtain live progress visualisations using TensorBoard.
Finally, please note that for samplers which use automatic differentiation (e.g., HMC and NUTS), the AD type should be specified in the sampler constructor itself, rather than as a keyword argument to sample()
.
In other words, this is correct:
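```julia
# A sketch, assuming a ForwardDiff backend: the AD type is passed to the
# sampler constructor.
sample(demo_model(), NUTS(; adtype=AutoForwardDiff()), 100)
```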
┌ Info: Found initial step size
└ ϵ = 1.6
and not this:
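```julia
# The same sketch with the AD type wrongly passed as a keyword argument to
# sample(); this is not how the AD type should be specified.
sample(demo_model(), NUTS(), 100; adtype=AutoForwardDiff())
```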