API

Module-wide re-exports

Turing.jl directly re-exports the entire public API of the following packages:

Please see the individual packages for their documentation.

Individual exports and re-exports

In this API documentation, for the sake of clarity, we have listed the module that actually defines each of the exported symbols. Note, however, that all of the following symbols are exported unqualified by Turing. That means, for example, you can just write

```julia
using Turing

@model function my_model() end

sample(my_model(), Prior(), 100)
```

instead of

```julia
DynamicPPL.@model function my_model() end

sample(my_model(), Turing.Inference.Prior(), 100)
```

even though `Prior()` is actually defined in the `Turing.Inference` module and `@model` in the DynamicPPL package.

Modelling

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `@model` | `DynamicPPL.@model` | Define a probabilistic model |
| `@varname` | `AbstractPPL.@varname` | Generate a `VarName` from a Julia expression |
| `to_submodel` | `DynamicPPL.to_submodel` | Define a submodel |
| `prefix` | `DynamicPPL.prefix` | Prefix all variable names in a model with a given symbol |
| `LogDensityFunction` | `DynamicPPL.LogDensityFunction` | A struct containing all information about how to evaluate a model. Mostly for advanced users |
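A minimal sketch of how these fit together; the model bodies and variable names here are illustrative, not taken from the Turing documentation:

```julia
using Turing

# A simple inner model that can be reused as a component.
@model function inner()
    z ~ Normal(0, 1)
end

# `to_submodel` splices one model into another; the submodel's
# variables are prefixed to avoid name clashes.
@model function outer()
    a ~ to_submodel(inner())
    x ~ Normal(a, 1)
end

# `@varname` builds a `VarName` from a Julia expression, e.g. for
# indexing into chains or referring to specific variables.
vn = @varname(x)
```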

Inference

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `sample` | `StatsBase.sample` | Sample from a model |
| `MCMCThreads` | `AbstractMCMC.MCMCThreads` | Run MCMC using multiple threads |
| `MCMCDistributed` | `AbstractMCMC.MCMCDistributed` | Run MCMC using multiple processes |
| `MCMCSerial` | `AbstractMCMC.MCMCSerial` | Run MCMC without parallelism |

Samplers

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `Prior` | `Turing.Inference.Prior` | Sample from the prior distribution |
| `MH` | `Turing.Inference.MH` | Metropolis–Hastings |
| `Emcee` | `Turing.Inference.Emcee` | Affine-invariant ensemble sampler |
| `ESS` | `Turing.Inference.ESS` | Elliptical slice sampling |
| `Gibbs` | `Turing.Inference.Gibbs` | Gibbs sampling |
| `HMC` | `Turing.Inference.HMC` | Hamiltonian Monte Carlo |
| `SGLD` | `Turing.Inference.SGLD` | Stochastic gradient Langevin dynamics |
| `SGHMC` | `Turing.Inference.SGHMC` | Stochastic gradient Hamiltonian Monte Carlo |
| `PolynomialStepsize` | `Turing.Inference.PolynomialStepsize` | Returns a function that generates polynomially decaying step sizes |
| `HMCDA` | `Turing.Inference.HMCDA` | Hamiltonian Monte Carlo with dual averaging |
| `NUTS` | `Turing.Inference.NUTS` | No-U-Turn Sampler |
| `IS` | `Turing.Inference.IS` | Importance sampling |
| `SMC` | `Turing.Inference.SMC` | Sequential Monte Carlo |
| `PG` | `Turing.Inference.PG` | Particle Gibbs |
| `CSMC` | `Turing.Inference.CSMC` | The same as `PG` |
| `RepeatSampler` | `Turing.Inference.RepeatSampler` | A sampler that runs multiple times on the same variable |
| `externalsampler` | `Turing.Inference.externalsampler` | Wrap an external sampler for use in Turing |
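Each of these sampler types is passed as the second argument to `sample`. A short sketch (the model and sampling settings are illustrative):

```julia
using Turing

@model function demo(y)
    m ~ Normal(0, 1)
    y ~ Normal(m, 1)
end

model = demo(0.5)

# Draw 1000 samples with NUTS on a single chain.
chain = sample(model, NUTS(), 1000)

# Run 4 chains of 1000 samples each across threads.
chains = sample(model, NUTS(), MCMCThreads(), 1000, 4)
```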

Variational inference

See the variational inference tutorial for a walkthrough on how to use these.

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `vi` | `AdvancedVI.vi` | Perform variational inference |
| `ADVI` | `AdvancedVI.ADVI` | Construct an instance of a VI algorithm |
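A sketch of the classic interface, where `ADVI` takes the number of Monte Carlo samples per gradient step and the maximum number of iterations; the exact signature may differ across AdvancedVI versions, so check the linked documentation:

```julia
using Turing

@model function demo(y)
    m ~ Normal(0, 1)
    y ~ Normal(m, 1)
end

# ADVI(samples_per_step, max_iters); returns a fitted variational
# approximation to the posterior.
q = vi(demo(0.5), ADVI(10, 1000))
```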

Automatic differentiation types

These are used to specify the automatic differentiation backend to use. See the AD guide for more information.

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `AutoForwardDiff` | `ADTypes.AutoForwardDiff` | ForwardDiff.jl backend |
| `AutoReverseDiff` | `ADTypes.AutoReverseDiff` | ReverseDiff.jl backend |
| `AutoMooncake` | `ADTypes.AutoMooncake` | Mooncake.jl backend |
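Gradient-based samplers accept one of these types via their `adtype` keyword argument; a brief sketch:

```julia
using Turing

# Select ReverseDiff as the AD backend for NUTS
# (ForwardDiff is the default).
sampler = NUTS(; adtype = AutoReverseDiff())
```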

Debugging

`Turing.setprogress!` (Function)

```julia
setprogress!(progress::Bool)
```

Enable progress logging in Turing if `progress` is `true`, and disable it otherwise.

Distributions

These distributions are defined in Turing.jl, but not in Distributions.jl.

`Turing.Flat` (Type)

```julia
Flat()
```

The flat distribution is the improper distribution of real numbers that has the improper probability density function

\[f(x) = 1.\]
`Turing.FlatPos` (Type)

```julia
FlatPos(l::Real)
```

The positive flat distribution with real-valued parameter `l` is the improper distribution of real numbers that has the improper probability density function

\[f(x) = \begin{cases} 0 & \text{if } x \leq l, \\ 1 & \text{otherwise}. \end{cases}\]
`Turing.BinomialLogit` (Type)

```julia
BinomialLogit(n, logitp)
```

The Binomial distribution with logit parameterization characterizes the number of successes in a sequence of independent trials.

It has two parameters: `n`, the number of trials, and `logitp`, the logit of the probability of success in an individual trial, with the distribution

\[P(X = k) = {n \choose k}{(\text{logistic}(\text{logitp}))}^k (1 - \text{logistic}(\text{logitp}))^{n-k}, \quad \text{ for } k = 0,1,2, \ldots, n.\]

See also: `Binomial`
`Turing.OrderedLogistic` (Type)

```julia
OrderedLogistic(η, c::AbstractVector)
```

The ordered logistic distribution with real-valued parameter `η` and cutpoints `c` has the probability mass function

\[P(X = k) = \begin{cases} 1 - \text{logistic}(\eta - c_1) & \text{if } k = 1, \\ \text{logistic}(\eta - c_{k-1}) - \text{logistic}(\eta - c_k) & \text{if } 1 < k < K, \\ \text{logistic}(\eta - c_{K-1}) & \text{if } k = K, \end{cases}\]

where `K = length(c) + 1`.
`Turing.LogPoisson` (Type)

```julia
LogPoisson(logλ)
```

The Poisson distribution with logarithmic parameterization of the rate parameter describes the number of independent events occurring within a unit time interval, given the average rate of occurrence $\exp(\log\lambda)$.

The distribution has the probability mass function

\[P(X = k) = \frac{e^{k \cdot \log\lambda}}{k!} e^{-e^{\log\lambda}}, \quad \text{ for } k = 0,1,2,\ldots.\]

See also: `Poisson`
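A sketch of how these distributions are used inside a model; the model, priors, and variable names are made up for illustration:

```julia
using Turing

# `BinomialLogit` and `LogPoisson` let you place priors directly on
# the logit / log scale, avoiding a manual inverse-link transform.
@model function counts(k, y, n)
    logitp ~ Normal(0, 1)      # prior on the logit of success probability
    logλ   ~ Normal(0, 1)      # prior on the log rate
    k ~ BinomialLogit(n, logitp)
    y ~ LogPoisson(logλ)
end
```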

Tools to work with distributions

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `I` | `LinearAlgebra.I` | Identity matrix |
| `filldist` | `DistributionsAD.filldist` | Create a product distribution from a distribution and integers |
| `arraydist` | `DistributionsAD.arraydist` | Create a product distribution from an array of distributions |
| `NamedDist` | `DynamicPPL.NamedDist` | A distribution that carries the name of the variable |
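A brief sketch of the two product-distribution helpers:

```julia
using Turing

# filldist: a product of iid copies of one distribution,
# typically cheaper than broadcasting `~` over an array.
d1 = filldist(Normal(0, 1), 10)

# arraydist: a product of possibly different distributions.
d2 = arraydist([Normal(0, 1), Exponential(1)])
```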

Predictions

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `predict` | `StatsAPI.predict` | Generate samples from the posterior predictive distribution |
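A common pattern, sketched here with an illustrative model, is to re-instantiate the model with `missing` data so the observed variable is treated as unknown, then call `predict` with a fitted chain:

```julia
using Turing

@model function demo(y)
    m ~ Normal(0, 1)
    y ~ Normal(m, 1)
end

chain = sample(demo(0.5), NUTS(), 1000)

# With `y = missing`, `predict` draws `y` from the posterior
# predictive distribution implied by `chain`.
predictions = predict(demo(missing), chain)
```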

Querying model probabilities and quantities

Please see the generated quantities and probability interface guides for more information.

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `returned` | `DynamicPPL.returned` | Calculate additional quantities defined in a model |
| `pointwise_loglikelihoods` | `DynamicPPL.pointwise_loglikelihoods` | Compute log likelihoods for each sample in a chain |
| `logprior` | `DynamicPPL.logprior` | Compute log prior probability |
| `logjoint` | `DynamicPPL.logjoint` | Compute log joint probability |
| `condition` | `AbstractPPL.condition` | Condition a model on data |
| `decondition` | `AbstractPPL.decondition` | Remove conditioning on data |
| `conditioned` | `DynamicPPL.conditioned` | Return the conditioned values of a model |
| `fix` | `DynamicPPL.fix` | Fix the value of a variable |
| `unfix` | `DynamicPPL.unfix` | Unfix the value of a variable |
| `OrderedDict` | `OrderedCollections.OrderedDict` | An ordered dictionary |
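A sketch of conditioning and fixing, using an illustrative two-variable model (see the probability interface guide for the full semantics):

```julia
using Turing

@model function demo()
    m ~ Normal(0, 1)
    y ~ Normal(m, 1)
end

model = demo()

# Condition `y` on an observed value; the `|` operator is
# shorthand for calling `condition`.
cm = model | (; y = 0.5)
conditioned(cm)        # the values `cm` is conditioned on
dm = decondition(cm)   # remove the conditioning again

# `fix` pins a variable to a value without treating it as an
# observation (it contributes no likelihood term).
fm = fix(model, (; m = 0.0))
```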

Point estimates

See the mode estimation tutorial for more information.

| Exported symbol | Documentation | Description |
|:--- |:--- |:--- |
| `maximum_a_posteriori` | `Turing.Optimisation.maximum_a_posteriori` | Find a MAP estimate for a model |
| `maximum_likelihood` | `Turing.Optimisation.maximum_likelihood` | Find an MLE estimate for a model |
| `MAP` | `Turing.Optimisation.MAP` | Type to use with Optim.jl for MAP estimation |
| `MLE` | `Turing.Optimisation.MLE` | Type to use with Optim.jl for MLE estimation |
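The two convenience functions take a model directly; a minimal sketch with an illustrative model:

```julia
using Turing

@model function demo(y)
    m ~ Normal(0, 1)
    y ~ Normal(m, 1)
end

model = demo(0.5)

mle_estimate = maximum_likelihood(model)
map_estimate = maximum_a_posteriori(model)
```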