API: Turing.Inference
Turing.Inference.CSMC — Type
CSMC(...)
Equivalent to PG.
Turing.Inference.ESS — Type
ESS()
Elliptical slice sampling algorithm.
Examples
julia> @model function gdemo(x)
m ~ Normal()
x ~ Normal(m, 0.5)
end
gdemo (generic function with 2 methods)
julia> sample(gdemo(1.0), ESS(), 1_000) |> mean
Mean
│ Row │ parameters │ mean │
│ │ Symbol │ Float64 │
├─────┼────────────┼──────────┤
│ 1 │ m │ 0.824853 │
Turing.Inference.Emcee — Type
Emcee(n_walkers::Int, stretch_length=2.0)
Affine-invariant ensemble sampling algorithm.
Reference
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer. Publications of the Astronomical Society of the Pacific, 125 (925), 306. https://doi.org/10.1086/670067
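As a usage sketch (the model and draw count are illustrative, not part of the reference above), an Emcee ensemble with 10 walkers might be run as:

```julia
using Turing

@model function gdemo(x)
    m ~ Normal()
    x ~ Normal(m, 0.5)
end

# 10 walkers with the default stretch length of 2.0; each walker
# contributes its own sequence of draws to the resulting chain.
chain = sample(gdemo(1.0), Emcee(10, 2.0), 1_000)
```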
Turing.Inference.ExternalSampler — Type
ExternalSampler{Unconstrained,S<:AbstractSampler,AD<:ADTypes.AbstractADType}
Represents a sampler that does not have a custom implementation of AbstractMCMC.step(rng, ::DynamicPPL.Model, spl).
The Unconstrained type parameter indicates whether the sampler requires unconstrained space.
Fields
sampler::AbstractMCMC.AbstractSampler: the sampler to wrap
adtype::ADTypes.AbstractADType: the automatic differentiation (AD) backend to use
Turing.jl's interface for external samplers
If you implement a new MySampler <: AbstractSampler and want it to work with Turing.jl models, there are two options:
1. Directly implement the AbstractMCMC.step methods for DynamicPPL.Model. That is to say, implement AbstractMCMC.step(rng::Random.AbstractRNG, model::DynamicPPL.Model, sampler::MySampler; kwargs...) and related methods. This is the most powerful option and is what Turing.jl's in-house samplers do. Implementing this means that you can directly call sample(model, MySampler(), N).
2. Implement a generic AbstractMCMC.step method for AbstractMCMC.LogDensityModel (the same signature as above, except that model::AbstractMCMC.LogDensityModel). This struct wraps an object that obeys the LogDensityProblems.jl interface, so your step implementation does not need to know anything about Turing.jl or DynamicPPL.jl. To use this with Turing.jl, you will need to wrap your sampler: sample(model, externalsampler(MySampler()), N).
This section describes the latter.
MySampler must implement the following methods:
AbstractMCMC.step: the main function for taking a step in MCMC sampling; this is documented in AbstractMCMC.jl. This function must return a tuple of two elements, a 'transition' and a 'state'.
AbstractMCMC.getparams(external_state): How to extract the parameters from the state returned by your sampler (i.e., the second return value of step). For your sampler to work with Turing.jl, this function should return a Vector of parameter values. Note that this function does not need to perform any linking or unlinking; Turing.jl will take care of this for you. You should return the parameters exactly as your sampler sees them.
AbstractMCMC.getstats(external_state): Extract sampler statistics corresponding to this iteration from the state returned by your sampler (i.e., the second return value of step). For your sampler to work with Turing.jl, this function should return a NamedTuple. If there are no statistics to return, return NamedTuple(). Note that getstats should not include log-probabilities, as these will be recalculated by Turing automatically for you.
Notice that both of these functions take the state as input, not the transition. In other words, the transition is completely useless for the external sampler interface. This is in line with long-term plans for removing transitions from AbstractMCMC.jl and only using states.
There are a few more optional functions which you can implement to improve the integration with Turing.jl:
AbstractMCMC.requires_unconstrained_space(::MySampler): If your sampler requires unconstrained space, you should return true. This tells Turing to perform linking on the VarInfo before evaluation, and ensures that the parameter values passed to your sampler will always be in unconstrained (Euclidean) space.
Turing.Inference.isgibbscomponent(::MySampler): If you want to disallow your sampler from being used as a component in Turing's Gibbs sampler, you should make this evaluate to false. Note that the default is true, so you should only need to implement this in special cases.
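As a minimal sketch of this interface (the sampler, its state type, and the random-walk proposal here are all hypothetical; only the method names come from the interface above):

```julia
using AbstractMCMC, Random, LogDensityProblems

# Hypothetical random-walk Metropolis sampler.
struct MySampler <: AbstractMCMC.AbstractSampler
    step_size::Float64
end

# The state carries the current position and its log-density.
struct MyState
    params::Vector{Float64}
    logp::Float64
end

# Initial step: draw a starting point and evaluate its log-density.
function AbstractMCMC.step(
    rng::Random.AbstractRNG,
    model::AbstractMCMC.LogDensityModel,
    spl::MySampler;
    kwargs...,
)
    ℓ = model.logdensity
    x = randn(rng, LogDensityProblems.dimension(ℓ))
    state = MyState(x, LogDensityProblems.logdensity(ℓ, x))
    return state, state  # (transition, state)
end

# Subsequent steps: propose, then accept or reject.
function AbstractMCMC.step(
    rng::Random.AbstractRNG,
    model::AbstractMCMC.LogDensityModel,
    spl::MySampler,
    state::MyState;
    kwargs...,
)
    ℓ = model.logdensity
    proposal = state.params .+ spl.step_size .* randn(rng, length(state.params))
    logp_new = LogDensityProblems.logdensity(ℓ, proposal)
    newstate = log(rand(rng)) < logp_new - state.logp ? MyState(proposal, logp_new) : state
    return newstate, newstate
end

# The two accessors required by Turing.jl's external-sampler interface.
# Note: no log-probabilities in getstats; Turing recomputes those itself.
AbstractMCMC.getparams(state::MyState) = state.params
AbstractMCMC.getstats(state::MyState) = NamedTuple()
```

With these methods in place, the sampler would be used via sample(model, externalsampler(MySampler(0.1)), N).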
Turing.Inference.Gibbs — Type
Gibbs
A type representing a Gibbs sampler.
Constructors
Gibbs needs to be given a set of pairs of variable names and samplers. Instead of a single variable name per sampler, one can also give an iterable of variables, all of which are sampled by the same component sampler.
Each variable name can be given as either a Symbol or a VarName.
Some examples of valid constructors are:
Gibbs(:x => NUTS(), :y => MH())
Gibbs(@varname(x) => NUTS(), @varname(y) => MH())
Gibbs((@varname(x), :y) => NUTS(), :z => MH())
Fields
varnames::NTuple{N, AbstractVector{<:AbstractPPL.VarName}} where N: varnames representing variables for each sampler
samplers::NTuple{N, Any} where N: samplers for each entry in varnames
Turing.Inference.GibbsConditional — Type
GibbsConditional(get_cond_dists)
A Gibbs component sampler that samples variables according to user-provided analytical conditional posterior distributions.
When using Gibbs sampling, one may sometimes know the analytical form of the posterior for a given variable, conditional on the values of the other variables. In such cases one can use GibbsConditional as a component sampler to sample from these known conditionals directly, avoiding any MCMC methods. One does so with
sampler = Gibbs(
(@varname(var1), @varname(var2)) => GibbsConditional(get_cond_dists),
# other samplers go here...
)
Here get_cond_dists(vnt::VarNamedTuple) should be a function that takes a VarNamedTuple that contains the values of all other variables (apart from var1 and var2), and returns the conditional posterior distributions for var1 and var2.
VarNamedTuples behave very similarly to Dict{VarName,Any}s, but are more efficient and more general: you can obtain values simply by using, e.g. vnt[@varname(var3)]. See https://turinglang.org/docs/usage/varnamedtuple/ for more details on VarNamedTuples.
You may, of course, have any number of variables being sampled as a block in this manner; we only use two as an example.
The return value of get_cond_dists(vnt) should be one of the following:
- A single Distribution, if only one variable is being sampled.
- A VarNamedTuple of Distributions, which represents a mapping from variable names to their conditional posteriors. Please see the documentation linked above for information on how to construct VarNamedTuples.
For convenience, we also allow the following return values (which are internally converted into a VarNamedTuple):
- A NamedTuple of Distributions, which is like the AbstractDict case but can be used if all the variable names are single Symbols, e.g. (; var1=dist1, var2=dist2).
- An AbstractDict{<:VarName,<:Distribution} that maps the variables being sampled to their conditional posteriors, e.g. Dict(@varname(var1) => dist1, @varname(var2) => dist2).
Note that the AbstractDict case is likely to incur a performance penalty; we recommend using VarNamedTuples directly.
Examples
using Turing
# Define a model
@model function inverse_gdemo(x)
precision ~ Gamma(2, inv(3))
std = sqrt(1 / precision)
m ~ Normal(0, std)
for i in eachindex(x)
x[i] ~ Normal(m, std)
end
end
# Define analytical conditionals. See
# https://en.wikipedia.org/wiki/Conjugate_prior#When_likelihood_function_is_a_continuous_distribution
function cond_precision(vnt)
a = 2.0
b = 3.0
m = vnt[@varname(m)]
x = vnt[@varname(x)]
n = length(x)
a_new = a + (n + 1) / 2
b_new = b + sum(abs2, x .- m) / 2 + m^2 / 2
return Gamma(a_new, 1 / b_new)
end
function cond_m(vnt)
precision = vnt[@varname(precision)]
x = vnt[@varname(x)]
n = length(x)
m_mean = sum(x) / (n + 1)
m_var = 1 / (precision * (n + 1))
return Normal(m_mean, sqrt(m_var))
end
# Sample using GibbsConditional
model = inverse_gdemo([1.0, 2.0, 3.0])
chain = sample(model, Gibbs(
:precision => GibbsConditional(cond_precision),
:m => GibbsConditional(cond_m)
), 1000)
Turing.Inference.GibbsContext — Type
GibbsContext(target_varnames, global_varinfo, context)
A context used in the implementation of the Turing.jl Gibbs sampler.
There will be one GibbsContext for each iteration of a component sampler.
target_varnames is a tuple of VarNames that the current component sampler is sampling. For those VarNames, GibbsContext will just pass tilde_assume!! calls to its child context. For other variables, their values will be fixed to the values they have in global_varinfo.
Fields
target_varnames: the VarNames being sampled
global_varinfo: a Ref to the global AbstractVarInfo object that holds values for all variables, both those fixed and those being sampled. We use a Ref because this field may need to be updated if new variables are introduced.
context: the child context that tilde calls will eventually be passed onto.
Turing.Inference.HMC — Type
HMC(ϵ::Float64, n_leapfrog::Int; adtype::ADTypes.AbstractADType = AutoForwardDiff())
Hamiltonian Monte Carlo sampler with static trajectory.
Arguments
ϵ: The leapfrog step size to use.
n_leapfrog: The number of leapfrog steps to use.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Usage
HMC(0.05, 10)
Tips
If you are receiving gradient errors when using HMC, try reducing the leapfrog step size ϵ, e.g.
# Original step size
sample(gdemo([1.5, 2]), HMC(0.1, 10), 1000)
# Reduced step size
sample(gdemo([1.5, 2]), HMC(0.01, 10), 1000)
Turing.Inference.HMCDA — Type
HMCDA(
n_adapts::Int, δ::Float64, λ::Float64; ϵ::Float64 = 0.0,
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Hamiltonian Monte Carlo sampler with Dual Averaging algorithm.
Usage
HMCDA(200, 0.65, 0.3)
Arguments
n_adapts: Number of samples to use for adaptation.
δ: Target acceptance rate. 65% is often recommended.
λ: Target leapfrog length.
ϵ: Initial step size; 0 means automatic search by Turing.
adtype: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Reference
For more information, please view the following paper (arXiv link):
Hoffman, Matthew D., and Andrew Gelman. "The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research 15, no. 1 (2014): 1593-1623.
Turing.Inference.InitFromProposals — Type
InitFromProposals(proposals::VarNamedTuple, verbose::Bool)
An initialisation strategy that samples variables from user-defined proposal distributions. If a proposal distribution is not found in proposals, then we defer to sampling from the prior.
Turing.Inference.LinkedRW — Type
LinkedRW(cov_matrix)
Define a random-walk proposal in linked space with the given covariance matrix. Note that the size of the covariance matrix must correspond exactly to the size of the variable in linked space.
LinkedRW(variance::Real)
If a Real variance is provided, LinkedRW will generate a covariance matrix of variance * LinearAlgebra.I.
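As a sketch of the two constructors above (the model is illustrative), the following specify the same unit-variance random walk in linked space for a scalar variable:

```julia
using Turing
using LinearAlgebra: Diagonal

@model function demo()
    s ~ InverseGamma(2, 3)  # positive-constrained, so linking matters
end

# Explicit 1x1 covariance matrix, and the scalar-variance shorthand:
spl_matrix = MH(@varname(s) => LinkedRW(Diagonal([1.0])))
spl_scalar = MH(@varname(s) => LinkedRW(1.0))
```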
Turing.Inference.MH — Type
MH(vn1 => proposal1, vn2 => proposal2, ...)
Construct a Metropolis-Hastings algorithm.
Each argument proposal can be
- Blank (i.e. MH()), in which case MH defaults to using the prior for each parameter as the proposal distribution.
- A mapping of VarNames to a Distribution, LinkedRW, or a generic callable that defines a conditional proposal distribution.
MH(cov_matrix)
Construct a Metropolis-Hastings algorithm that performs random-walk sampling in linked space, with proposals drawn from a multivariate normal distribution with the given covariance matrix.
Examples
Consider the model below:
@model function gdemo()
s ~ InverseGamma(2,3)
m ~ Normal(0, sqrt(s))
1.5 ~ Normal(m, sqrt(s))
2.0 ~ Normal(m, sqrt(s))
end
The default constructor, MH(), uses the prior distributions as proposals. So, new proposals are obtained by sampling s from InverseGamma(2,3) and m from Normal(0, sqrt(s)).
spl = MH()
Alternatively, a mapping of variable names to proposal distributions can be provided. This implies the use of static proposals for each variable. If a variable is not specified, its prior distribution is used as the proposal.
# Use a static proposal for s (which happens to be the same as the prior) and a static
# proposal for m (note that this isn't a random walk proposal).
spl = MH(
# This happens to be the same as the prior
@varname(s) => InverseGamma(2, 3),
# This is different from the prior
@varname(m) => Normal(0, 1),
)
If the VarName of interest is a single symbol, you can also use a Symbol instead.
spl = MH(
:s => InverseGamma(2, 3),
:m => Normal(0, 1),
)
You can also use a callable to define a proposal that is conditional on the current values. The callable must accept a single argument, which is a DynamicPPL.VarNamedTuple that holds all the values of the parameters from the previous step. You can obtain the value of a specific parameter by indexing into this VarNamedTuple using a VarName (note that symbol indexing is not supported). The callable must then return a Distribution from which to draw the proposal.
In general, there is no way for Turing to reliably detect whether a proposal is meant to be a callable or not, since callable structs may have any type. Hence, any proposal that is not a distribution is assumed to be a callable.
spl = MH(
# This is a static proposal (same as above).
@varname(s) => InverseGamma(2, 3),
# This is a conditional proposal, which proposes m from a normal
# distribution centred at the current value of m, with a standard
# deviation of 0.5.
@varname(m) => (vnt -> Normal(vnt[@varname(m)], 0.5)),
)
Note that when using conditional proposals, the values obtained by indexing into the VarNamedTuple are always in untransformed space, i.e., constrained to the support of the distribution. Sometimes you may want to define a random-walk proposal in unconstrained (i.e. 'linked') space instead. For this, you can use LinkedRW as a proposal, which takes a covariance matrix as an argument:
using LinearAlgebra: Diagonal
spl = MH(
@varname(s) => InverseGamma(2, 3),
@varname(m) => LinkedRW(Diagonal([0.25]))
)
In the above example, LinkedRW(Diagonal([0.25])) defines a random-walk proposal for m in linked space. This is in fact the same as the conditional proposal above, because m is already unconstrained, and so the unconstraining transformation is the identity.
However, s is constrained to be positive, and so using a LinkedRW proposal for s would be different from using a normal proposal in untransformed space (LinkedRW will ensure that the proposals for s always remain positive in untransformed space).
spl = MH(
@varname(s) => LinkedRW(Diagonal([0.5])),
@varname(m) => LinkedRW(Diagonal([0.25])),
)
Finally, providing just a single covariance matrix will cause MH to perform random-walk sampling in linked space with proposals drawn from a multivariate normal distribution. All variables are linked in this case. The provided matrix must be positive semi-definite and square. This example is therefore equivalent to the previous one:
# Providing a custom variance-covariance matrix
spl = MH(
[0.50 0;
0 0.25]
)
Turing.Inference.NUTS — Type
NUTS(n_adapts::Int, δ::Float64; max_depth::Int=10, Δ_max::Float64=1000.0, init_ϵ::Float64=0.0, adtype::ADTypes.AbstractADType=AutoForwardDiff())
No-U-Turn Sampler (NUTS) sampler.
Usage:
NUTS() # Use default NUTS configuration.
NUTS(1000, 0.65) # Use 1000 adaptation steps, and target accept ratio 0.65.
Arguments:
n_adapts::Int: The number of samples to use with adaptation.
δ::Float64: Target acceptance rate for dual averaging.
max_depth::Int: Maximum doubling tree depth.
Δ_max::Float64: Maximum divergence during doubling tree.
init_ϵ::Float64: Initial step size; 0 means automatically searching using a heuristic procedure.
adtype::ADTypes.AbstractADType: The automatic differentiation (AD) backend. If not specified, ForwardDiff is used, with its chunksize automatically determined.
Turing.Inference.PG — Type
struct PG{R} <: Turing.Inference.ParticleInference
Particle Gibbs sampler.
Fields
nparticles::Int64: Number of particles.
resampler::Any: Resampling algorithm.
Turing.Inference.PG — Method
PG(n, [resampler = AdvancedPS.ResampleWithESSThreshold()]) PG(n, [resampler = AdvancedPS.resample_systematic, ]threshold)
Create a Particle Gibbs sampler of type PG with n particles.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.
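As a usage sketch (the model and particle count are illustrative):

```julia
using Turing

@model function coinflips(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

# Particle Gibbs with 20 particles and the default ESS-threshold resampler.
chain = sample(coinflips([true, false, true]), PG(20), 500)
```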
Turing.Inference.PolynomialStepsize — Method
PolynomialStepsize(a[, b=0, γ=0.55])Create a polynomially decaying stepsize function.
At iteration t, the step size is
\[a (b + t)^{-γ}.\]
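To illustrate the decay (this assumes the returned stepsize object is callable on the iteration number, as it is when SGLD uses it; the numbers are just worked instances of the formula above):

```julia
using Turing

step = Turing.Inference.PolynomialStepsize(0.01)  # a = 0.01, b = 0, γ = 0.55

# At iteration t, the step size is a * (b + t)^(-γ):
step(1)    # 0.01 * 1^(-0.55)  = 0.01
step(100)  # 0.01 * 100^(-0.55), roughly 8e-4
```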
Turing.Inference.Prior — Type
Prior()Algorithm for sampling from the prior.
Turing.Inference.ProduceLogLikelihoodAccumulator — Type
ProduceLogLikelihoodAccumulator{T<:Real} <: AbstractAccumulator
Exactly like LogLikelihoodAccumulator, but calls Libtask.produce on change of value.
Fields
logp::Real: the scalar log likelihood value
Turing.Inference.RepeatSampler — Type
RepeatSampler <: AbstractMCMC.AbstractSampler
A RepeatSampler is a container for a sampler and a number of times to repeat it.
Fields
sampler: The sampler to repeat.
num_repeat: The number of times to repeat the sampler.
Examples
repeated_sampler = RepeatSampler(sampler, 10)
AbstractMCMC.step(rng, model, repeated_sampler) # take 10 steps of `sampler`
Turing.Inference.SGHMC — Type
SGHMC{AD}
Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
Fields
learning_rate::Real
momentum_decay::Real
adtype::Any
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
Turing.Inference.SGHMC — Method
SGHMC(;
learning_rate::Real,
momentum_decay::Real,
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Create a Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) sampler.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Tianqi Chen, Emily Fox, & Carlos Guestrin (2014). Stochastic Gradient Hamiltonian Monte Carlo. In: Proceedings of the 31st International Conference on Machine Learning (pp. 1683–1691).
Turing.Inference.SGLD — Type
SGLD
Stochastic gradient Langevin dynamics (SGLD) sampler.
Fields
stepsize::Any: Step size function.
adtype::Any
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
Turing.Inference.SGLD — Method
SGLD(;
stepsize = PolynomialStepsize(0.01),
adtype::ADTypes.AbstractADType = AutoForwardDiff(),
)
Stochastic gradient Langevin dynamics (SGLD) sampler.
By default, a polynomially decaying stepsize is used.
If the automatic differentiation (AD) backend adtype is not provided, ForwardDiff with automatically determined chunksize is used.
Reference
Max Welling & Yee Whye Teh (2011). Bayesian Learning via Stochastic Gradient Langevin Dynamics. In: Proceedings of the 28th International Conference on Machine Learning (pp. 681–688).
See also: PolynomialStepsize
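A usage sketch combining SGLD with a polynomial stepsize (the model and stepsize value are illustrative):

```julia
using Turing

@model function gdemo(x)
    m ~ Normal()
    x ~ Normal(m, 0.5)
end

# SGLD with a polynomially decaying stepsize starting at 0.001.
spl = SGLD(; stepsize=Turing.Inference.PolynomialStepsize(0.001))
chain = sample(gdemo(1.0), spl, 2_000)
```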
Turing.Inference.SMC — Type
struct SMC{R} <: Turing.Inference.ParticleInference
Sequential Monte Carlo sampler.
Fields
resampler::Any
Turing.Inference.SMC — Method
SMC([resampler = AdvancedPS.ResampleWithESSThreshold()])
SMC([resampler = AdvancedPS.resample_systematic, ]threshold)
Create a sequential Monte Carlo sampler of type SMC.
If the algorithm for the resampling step is not specified explicitly, systematic resampling is performed if the estimated effective sample size per particle drops below 0.5.
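A usage sketch with the default resampler (the model is illustrative):

```julia
using Turing

@model function gdemo(x)
    m ~ Normal()
    x ~ Normal(m, 0.5)
end

# Default: systematic resampling is triggered whenever the estimated
# effective sample size per particle drops below 0.5.
chain = sample(gdemo(1.0), SMC(), 1_000)
```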
Turing.Inference._convert_initial_params — Method
_convert_initial_params(initial_params)
Convert initial_params to a DynamicPPL.AbstractInitStrategy if it is not already one, or throw a useful error message.
Turing.Inference.build_values_vnt — Method
build_values_vnt(model::DynamicPPL.Model)
Traverse the context stack of model and build a VarNamedTuple of all the variable values that are set in GibbsContext, ConditionContext, or FixedContext.
Turing.Inference.externalsampler — Method
externalsampler(
sampler::AbstractSampler;
adtype=AutoForwardDiff(),
unconstrained=AbstractMCMC.requires_unconstrained_space(sampler),
)
Wrap a sampler so it can be used as an inference algorithm.
Arguments
sampler::AbstractSampler: The sampler to wrap.
Keyword Arguments
adtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff(): The automatic differentiation (AD) backend to use.
unconstrained::Bool=AbstractMCMC.requires_unconstrained_space(sampler): Whether the sampler requires unconstrained space.
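As a sketch, wrapping a random-walk sampler from AdvancedMH.jl (this assumes AdvancedMH is installed; the model and proposal are illustrative):

```julia
using Turing
using AdvancedMH
using Distributions: MvNormal
using LinearAlgebra: I

@model function gdemo(x)
    m ~ Normal()
    x ~ Normal(m, 0.5)
end

# RWMH from AdvancedMH.jl works against the LogDensityProblems
# interface, so it can be wrapped directly for use with Turing models.
spl = externalsampler(AdvancedMH.RWMH(MvNormal(zeros(1), I)))
chain = sample(gdemo(1.0), spl, 1_000)
```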
Turing.Inference.get_trace_local_resampled — Method
get_trace_local_resampled()
Get the resample flag stored in the 'taped globals' of a Libtask.TapedTask.
This indicates whether new variable values should be sampled from the prior or not. For example, in SMC, this is true for all particles; in PG, this is true for all particles except the reference particle, whose trajectory must be reproduced exactly.
This function is meant to be called from inside the TapedTask itself.
Turing.Inference.get_trace_local_rng — Method
get_trace_local_rng()
Get the RNG stored in the 'taped globals' of a Libtask.TapedTask, if one exists.
This function is meant to be called from inside the TapedTask itself.
Turing.Inference.get_trace_local_varinfo — Method
get_trace_local_varinfo()
Get the varinfo stored in the 'taped globals' of a Libtask.TapedTask. This function is meant to be called from inside the TapedTask itself.
Turing.Inference.gibbs_initialstep_recursive — Function
Take the first step of MCMC for the first component sampler, and call the same function recursively on the remaining samplers, until no samplers remain. Return the global VarInfo and a tuple of initial states for all component samplers.
The step_function argument should always be either AbstractMCMC.step or AbstractMCMC.step_warmup.
Turing.Inference.gibbs_step_recursive — Function
Run a Gibbs step for the first varname/sampler/state tuple, and recursively call the same function on the tail, until there are no more samplers left.
The step_function argument should always be either AbstractMCMC.step or AbstractMCMC.step_warmup.
Turing.Inference.init_strategy — Method
Turing.Inference.init_strategy(spl::AbstractSampler)
Get the default initialization strategy for a given sampler spl, i.e. how initial parameters for sampling are chosen if not specified by the user. By default, this is InitFromPrior(), which samples initial parameters from the prior distribution.
Turing.Inference.isgibbscomponent — Method
isgibbscomponent(spl::AbstractSampler)
Return a boolean indicating whether spl is a valid component for a Gibbs sampler.
Defaults to true if no method has been defined for a particular sampler.
Turing.Inference.loadstate — Method
loadstate(chain::MCMCChains.Chains)
Load the final state of the sampler from an MCMCChains.Chains object.
To save the final state of the sampler, you must use sample(...; save_state=true). If this argument was not used during sampling, calling loadstate will throw an error.
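A save-and-resume sketch (this assumes the generic AbstractMCMC initial_state keyword is available for continuing from a loaded state; the model is illustrative):

```julia
using Turing

@model function gdemo(x)
    m ~ Normal()
    x ~ Normal(m, 0.5)
end

model = gdemo(1.0)

# Save the sampler state alongside the chain...
chain = sample(model, MH(), 500; save_state=true)

# ...then extract it and continue sampling from where we left off.
state = Turing.Inference.loadstate(chain)
chain2 = sample(model, MH(), 500; initial_state=state)
```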
Turing.Inference.log_proposal_density — Method
log_proposal_density(
old_vi::DynamicPPL.AbstractVarInfo,
init_strategy_given_new::DynamicPPL.AbstractInitStrategy,
old_unspecified_priors::DynamicPPL.VarNamedTuple
)
Calculate the log proposal density log g(x | x'), where g is the proposal distribution used to generate x (represented by old_vi), given the new state x'.
If the arguments are switched (i.e., new_vi is passed as the first argument, and init_strategy_given_old as the second), the function calculates g(x'|x).
The log-density of the proposal distribution is calculated by summing up the contributions from:
- any variables that have an explicit proposal in init_strategy_given_new (i.e., those in spl.vns_with_proposal), which can be either static or conditional proposals; and
- any variables that do not have an explicit proposal, for which we defer to their prior distributions.
Turing.Inference.make_conditional — Method
make_conditional(model, target_variables, varinfo)
Return a new, conditioned model for a component of a Gibbs sampler.
Arguments
model::DynamicPPL.Model: The model to condition.
target_variables::AbstractVector{<:VarName}: The target variables of the component sampler. These will not be conditioned.
varinfo::DynamicPPL.AbstractVarInfo: Values for all variables in the model. All the values in varinfo but not in target_variables will be conditioned to the values they have in varinfo.
Returns
- A new model with the variables not in target_variables conditioned.
- The GibbsContext object that will be used to condition the variables. This is necessary because evaluation can mutate its global_varinfo field, which we need to access later.
Turing.Inference.match_linking!! — Method
match_linking!!(varinfo_local, prev_state_local, model)
Make sure the linked/invlinked status of varinfo_local matches that of the previous state for this sampler. This is relevant when multiple samplers are sampling the same variables, and one might need it to be linked while the other doesn't.
Turing.Inference.set_trace_local_varinfo — Method
set_trace_local_varinfo(vi::AbstractVarInfo)
Set the varinfo stored in Libtask's taped globals. The 'other' taped global in Libtask is expected to be an AdvancedPS.Trace.
Returns nothing.
This function is meant to be called from inside the TapedTask itself.
Turing.Inference.setparams_varinfo!! — Method
setparams_varinfo!!(model::DynamicPPL.Model, sampler::AbstractSampler, state, params::AbstractVarInfo)
A lot like AbstractMCMC.setparams!!, but instead of taking a vector of parameters, takes an AbstractVarInfo object. Also takes the sampler as an argument. By default, falls back to AbstractMCMC.setparams!!(model, state, params[:]).