API: Turing.Optimisation
DynamicPPL.InitFromParams — Type
InitFromParams(
m::ModeResult,
fallback::Union{AbstractInitStrategy,Nothing}=InitFromPrior()
)

Initialize a model from the parameters stored in a ModeResult. The fallback is used if some parameters are missing from the ModeResult.
Turing.Optimisation.InitWithConstraintCheck — Type
InitWithConstraintCheck(lb, ub, actual_strategy) <: AbstractInitStrategy

Initialise parameters with actual_strategy, but check that the initialised parameters satisfy any bounds in lb and ub.
Turing.Optimisation.MAP — Type
MAP <: ModeEstimator

Concrete type for maximum a posteriori estimation.
Turing.Optimisation.MLE — Type
MLE <: ModeEstimator

Concrete type for maximum likelihood estimation.
Turing.Optimisation.ModeEstimator — Type
ModeEstimator

An abstract type to mark whether mode estimation is to be done with maximum a posteriori (MAP) or maximum likelihood estimation (MLE).
Turing.Optimisation.ModeResult — Type
ModeResult{
E<:ModeEstimator,
P<:DynamicPPL.VarNamedTuple,
LP<:Real,
L<:DynamicPPL.LogDensityFunction,
O<:Any,
}

A wrapper struct to store various results from a MAP or MLE estimation.
Fields
estimator::Turing.Optimisation.ModeEstimator: The type of mode estimation (MAP or MLE).

params::VarNamedTuple: Dictionary of parameter values. These values are always provided in unlinked space, even if the optimisation was run in linked space.

lp::Real: The final log likelihood or log joint, depending on whether MAP or MLE was run. Note that this is the actual log probability of the parameters, i.e., not negated. A negated log probability is needed to run the optimisation itself (since the solver performs minimisation), but this is handled in a way that is entirely transparent to the user.

linked::Bool: Whether the optimisation was done in a transformed space.

ldf::LogDensityFunction: The LogDensityFunction used to calculate the output. Note that this LogDensityFunction calculates the actual (non-negated) log density. It should hold that m.lp == LogDensityProblems.logdensity(m.ldf, m.optim_result.u) for a ModeResult m. The objective function used for minimisation is equivalent to p -> -LogDensityProblems.logdensity(m.ldf, p). Note, however, that p has to be provided as a vector in linked or unlinked space depending on the value of m.linked. If m.linked is true, to evaluate the log-density using unlinked parameters, you can use logjoint(m.ldf.model, params) where params is a NamedTuple or Dictionary of unlinked parameters.

optim_result::Any: The stored optimiser results.
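A minimal sketch of how these fields fit together, using a small hypothetical model (maximum_a_posteriori is documented below):

```julia
using Turing

# A small illustrative model: infer the mean and variance of some data.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x .~ Normal(m, sqrt(s²))
end

result = maximum_a_posteriori(gdemo([1.5, 2.0]))

result.lp       # the actual (non-negated) log joint at the mode
result.params   # parameter values, always in unlinked space
result.linked   # whether the optimisation ran in linked space
```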
Turing.Optimisation.ModeResult — Method
ModeResult(
log_density::DynamicPPL.LogDensityFunction,
solution::SciMLBase.OptimizationSolution,
linked::Bool,
estimator::ModeEstimator,
)

Create a ModeResult for a given log_density objective and a solution given by solve. The linked argument indicates whether the optimization was done in a transformed space.
Optimization.solve returns its own result type. This function converts that into the richer format of ModeResult. It also takes care of transforming them back to the original parameter space in case the optimization was done in a transformed space.
StatsAPI.coeftable — Method
StatsBase.coeftable(m::ModeResult; level::Real=0.95, numerrors_warnonly::Bool=true)

Return a table with coefficients and related statistics of the model. level determines the level for confidence intervals (by default, 95%).

If the numerrors_warnonly argument is true (the default), numerical errors encountered during the computation of the standard errors are caught and reported in an extra "Error notes" column.
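For instance, a hedged sketch using a small illustrative coin-flip model:

```julia
using Turing, StatsBase

# Illustrative model: estimate the probability of heads.
@model function coinflip(y)
    p ~ Beta(1, 1)
    y .~ Bernoulli(p)
end

estimate = maximum_likelihood(coinflip([1, 1, 0, 1]))

# 90% confidence intervals instead of the default 95%; numerical errors
# in the standard-error computation are reported rather than thrown.
coeftable(estimate; level=0.90, numerrors_warnonly=true)
```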
StatsAPI.informationmatrix — Method
StatsBase.informationmatrix(
m::ModeResult;
adtype::ADTypes.AbstractADType=ADTypes.AutoForwardDiff()
)

Calculate the Fisher information matrix for the mode result m. This is the negative Hessian of the log-probability at the mode.
The Hessian is calculated using automatic differentiation with the specified adtype. By default this is ADTypes.AutoForwardDiff(). In general, however, it may be more efficient to use forward-over-reverse AD when the model has many parameters. This can be specified using DifferentiationInterface.SecondOrder(outer, inner); please consult the DifferentiationInterface.jl documentation for more details.
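A sketch of supplying a forward-over-reverse second-order backend; the choice of ReverseDiff for the inner pass is illustrative:

```julia
using Turing, StatsBase, ADTypes, DifferentiationInterface
import ReverseDiff

# Illustrative model: estimate the probability of heads.
@model function coinflip(y)
    p ~ Beta(1, 1)
    y .~ Bernoulli(p)
end

m = maximum_a_posteriori(coinflip([1, 1, 0, 1]))

# Outer forward-mode over inner reverse-mode AD for the Hessian.
so = DifferentiationInterface.SecondOrder(AutoForwardDiff(), AutoReverseDiff())
informationmatrix(m; adtype=so)
```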
Turing.Optimisation.estimate_mode — Function
estimate_mode(
[rng::Random.AbstractRNG,]
model::DynamicPPL.Model,
estimator::ModeEstimator,
solver=OptimizationOptimJL.LBFGS();
link::Bool=true,
initial_params=DynamicPPL.InitFromPrior(),
lb::Union{NamedTuple,AbstractDict{<:VarName,<:Any}}=(;),
ub::Union{NamedTuple,AbstractDict{<:VarName,<:Any}}=(;),
adtype::AbstractADType=AutoForwardDiff(),
check_model::Bool=true,
check_constraints_at_runtime::Bool=true,
kwargs...,
)

Find the mode of the probability distribution of a model.
Under the hood this function constructs a LogDensityFunction and calls Optimization.solve on it.
Note that the optimisation interface Turing exposes is a higher-level interface tailored towards probabilistic modelling, so not every option available in Optimization.jl is supported here. In particular, Turing's optimisation interface allows you to:
Provide initial parameters, lower bounds, and upper bounds as mappings of VarNames to values in the original (unlinked) space.

Choose whether to run the optimisation in linked or unlinked space (linked by default). Linked space means that parameters are transformed to unconstrained Euclidean space, which lets you avoid hard edges in the optimisation landscape (i.e., the logpdf suddenly dropping to -Inf outside the support of a variable). It also avoids cases where parameters may not be independent, e.g., x ~ Dirichlet(...), where the components of x must sum to 1.
Turing is responsible for 'translating' these user-friendly specifications into vectorised forms (of initial parameters, lower bounds, and upper bounds) that Optimization.jl can work with.
However, there are cases where this translation can fail or otherwise be ill-defined (specifically when considering constraints). For example, recall that constraints are supplied in unlinked space, but the optimisation is by default run in linked space. Sometimes it is possible to translate constraints from unlinked space to linked space: for example, for x ~ Beta(2, 2), lower bounds in unlinked space can be translated to lower bounds in linked space via the logit transform (specifically, by calling Bijectors.VectorBijectors.to_linked_vec(Beta(2, 2))).
However, if a user supplies a constraint on a Dirichlet variable, there is no well-defined mapping of unlinked constraints to linked space. In such cases, Turing will throw an error (although you can still run in unlinked space). Generic (non-box) constraints also cannot be correctly supported, so Turing's optimisation interface refuses to accept them.
See https://github.com/TuringLang/Turing.jl/issues/2634 for more discussion on the interface and what it supports.
If you need these capabilities, we suggest that you create your own LogDensityFunction and call Optimization.jl directly on it.
Arguments
rng::Random.AbstractRNG: an optional random number generator. This is used only for parameter initialisation; it does not affect the actual optimisation process.

model::DynamicPPL.Model: The model for which to estimate the mode.

estimator::ModeEstimator: Can be either MLE() for maximum likelihood estimation or MAP() for maximum a posteriori estimation.

solver=OptimizationOptimJL.LBFGS(): The optimization algorithm to use. The default solver is L-BFGS, which is a good general-purpose solver that supports box constraints. You can also use any solver supported by Optimization.jl.
Keyword arguments
link::Bool=true: if true, the model parameters are transformed to an unconstrained space for the optimisation. This is generally recommended as it avoids hard edges (i.e., returning a log probability of -Inf outside the support of the parameters), which can lead to NaNs or incorrect results. Note that the returned parameter values are always in the original (unlinked) space, regardless of whether link is true or false.

initial_params::DynamicPPL.AbstractInitStrategy=DynamicPPL.InitFromPrior(): an initialisation strategy for the parameters. By default, parameters are initialised by generating from the prior. The initialisation strategy is always augmented by any constraints provided via lb and ub, in that the initial parameters are guaranteed to lie within the provided bounds.

lb::Union{NamedTuple,AbstractDict{<:VarName,<:Any}}=(;): a mapping from variable names to lower bounds for the optimisation. The bounds should be provided in the original (unlinked) space. Not all constraints are supported by Turing's optimisation interface; see the details above.

ub::Union{NamedTuple,AbstractDict{<:VarName,<:Any}}=(;): a mapping from variable names to upper bounds for the optimisation. The bounds should be provided in the original (unlinked) space. Not all constraints are supported by Turing's optimisation interface; see the details above.

adtype::AbstractADType=AutoForwardDiff(): The automatic differentiation backend to use.

check_model::Bool=true: if true, the model is checked for potential errors before optimisation begins.

check_constraints_at_runtime::Bool=true: if true, the constraints provided via lb and ub are checked at each evaluation of the log probability during optimisation (even though Optimization.jl already has access to these constraints). This can be useful in one specific situation: consider a model where a variable has dynamic support, e.g. y ~ truncated(Normal(); lower=x), where x is another variable in the model. In this case, if the model is run in linked space, the box constraints that Optimization.jl sees may not always be correct, and y may go out of its intended bounds due to changes in x. Enabling this option ensures that such violations are caught and an error is thrown. This check is very cheap, but if you need to squeeze out every last bit of performance and you know you will not hit the edge case above, you can disable it.
Any extra keyword arguments are passed to Optimization.solve.
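Putting the pieces together, a hedged usage sketch (the model and bounds are illustrative):

```julia
using Turing
using Turing.Optimisation: estimate_mode, MAP

# Illustrative model: infer the mean and variance of some data.
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x .~ Normal(m, sqrt(s²))
end

# MAP estimation with a box constraint on m, supplied in unlinked space.
result = estimate_mode(
    gdemo([1.5, 2.0]),
    MAP();
    lb=(; m=-1.0),
    ub=(; m=1.0),
)
```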
Turing.Optimisation.make_optim_bounds_and_init — Method
make_optim_bounds_and_init(
rng::Random.AbstractRNG,
ldf::LogDensityFunction,
initial_params::AbstractInitStrategy,
lb::VarNamedTuple,
ub::VarNamedTuple,
)

Generate a tuple (lb_vec, ub_vec, init_vec) suitable for passing directly to Optimization.jl. All three returned vectors are in unlinked or linked space depending on ldf.transform_strategy, which in turn is determined by the value of link passed to estimate_mode.
The lb and ub arguments, as well as any initial_params provided as InitFromParams, are expected to be in the unlinked space.
Turing.Optimisation.maximum_a_posteriori — Method
maximum_a_posteriori(
[rng::Random.AbstractRNG,]
model::DynamicPPL.Model,
[solver];
kwargs...
)

Find the maximum a posteriori estimate of a model.
This is a convenience function that calls estimate_mode with MAP() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for full details.
Turing.Optimisation.maximum_likelihood — Method
maximum_likelihood(
[rng::Random.AbstractRNG,]
model::DynamicPPL.Model,
[solver];
kwargs...
)

Find the maximum likelihood estimate of a model.
This is a convenience function that calls estimate_mode with MLE() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for full details.
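Both convenience functions can be called directly on a model. A minimal sketch with an illustrative model:

```julia
using Turing

# Illustrative model: infer a mean from observed heights.
@model function heights(y)
    μ ~ Normal(170, 10)
    y .~ Normal(μ, 5)
end

model = heights([168.0, 172.0, 175.0])
map_est = maximum_a_posteriori(model)   # mode of the posterior
mle_est = maximum_likelihood(model)     # mode of the likelihood
```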
Turing.Optimisation.satisfies_constraints — Method
satisfies_constraints(lb, ub, proposed_val, dist)

Check whether proposed_val satisfies the constraints defined by lb and ub.
The methods that this function provides therefore dictate what values users can specify for different types of distributions. For example, for a UnivariateDistribution, the constraints must be supplied as Real numbers. If other kinds of constraints are given, the fallback method is hit and an error is thrown.
This method intentionally does not handle NaN values as that is left to the optimiser to deal with.
Turing.Optimisation.vector_names_and_params — Method
vector_names_and_params(m::ModeResult)

Generate a vectorised form of the optimised parameters stored in the ModeResult, along with the corresponding variable names. These parameters are in unlinked space.
This function returns two vectors: the first contains the variable names, and the second contains the corresponding values.