API: Turing.Optimisation
SciMLBase.OptimizationProblem — Method
OptimizationProblem(log_density::OptimLogDensity, adtype, constraints)
Create an OptimizationProblem for the objective function defined by log_density.
Note that the adtype parameter here overrides any adtype parameter the OptimLogDensity was constructed with.
Turing.Optimisation.MAP — Type
MAP <: ModeEstimator
Concrete type for maximum a posteriori estimation. Only used for the Optim.jl interface.
Turing.Optimisation.MLE — Type
MLE <: ModeEstimator
Concrete type for maximum likelihood estimation. Only used for the Optim.jl interface.
Turing.Optimisation.ModeEstimationConstraints — Type
ModeEstimationConstraints
A struct that holds constraints for mode estimation problems.
The fields are the same as the constraints supported by Optimization.jl: lb and ub specify the lower and upper bounds of box constraints. cons is a function that takes the parameters of the model and returns a list of derived quantities, which are then constrained by the lower and upper bounds set by lcons and ucons. We refer to these as generic constraints. Please see the documentation of Optimization.jl for more details.
Any of the fields can be nothing, disabling the corresponding constraints.
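As an illustration, here is a minimal sketch of supplying box constraints through the mode estimation interface (the toy model, data, and bounds below are assumptions made up for this example, not part of the API):
using Turing

@model function demo(x)
    a ~ Normal(0, 1)
    x ~ Normal(a, 1)
end
model = demo(0.5)

# lb and ub are collected into a ModeEstimationConstraints and restrict the
# search for the MLE of a to the interval [-1, 1].
constrained_mle = maximum_likelihood(model; lb=[-1.0], ub=[1.0])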
Turing.Optimisation.ModeEstimator — Type
ModeEstimator
An abstract type to mark whether mode estimation is to be done with maximum a posteriori (MAP) or maximum likelihood estimation (MLE). This is only needed for the Optim.jl interface.
Turing.Optimisation.ModeResult — Type
ModeResult{
    V<:NamedArrays.NamedArray,
    M<:NamedArrays.NamedArray,
    O<:Optim.MultivariateOptimizationResults,
    S<:NamedArrays.NamedArray
}
A wrapper struct to store various results from a MAP or MLE estimation.
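Continuing the toy demo model from the constraints sketch above, a brief sketch of typical use (the field names values and optim_result are assumptions about the wrapper's layout, made for illustration):
mode_estimate = maximum_a_posteriori(model)

mode_estimate.values        # assumed: NamedArray of estimates, indexable by variable name
mode_estimate.values[:a]    # assumed: the estimate for the variable a
mode_estimate.optim_result  # assumed: the raw solution object returned by the optimizer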
Turing.Optimisation.ModeResult — Method
ModeResult(log_density::OptimLogDensity, solution::SciMLBase.OptimizationSolution)
Create a ModeResult for a given log_density objective and a solution given by solve.
Optimization.solve returns its own result type. This function converts that into the richer format of ModeResult. It also takes care of transforming the parameter values back to the original parameter space in case the optimization was done in a transformed space.
Turing.Optimisation.OptimLogDensity — Type
OptimLogDensity{
    M<:DynamicPPL.Model,
    V<:DynamicPPL.VarInfo,
    C<:OptimizationContext,
    AD<:ADTypes.AbstractADType
}
A struct that wraps a single LogDensityFunction. It can be constructed with either
OptimLogDensity(model, varinfo, ctx; adtype=adtype)
or
OptimLogDensity(model, ctx; adtype=adtype)
If not specified, adtype defaults to AutoForwardDiff().
An OptimLogDensity does not, in itself, obey the LogDensityProblems interface. Thus, if you want to calculate the log density of its contents at the point z, you should manually call
LogDensityProblems.logdensity(f.ldf, z)
However, it is a callable object which returns the negative log density of the underlying LogDensityFunction at the point z. This is done to satisfy the Optim.jl interface.
optim_ld = OptimLogDensity(model, varinfo, ctx)
optim_ld(z)  # returns -logp
Turing.Optimisation.OptimLogDensity — Method
(f::OptimLogDensity)(z)
(f::OptimLogDensity)(z, _)
Evaluate the negative log joint or log likelihood at the array z. Which one is evaluated depends on the context of f.
Any second argument is ignored. The two-argument method only exists to match the interface required by Optimization.jl.
Turing.Optimisation.OptimizationContext — Type
OptimizationContext{C<:AbstractContext} <: AbstractContext
The OptimizationContext transforms variables to their constrained space, but does not use the density with respect to the transformation. This context is intended to allow an optimizer to sample in R^n freely.
Base.get — Method
Base.get(m::ModeResult, var_symbol::Symbol)
Base.get(m::ModeResult, var_symbols::AbstractVector{Symbol})
Return the values of all the variables with the symbol(s) var_symbol in the mode result m. The return value is a NamedTuple with var_symbols as the key(s). The second argument should be either a Symbol or a vector of Symbols.
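For example, continuing the toy demo model above (a sketch for illustration only):
mode_estimate = maximum_likelihood(model)

get(mode_estimate, :a)    # NamedTuple keyed by :a, e.g. (a = ...,)
get(mode_estimate, [:a])  # same result, requested via a vector of Symbols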
StatsAPI.coeftable — Method
StatsBase.coeftable(m::ModeResult; level::Real=0.95, numerrors_warnonly::Bool=true)
Return a table with coefficients and related statistics of the model. level determines the level for confidence intervals (by default, 95%).
If the numerrors_warnonly argument is true (the default), numerical errors encountered during the computation of the standard errors are caught and reported in an extra "Error notes" column.
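A brief sketch of typical use, again assuming the toy demo model and the mode_estimate obtained in the sketches above:
using StatsBase

# One row per parameter: estimate, standard error, z statistic, p-value,
# and the bounds of the requested confidence interval (here 90%).
coeftable(mode_estimate; level=0.90)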
Turing.Optimisation.estimate_mode — Function
estimate_mode(
    model::DynamicPPL.Model,
    estimator::ModeEstimator,
    [solver];
    kwargs...
)
Find the mode of the probability distribution of a model.
Under the hood this function calls Optimization.solve.
Arguments
- model::DynamicPPL.Model: The model for which to estimate the mode.
- estimator::ModeEstimator: Can be either MLE() for maximum likelihood estimation or MAP() for maximum a posteriori estimation.
- solver=nothing: The optimization algorithm to use. Optional. Can be any solver recognised by Optimization.jl. If omitted, a default solver is used: LBFGS, or IPNewton if non-box constraints are present.
Keyword arguments
- initial_params::Union{AbstractVector,Nothing}=nothing: Initial value for the optimization. Optional, unless non-box constraints are specified. If omitted, it is generated by either sampling from the prior distribution or uniformly from the box constraints, if any.
- adtype::AbstractADType=AutoForwardDiff(): The automatic differentiation type to use.
- The keyword arguments lb, ub, cons, lcons, and ucons define constraints for the optimization problem. Please see ModeEstimationConstraints for more details.
- Any extra keyword arguments are passed to Optimization.solve.
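A hedged sketch of a direct call, reusing the toy demo model from the constraints example above. It assumes the OptimizationOptimJL backend is installed, which re-exports Optim.jl solvers such as NelderMead; any other solver recognised by Optimization.jl would work the same way:
using Turing
using Turing.Optimisation: estimate_mode, MLE
using OptimizationOptimJL  # assumed solver backend, re-exports NelderMead

# Spelled out via estimate_mode; maximum_likelihood(model, NelderMead())
# is the equivalent convenience call.
mode_estimate = estimate_mode(model, MLE(), NelderMead())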
Turing.Optimisation.generate_initial_params — Method
generate_initial_params(model::DynamicPPL.Model, initial_params, constraints)
Generate an initial value for the optimization problem.
If initial_params is not nothing, a copy of it is returned. Otherwise initial parameter values are generated either by sampling from the prior (if no constraints are present) or uniformly from the box constraints. If generic constraints are set, an error is thrown.
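The selection logic described above can be sketched as follows; this is a hypothetical, simplified stand-in (the helper pick_initial_params and its arguments are made up for illustration and are not part of Turing):
using Distributions

function pick_initial_params(initial_params, lb, ub, sample_prior)
    if initial_params !== nothing
        return copy(initial_params)       # user-supplied starting point wins
    elseif lb !== nothing && ub !== nothing
        return rand.(Uniform.(lb, ub))    # uniform draw inside the box constraints
    else
        return sample_prior()             # fall back to a draw from the prior
    end
end

pick_initial_params(nothing, [0.0, -1.0], [1.0, 1.0], () -> randn(2))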
Turing.Optimisation.maximum_a_posteriori — Method
maximum_a_posteriori(
    model::DynamicPPL.Model,
    [solver];
    kwargs...
)
Find the maximum a posteriori estimate of a model.
This is a convenience function that calls estimate_mode with MAP() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for more details.
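For example, continuing the toy demo model from the sketches above (both calls are illustrative assumptions, not prescribed usage):
# MAP estimate with the default solver; keyword arguments such as lb, ub,
# adtype, or initial_params are forwarded to estimate_mode.
map_estimate = maximum_a_posteriori(model)
map_estimate_from_zero = maximum_a_posteriori(model; initial_params=[0.0])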
Turing.Optimisation.maximum_likelihood — Method
maximum_likelihood(
    model::DynamicPPL.Model,
    [solver];
    kwargs...
)
Find the maximum likelihood estimate of a model.
This is a convenience function that calls estimate_mode with MLE() as the estimator. Please see the documentation of Turing.Optimisation.estimate_mode for more details.
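And the matching sketch for MLE, again with the toy demo model (the explicit NelderMead solver assumes the OptimizationOptimJL backend mentioned above):
# Default solver (LBFGS):
mle_estimate = maximum_likelihood(model)

# Explicitly chosen solver, passed through to Optimization.solve:
mle_estimate_nm = maximum_likelihood(model, NelderMead())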