API: Turing.Optimisation

SciMLBase.OptimizationProblemMethod
OptimizationProblem(log_density::OptimLogDensity, initial_params::AbstractVector, adtype, constraints)

Create an OptimizationProblem for the objective function defined by log_density.

Note that the adtype parameter here overrides any adtype parameter the OptimLogDensity was constructed with.

source
Turing.Optimisation.ModeEstimationConstraintsType
ModeEstimationConstraints

A struct that holds constraints for mode estimation problems.

The fields are the same as the constraints supported by Optimization.jl: lb and ub specify lower and upper bounds for box constraints. cons is a function that takes the parameters of the model and returns a list of derived quantities, which are then bounded below and above by lcons and ucons respectively. We refer to these as generic constraints. Please see the documentation of Optimization.jl for more details.

Any of the fields can be nothing, disabling the corresponding constraints.
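As an illustration, these constraints map onto keyword arguments of estimate_mode. A minimal sketch, where the model and bounds are hypothetical and the generic constraint follows Optimization.jl's in-place cons convention (an assumption here):

```julia
using Turing
using Turing.Optimisation: MAP, estimate_mode

@model function demo()
    a ~ Normal()
    b ~ Normal()
end

# Box constraints: lb and ub bound the parameters directly.
r1 = estimate_mode(demo(), MAP(); lb=[-1.0, -1.0], ub=[1.0, 1.0])

# Generic constraints: cons computes a derived quantity, which lcons/ucons
# bound (here a^2 + b^2 <= 1). An initial value is then required.
cons(res, x, _) = (res .= [x[1]^2 + x[2]^2])
r2 = estimate_mode(
    demo(), MAP();
    cons=cons, lcons=[0.0], ucons=[1.0], initial_params=[0.1, 0.1],
)
```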

source
Turing.Optimisation.ModeEstimatorType
ModeEstimator

An abstract type to mark whether mode estimation is to be done with maximum a posteriori (MAP) or maximum likelihood estimation (MLE).

source
Turing.Optimisation.ModeResultType
ModeResult{
    V<:NamedArrays.NamedArray,
    O<:Any,
    M<:OptimLogDensity,
    P<:AbstractDict{<:VarName,<:Any},
    E<:ModeEstimator,
}

A wrapper struct to store various results from a MAP or MLE estimation.

Fields

  • values::NamedArrays.NamedArray: A vector with the resulting point estimates.

  • optim_result::Any: The stored optimiser results.

  • lp::Float64: The final log joint or log likelihood, depending on whether MAP or MLE was run.

  • f::Turing.Optimisation.OptimLogDensity: The evaluation function used to calculate the output.

  • params::AbstractDict{<:AbstractPPL.VarName}: Dictionary of parameter values.

  • linked::Bool: Whether the optimization was done in a transformed space.

  • estimator::Turing.Optimisation.ModeEstimator: The type of mode estimation (MAP or MLE).

source
Turing.Optimisation.ModeResultMethod
ModeResult(
    log_density::OptimLogDensity,
    solution::SciMLBase.OptimizationSolution,
    linked::Bool,
    estimator::ModeEstimator,
)

Create a ModeResult for a given log_density objective and a solution given by solve. The linked argument indicates whether the optimization was done in a transformed space.

Optimization.solve returns its own result type. This function converts that into the richer format of ModeResult. It also takes care of transforming the results back to the original parameter space in case the optimization was done in a transformed space.

source
Turing.Optimisation.OptimLogDensityType
OptimLogDensity{L<:DynamicPPL.LogDensityFunction}

A struct that represents a log-density function, which can be used with Optimization.jl. This is a thin wrapper around DynamicPPL.LogDensityFunction: the main difference is that the log-density is negated (because Optimization.jl performs minimisation, and we usually want to maximise the log-density).

An OptimLogDensity does not, in itself, obey the LogDensityProblems.jl interface. Thus, if you want to calculate the log density of its contents at the point z, you should manually call LogDensityProblems.logdensity(f.ldf, z), instead of LogDensityProblems.logdensity(f, z).

However, because Optimization.jl requires the objective function to be callable, you can also call f(z) directly to get the negative log density at z.
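For instance, assuming f is an OptimLogDensity and z a parameter vector of matching length (both hypothetical here), the two evaluation routes relate as follows:

```julia
using LogDensityProblems

neg_lp = f(z)                                  # negative log density, via the callable
lp = LogDensityProblems.logdensity(f.ldf, z)   # log density of the wrapped LogDensityFunction
# neg_lp ≈ -lp
```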

source
Turing.Optimisation.OptimLogDensityMethod
(f::OptimLogDensity)(z)
(f::OptimLogDensity)(z, _)

Evaluate the negative log probability density at the array z. Which kind of probability density is evaluated depends on the getlogdensity function used to construct the underlying LogDensityFunction (e.g., DynamicPPL.getlogjoint for MAP estimation, or DynamicPPL.getloglikelihood for MLE).

Any second argument is ignored. The two-argument method only exists to match the interface required by Optimization.jl.

source
Base.getMethod
Base.get(m::ModeResult, var_symbol::Symbol)
Base.get(m::ModeResult, var_symbols::AbstractVector{Symbol})

Return the values of all the variables with the given symbol(s) in the mode result m. The return value is a NamedTuple keyed by the symbol(s). The second argument should be either a Symbol or a vector of Symbols.
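A sketch, assuming result is a ModeResult from a model with parameters μ and σ (hypothetical names):

```julia
get(result, :μ)        # NamedTuple with key :μ
get(result, [:μ, :σ])  # NamedTuple with keys :μ and :σ
```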

source
StatsAPI.coeftableMethod
StatsBase.coeftable(m::ModeResult; level::Real=0.95, numerrors_warnonly::Bool=true)

Return a table with coefficients and related statistics of the model. level determines the level for confidence intervals (by default, 95%).

If the numerrors_warnonly argument is true (the default), numerical errors encountered during the computation of the standard errors will be caught and reported in an extra "Error notes" column.
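A sketch, assuming m is a ModeResult obtained from a mode estimation run:

```julia
using StatsBase: coeftable

coeftable(m)              # coefficients with 95% confidence intervals
coeftable(m; level=0.9)   # 90% confidence intervals instead
```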

source
Turing.Optimisation.estimate_modeFunction
estimate_mode(
    model::DynamicPPL.Model,
    estimator::ModeEstimator,
    [solver];
    kwargs...
)

Find the mode of the probability distribution of a model.

Under the hood this function calls Optimization.solve.

Arguments

  • model::DynamicPPL.Model: The model for which to estimate the mode.
  • estimator::ModeEstimator: Can be either MLE() for maximum likelihood estimation or MAP() for maximum a posteriori estimation.
  • solver=nothing: The optimization algorithm to use. Optional. Can be any solver recognised by Optimization.jl. If omitted, a default solver is used: LBFGS, or IPNewton if non-box constraints are present.

Keyword arguments

  • check_model::Bool=true: If true, the model is checked for errors before optimisation begins.
  • initial_params::Union{AbstractVector,Nothing}=nothing: Initial value for the optimization. Optional, unless non-box constraints are specified. If omitted, it is generated by either sampling from the prior distribution or uniformly from the box constraints, if any.
  • adtype::AbstractADType=AutoForwardDiff(): The automatic differentiation type to use.
  • Keyword arguments lb, ub, cons, lcons, and ucons define constraints for the optimization problem. Please see ModeEstimationConstraints for more details.
  • Any extra keyword arguments are passed to Optimization.solve.
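A minimal usage sketch; the model and data are hypothetical, and the assumption that the constraint vectors follow the model's parameter order (σ, then μ) should be checked against your model:

```julia
using Turing
using Turing.Optimisation: MAP, MLE, estimate_mode

@model function gdemo(x)
    σ ~ truncated(Normal(0, 1); lower=0)
    μ ~ Normal(0, 1)
    x .~ Normal(μ, σ)
end

model = gdemo(randn(100))

# MAP with the default solver (LBFGS, since there are no constraints).
map_est = estimate_mode(model, MAP())

# MLE with box constraints on (σ, μ).
mle_est = estimate_mode(model, MLE(); lb=[0.0, -5.0], ub=[5.0, 5.0])
```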
source
Turing.Optimisation.generate_initial_paramsMethod
generate_initial_params(model::DynamicPPL.Model, initial_params, constraints)

Generate an initial value for the optimization problem.

If initial_params is not nothing, a copy of it is returned. Otherwise initial parameter values are generated either by sampling from the prior (if no constraints are present) or uniformly from the box constraints. If generic constraints are set but no initial value was provided, an error is thrown, since a valid starting point cannot be generated automatically in that case.

source