using Turing
Turing.setprogress!(false)

[ Info: [Turing]: progress logging is disabled globally
[ Info: [AdvancedVI]: global PROGRESS is set as false
false
This page collects a number of common error messages observed when using Turing, along with suggestions on how to fix them.
If the suggestions here do not resolve your problem, please do feel free to open an issue.
failed to find valid initial parameters in {N} tries. This may indicate an error with the model or AD backend…
This error is seen when a Hamiltonian Monte Carlo sampler is unable to determine a valid set of initial parameters for sampling. Here, ‘valid’ means that the log probability density of the model, as well as its gradient with respect to each parameter, is finite and not NaN.

NaN gradient

One of the most common causes of this error is having a NaN gradient. To find out whether this is happening, you can evaluate the gradient manually. Here is an example with a model that is known to be problematic:
using Turing
using DynamicPPL.TestUtils.AD: run_ad

@model function initial_bad()
    a ~ Normal()
    x ~ truncated(Normal(a), 0, Inf)
end

model = initial_bad()
adtype = AutoForwardDiff()

# Evaluate the log density and its gradient at a random set of parameters
result = run_ad(model, adtype; test=false, benchmark=false)
result.grad_actual
[ Info: Running AD on initial_bad with ADTypes.AutoForwardDiff()
  params : [-1.0773950276180524, -0.3879370633580274]
  actual : (-2.386241737335236, [NaN, NaN])
2-element Vector{Float64}:
 NaN
 NaN
(See the DynamicPPL docs for more details on the run_ad function and its return type.)
In this case, the NaN gradient is caused by the Inf argument to truncated. (See, e.g., this issue on Distributions.jl.) Here, the upper bound of Inf is not needed, so it can be removed:
@model function initial_good()
    a ~ Normal()
    x ~ truncated(Normal(a); lower=0)
end

model = initial_good()
adtype = AutoForwardDiff()
run_ad(model, adtype; test=false, benchmark=false).grad_actual
[ Info: Running AD on initial_good with ADTypes.AutoForwardDiff()
  params : [0.48919039004292003, 0.30782088199189594]
  actual : (-1.6547824865650456, [-0.13265398599487538, -0.18532139378306334])
2-element Vector{Float64}:
 -0.13265398599487538
 -0.18532139378306334
More generally, you could try using a different AD backend; if you don’t know why a model is returning NaN gradients, feel free to open an issue.
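As a rough sketch of what switching backends looks like (this assumes ReverseDiff.jl is available in your environment; any other supported backend works the same way):

using ReverseDiff

# Check the gradient with a different backend...
run_ad(model, AutoReverseDiff(); test=false, benchmark=false).grad_actual

# ...or pass the backend to the sampler directly.
sample(model, NUTS(; adtype=AutoReverseDiff()), 1000)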
-Inf log density

Another cause of this error is a model whose parameters are on very extreme scales. This example is taken from this Turing.jl issue:
@model function initial_bad2()
    x ~ Exponential(100)
    y ~ Uniform(0, x)
end

model = initial_bad2() | (y = 50.0,)
DynamicPPL.Model{typeof(initial_bad2), (), (), (), Tuple{}, Tuple{}, DynamicPPL.ConditionContext{@NamedTuple{y::Float64}, DynamicPPL.DefaultContext}}(initial_bad2, NamedTuple(), NamedTuple(), ConditionContext((y = 50.0,), DynamicPPL.DefaultContext()))
The problem here is that HMC attempts to find initial values for parameters inside the region of [-2, 2], after the parameters have been transformed to unconstrained space. For a distribution of Exponential(100), the appropriate transformation is log(x) (see the variable transformation docs for more info).
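If you want to check which transformation is being used, you can ask Bijectors.jl, the package Turing relies on for these transformations; a quick sketch:

using Bijectors

# The bijector maps the distribution's support to unconstrained space;
# for a lower-bounded distribution such as Exponential it is a log transform.
bijector(Exponential(100))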
Thus, HMC attempts to find initial values of log(x) in the region of [-2, 2], which corresponds to x in the region of [exp(-2), exp(2)] ≈ [0.135, 7.39]. However, all of these values of x will give rise to a zero probability density for y, because the value of y = 50.0 is outside the support of Uniform(0, x). Thus, the log density of the model is -Inf, as can be seen with logjoint:
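# A minimal check (logjoint comes from DynamicPPL); any value of x in
# [exp(-2), exp(2)] yields a log joint density of -Inf:
logjoint(model, (x = exp(-2),))   # -Inf
logjoint(model, (x = exp(2),))    # -Inf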
The most direct way of fixing this is to manually provide a set of initial parameters that are valid. For example, you can obtain a set of initial parameters with rand(Vector, model), and then pass this as the initial_params keyword argument to sample:
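# A sketch of this approach (the sampler settings shown here are illustrative):
initial_params = rand(Vector, model)
chain = sample(model, NUTS(), 1000; initial_params=initial_params)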
┌ Info: Found initial step size
└   ϵ = 0.4
Chains MCMC chain (1000×13×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 3.73 seconds
Compute duration  = 3.73 seconds
parameters        = x
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters       mean       std      mcse   ess_bulk   ess_tail      rhat   ⋯
      Symbol    Float64   Float64   Float64    Float64    Float64   Float64   ⋯

           x   107.8741   75.7366    4.8628   228.1402   407.4773    1.0102   ⋯
                                                               1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%      75.0%      97.5%
      Symbol   Float64   Float64   Float64    Float64    Float64

           x   51.3740   62.5622   80.6869   121.9969   320.4487
More generally, you may also consider reparameterising the model to avoid such issues.
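For instance, one possible rewrite of the model above (shown here only as a sketch, not taken from the linked issue) restricts the support of x so that every initial value is consistent with the observed y, and adds the likelihood term manually using DynamicPPL's @addlogprob! macro:

@model function initial_good2(y)
    # Any draw of x now lies above the observed y, so the initial
    # log density is always finite...
    x ~ truncated(Exponential(100); lower=y)
    # ...and the Uniform(0, x) likelihood for y is added by hand.
    @addlogprob! logpdf(Uniform(0, x), y)
end

sample(initial_good2(50.0), NUTS(), 1000)

Because y is fixed, the truncation only changes the density by a constant factor, so this targets the same posterior as the original conditioned model.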