Troubleshooting

This page collects common error messages encountered when using Turing, along with suggestions on how to fix them.

If the suggestions here do not resolve your problem, please do feel free to open an issue.

using Turing
Turing.setprogress!(false)
[ Info: [Turing]: progress logging is disabled globally
false

Initial parameters

failed to find valid initial parameters in {N} tries. This may indicate an error with the model or AD backend…

This error is seen when a Hamiltonian Monte Carlo sampler is unable to determine a valid set of initial parameters for sampling. Here, ‘valid’ means that both the log probability density of the model and its gradient with respect to each parameter are finite and not NaN.
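
A quick way to tell which of the two is failing is to evaluate the log density at a chosen set of parameter values with logjoint: if that is already -Inf or NaN, the problem lies with the density itself; otherwise the gradient is the likely culprit. The model and values below are purely illustrative:

@model function check_density()
    x ~ Normal()
end
# Evaluate the log joint density at x = 1.0; a finite value (here ≈ -1.419)
# means the density itself is fine at this point.
logjoint(check_density(), (x = 1.0,))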

NaN gradient

One of the most common causes of this error is having a NaN gradient. To find out whether this is happening, you can evaluate the gradient manually. Here is an example with a model that is known to be problematic:

using Turing
using DynamicPPL.TestUtils.AD: run_ad

@model function initial_bad()
    a ~ Normal()
    x ~ truncated(Normal(a), 0, Inf)
end

model = initial_bad()
adtype = AutoForwardDiff()
result = run_ad(model, adtype; test=false, benchmark=false)
result.grad_actual
[ Info: Running AD on initial_bad with ADTypes.AutoForwardDiff()
       params : [1.4763103645708402, -0.6422994190120929]
       actual : (-3.9488944754118886, [NaN, NaN])
2-element Vector{Float64}:
 NaN
 NaN

(See the DynamicPPL docs for more details on the run_ad function and its return type.)

In this case, the NaN gradient is caused by the Inf argument to truncated. (See, e.g., this issue on Distributions.jl.) Since the upper bound of Inf is not needed here, it can simply be removed:

@model function initial_good()
    a ~ Normal()
    x ~ truncated(Normal(a); lower=0)
end

model = initial_good()
adtype = AutoForwardDiff()
run_ad(model, adtype; test=false, benchmark=false).grad_actual
[ Info: Running AD on initial_good with ADTypes.AutoForwardDiff()
       params : [-0.9287724087678773, -3.7284215997475636]
       actual : (-4.717110234463684, [0.4131851829805354, 0.9771034391311642])
2-element Vector{Float64}:
 0.4131851829805354
 0.9771034391311642

More generally, you could try using a different AD backend; if you don’t know why a model is returning NaN gradients, feel free to open an issue.
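
As an illustrative, hedged sketch, switching the backend only requires changing the adtype argument passed to run_ad (or to the sampler); the corresponding backend package must be loaded first:

using ReverseDiff  # required for AutoReverseDiff() to be usable
run_ad(model, AutoReverseDiff(); test=false, benchmark=false).grad_actual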

-Inf log density

Another cause of this error is a model with parameters on very extreme scales. This example is taken from this Turing.jl issue:

@model function initial_bad2()
    x ~ Exponential(100)
    y ~ Uniform(0, x)
end
model = initial_bad2() | (y = 50.0,)
DynamicPPL.Model{typeof(initial_bad2), (), (), (), Tuple{}, Tuple{}, DynamicPPL.ConditionContext{@NamedTuple{y::Float64}, DynamicPPL.DefaultContext}}(initial_bad2, NamedTuple(), NamedTuple(), ConditionContext((y = 50.0,), DynamicPPL.DefaultContext()))

The problem here is that HMC attempts to find initial values for parameters inside the region of [-2, 2], after the parameters have been transformed to unconstrained space. For a distribution of Exponential(100), the appropriate transformation is log(x) (see the variable transformation docs for more info).

Thus, HMC attempts to find initial values of log(x) in the region of [-2, 2], which corresponds to x in the region of [exp(-2), exp(2)] ≈ [0.135, 7.39]. However, all of these values of x give rise to a zero probability density for y, because the observed value y = 50.0 lies outside the support of Uniform(0, x). Consequently, the log density of the model is -Inf, as can be seen with logjoint:

logjoint(model, (x = exp(-2),))
-Inf
logjoint(model, (x = exp(2),))
-Inf
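
For comparison (a hedged check that is not part of the original issue), any value of x larger than the observed y = 50.0 gives a finite log density:

logjoint(model, (x = 100.0,))   # ≈ -10.21, i.e. finite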

The most direct way of fixing this is to manually provide a set of initial parameters that are valid. For example, you can obtain a set of initial parameters with rand(Vector, model), and then pass this as the initial_params keyword argument to sample:

sample(model, NUTS(), 1000; initial_params=rand(Vector, model))
┌ Info: Found initial step size
└   ϵ = 3.2
Chains MCMC chain (1000×13×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 3.29 seconds
Compute duration  = 3.29 seconds
parameters        = x
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters       mean       std      mcse   ess_bulk   ess_tail      rhat      ⋯
      Symbol    Float64   Float64   Float64    Float64    Float64   Float64    ⋯

           x   108.1452   76.5060    6.3492   133.8290   187.9127    1.0082    ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%      75.0%      97.5%
      Symbol   Float64   Float64   Float64    Float64    Float64

           x   50.6777   63.0348   82.1548   121.5585   333.1221

More generally, you may also consider reparameterising the model to avoid such issues.
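
As a hedged sketch (not taken from the original issue), one option here is to exploit the fact that Exponential(100) is a scale family: sampling a unit-rate exponential and scaling it by 100 leaves the prior on x unchanged, but shifts the default initialisation range of the unconstrained variable so that it overlaps the region x > 50 required by the observation:

@model function initial_bad2_reparam()
    # x_unit has scale 1, so 100 * x_unit has the same distribution as
    # Exponential(100); the default initial values of x_unit now map to
    # x roughly in [13.5, 739].
    x_unit ~ Exponential(1)
    x = 100 * x_unit
    y ~ Uniform(0, x)
end
reparam_model = initial_bad2_reparam() | (y = 50.0,)

Note that x is an ordinary deterministic variable in this version of the model, so it will not appear in the resulting chain unless you track it explicitly.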

ForwardDiff type parameters

MethodError: no method matching Float64(::ForwardDiff.Dual{… The type Float64 exists, but no method is defined for this combination of argument types when trying to construct it.

A common error with ForwardDiff looks like this:

@model function forwarddiff_fail()
    x = Float64[0.0, 1.0]
    a ~ Normal()
    @show typeof(a)
    x[1] = a
    b ~ MvNormal(x, I)
end
sample(forwarddiff_fail(), NUTS(; adtype=AutoForwardDiff()), 10)
typeof(a) = Float64
typeof(a) = Float64
typeof(a) = ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}
MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3})
The type `Float64` exists, but no method is defined for this combination of argument types when trying to construct it.

Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat
   @ Base rounding.jl:265
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:900
  Float64(::Float32)
   @ Base float.jl:341
  ...

Stacktrace:
  [1] convert(::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3})
    @ Base ./number.jl:7
  [2] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}, i::Int64)
    @ Base ./array.jl:987
  [3] forwarddiff_fail(__model__::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}}, ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}, __context__::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG})
    @ Main.Notebook ~/work/docs/docs/usage/troubleshooting/index.qmd:118
  [4] _evaluate!!(model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}}, ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}, context::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/I9lST/src/model.jl:913
  [5] evaluate_threadunsafe!!(model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}}, ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}, context::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/I9lST/src/model.jl:886
  [6] evaluate!!(model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}}, ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}, context::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/I9lST/src/model.jl:834
  [7] logdensity_at(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, context::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/I9lST/src/logdensityfunction.jl:183
  [8] (::DifferentiationInterface.FixTail{typeof(DynamicPPL.logdensity_at), Tuple{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}}})(args::Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}})
    @ DifferentiationInterface ~/.julia/packages/DifferentiationInterface/zJHX8/src/utils/context.jl:169
  [9] vector_mode_dual_eval!(f::DifferentiationInterface.FixTail{typeof(DynamicPPL.logdensity_at), Tuple{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}}}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, x::Vector{Float64})
    @ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/apiutils.jl:24
 [10] vector_mode_gradient!(result::DiffResults.MutableDiffResult{1, Float64, Tuple{Vector{Float64}}}, f::DifferentiationInterface.FixTail{typeof(DynamicPPL.logdensity_at), Tuple{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}}}, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}})
    @ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/gradient.jl:98
 [11] gradient!(result::DiffResults.MutableDiffResult{1, Float64, Tuple{Vector{Float64}}}, f::DifferentiationInterface.FixTail{typeof(DynamicPPL.logdensity_at), Tuple{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}}}, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, ::Val{false})
    @ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/gradient.jl:39
 [12] value_and_gradient(::typeof(DynamicPPL.logdensity_at), ::DifferentiationInterfaceForwardDiffExt.ForwardDiffGradientPrep{Nothing, ForwardDiff.GradientConfig{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3, Vector{ForwardDiff.Dual{ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}, Float64, 3}}}, Tuple{Nothing, Nothing, Nothing}}, ::AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}, ::Vector{Float64}, ::DifferentiationInterface.Constant{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}}, ::DifferentiationInterface.Constant{DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}}, ::DifferentiationInterface.Constant{DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}})
    @ DifferentiationInterfaceForwardDiffExt ~/.julia/packages/DifferentiationInterface/zJHX8/ext/DifferentiationInterfaceForwardDiffExt/onearg.jl:417
 [13] logdensity_and_gradient(f::LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}, x::Vector{Float64})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/I9lST/src/logdensityfunction.jl:234
 [14] Fix1
    @ ./operators.jl:1127 [inlined]
 [15] ∂H∂θ(h::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, AdvancedHMC.GaussianKinetic, Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}, Base.Fix1{typeof(LogDensityProblems.logdensity_and_gradient), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}}, θ::Vector{Float64})
    @ AdvancedHMC ~/.julia/packages/AdvancedHMC/PZso2/src/hamiltonian.jl:46
 [16] phasepoint(h::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, AdvancedHMC.GaussianKinetic, Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}, Base.Fix1{typeof(LogDensityProblems.logdensity_and_gradient), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}}, θ::Vector{Float64}, r::Vector{Float64})
    @ AdvancedHMC ~/.julia/packages/AdvancedHMC/PZso2/src/hamiltonian.jl:103
 [17] phasepoint
    @ ~/.julia/packages/AdvancedHMC/PZso2/src/hamiltonian.jl:185 [inlined]
 [18] find_initial_params(rng::Random.TaskLocalRNG, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, hamiltonian::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, AdvancedHMC.GaussianKinetic, Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}, Base.Fix1{typeof(LogDensityProblems.logdensity_and_gradient), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}}; max_attempts::Int64)
    @ Turing.Inference ~/.julia/packages/Turing/1Egt9/src/mcmc/hmc.jl:156
 [19] find_initial_params(rng::Random.TaskLocalRNG, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, varinfo::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, hamiltonian::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, AdvancedHMC.GaussianKinetic, Base.Fix1{typeof(LogDensityProblems.logdensity), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}, Base.Fix1{typeof(LogDensityProblems.logdensity_and_gradient), LogDensityFunction{DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}, DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random.TaskLocalRNG}, AutoForwardDiff{3, ForwardDiff.Tag{DynamicPPL.DynamicPPLTag, Float64}}}}})
    @ Turing.Inference ~/.julia/packages/Turing/1Egt9/src/mcmc/hmc.jl:145
 [20] initialstep(rng::Random.TaskLocalRNG, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, vi_original::DynamicPPL.VarInfo{@NamedTuple{a::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:a, typeof(identity)}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:a, typeof(identity)}}, Vector{Float64}}, b::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:b, typeof(identity)}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:b, typeof(identity)}}, Vector{Float64}}}, Float64}; initial_params::Nothing, nadapts::Int64, kwargs::@Kwargs{})
    @ Turing.Inference ~/.julia/packages/Turing/1Egt9/src/mcmc/hmc.jl:210
 [21] step(rng::Random.TaskLocalRNG, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}; initial_params::Nothing, kwargs::@Kwargs{nadapts::Int64})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/I9lST/src/sampler.jl:125
 [22] step
    @ ~/.julia/packages/DynamicPPL/I9lST/src/sampler.jl:108 [inlined]
 [23] macro expansion
    @ ~/.julia/packages/AbstractMCMC/kwj9g/src/sample.jl:161 [inlined]
 [24] macro expansion
    @ ~/.julia/packages/AbstractMCMC/kwj9g/src/logging.jl:16 [inlined]
 [25] mcmcsample(rng::Random.TaskLocalRNG, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, num_warmup::Int64, discard_initial::Int64, thinning::Int64, chain_type::Type, initial_state::Nothing, kwargs::@Kwargs{nadapts::Int64})
    @ AbstractMCMC ~/.julia/packages/AbstractMCMC/kwj9g/src/sample.jl:144
 [26] sample(rng::Random.TaskLocalRNG, model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}}, N::Int64; chain_type::Type, resume_from::Nothing, initial_state::Nothing, progress::Bool, nadapts::Int64, discard_adapt::Bool, discard_initial::Int64, kwargs::@Kwargs{})
    @ Turing.Inference ~/.julia/packages/Turing/1Egt9/src/mcmc/hmc.jl:117
 [27] sample
    @ ~/.julia/packages/Turing/1Egt9/src/mcmc/hmc.jl:86 [inlined]
 [28] #sample#101
    @ ~/.julia/packages/Turing/1Egt9/src/mcmc/abstractmcmc.jl:29 [inlined]
 [29] sample
    @ ~/.julia/packages/Turing/1Egt9/src/mcmc/abstractmcmc.jl:20 [inlined]
 [30] #sample#100
    @ ~/.julia/packages/Turing/1Egt9/src/mcmc/abstractmcmc.jl:17 [inlined]
 [31] sample(model::DynamicPPL.Model{typeof(forwarddiff_fail), (), (), (), Tuple{}, Tuple{}, DynamicPPL.DefaultContext}, alg::NUTS{AutoForwardDiff{nothing, Nothing}, AdvancedHMC.DiagEuclideanMetric}, N::Int64)
    @ Turing.Inference ~/.julia/packages/Turing/1Egt9/src/mcmc/abstractmcmc.jl:14
 [32] top-level scope
    @ ~/work/docs/docs/usage/troubleshooting/index.qmd:121

The problem here is the line x[1] = a. When the log probability density of the model is calculated, a is sampled from a normal distribution and is thus a Float64; however, when ForwardDiff calculates the gradient of the log density, a is a ForwardDiff.Dual object. Since x is always a Vector{Float64}, the call x[1] = a then attempts to insert a Dual object into a Vector{Float64}, which fails.

Note

In more depth: the basic premise of ForwardDiff is that functions must accept arguments of type Real rather than just Float64, since Dual is a subtype of Real. Here, the line x[1] = a is equivalent to setindex!(x, a, 1), and although the method setindex!(::Vector{Float64}, ::Real, ...) does exist, it attempts to convert the Real (in this case, a Dual) into a Float64, which is where it fails.
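
The same failure can be reproduced in a few lines outside of Turing (the values below are purely illustrative):

using ForwardDiff
v = Float64[0.0, 1.0]
d = ForwardDiff.Dual(1.0, 1.0)
# setindex! tries convert(Float64, d), which throws:
# MethodError: no method matching Float64(::ForwardDiff.Dual{…})
v[1] = d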

There are two ways around this.

Firstly, you could broaden the type of the container:

@model function forwarddiff_working1()
    x = Real[0.0, 1.0]
    a ~ Normal()
    x[1] = a
    b ~ MvNormal(x, I)
end
sample(forwarddiff_working1(), NUTS(; adtype=AutoForwardDiff()), 10)
┌ Info: Found initial step size
└   ϵ = 1.6
Chains MCMC chain (10×15×1 Array{Float64, 3}):

Iterations        = 6:1:15
Number of chains  = 1
Samples per chain = 10
Wall duration     = 3.12 seconds
Compute duration  = 3.12 seconds
parameters        = a, b[1], b[2]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

           a    0.1298    1.8899    0.9410     4.3154     4.0323    1.9676     ⋯
        b[1]    0.3780    3.0588    1.4806     7.2823     4.0323    2.5442     ⋯
        b[2]    0.2425    0.6194    0.1959     7.2386    10.0000    1.1092     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           a   -2.0009   -2.0009    0.9631    1.5633    2.3625
        b[1]   -3.1599   -3.1599    2.4162    2.6011    3.2661
        b[2]   -0.9414    0.3126    0.3211    0.6660    0.8927

This is generally unfavourable because the Vector{Real} type contains an abstract type parameter. As a result, memory allocation is less efficient (because the compiler does not know the size of each vector’s elements). Furthermore, the compiler cannot infer the type of x[1], which can lead to type stability issues (to see this in action, run x = Real[0.0, 1.0]; @code_warntype x[1] in the Julia REPL).

A better solution is to pass a type as a parameter to the model:

@model function forwarddiff_working2(::Type{T}=Float64) where T
    x = T[0.0, 1.0]
    a ~ Normal()
    x[1] = a
    b ~ MvNormal(x, I)
end
sample(forwarddiff_working2(), NUTS(; adtype=AutoForwardDiff()), 10)
┌ Info: Found initial step size
└   ϵ = 1.6
Chains MCMC chain (10×15×1 Array{Float64, 3}):

Iterations        = 6:1:15
Number of chains  = 1
Samples per chain = 10
Wall duration     = 1.46 seconds
Compute duration  = 1.46 seconds
parameters        = a, b[1], b[2]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

           a    0.0714    0.9628    0.4384     3.9619    10.0000    2.1077     ⋯
        b[1]   -0.8456    0.8057    0.2548    10.0000    10.0000    0.9388     ⋯
        b[2]    1.0669    1.1668    0.3690    10.0000     5.1546    1.1440     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           a   -1.7448   -0.4779    0.5725    0.6403    1.0746
        b[1]   -1.9625   -1.6054   -0.6651   -0.4702    0.2196
        b[2]   -0.7682    0.5703    1.2544    1.7450    2.5835
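
Roughly speaking (a hedged explanation rather than a full account of DynamicPPL's internals), this works because DynamicPPL fills in the model's type parameter with the element type used for the current evaluation: T is Float64 during ordinary evaluation, but a ForwardDiff.Dual type during gradient calls, so x can always hold a without any conversion. You can make this visible with an illustrative @show:

@model function forwarddiff_show_type(::Type{T}=Float64) where T
    @show T             # Float64 during ordinary evaluation, a Dual type during AD
    x = T[0.0, 1.0]
    a ~ Normal()
    x[1] = a
    b ~ MvNormal(x, I)
end
sample(forwarddiff_show_type(), NUTS(; adtype=AutoForwardDiff()), 10);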