Using External Samplers

Using External Samplers on Turing Models

Turing provides wrappers for several samplers from external sampling libraries, e.g., HMC samplers from AdvancedHMC. These wrappers allow new users to sample statistical models seamlessly without leaving Turing. However, these wrappers may not always be complete, missing some functionality from the wrapped sampling library. Moreover, users might want to use samplers that are not currently wrapped within Turing.

For these reasons, Turing also makes it easy to run external samplers on Turing models without any modifications or wrapping! Throughout, we will use a 10-dimensional Neal’s funnel as a running example:

# Import libraries.
using Turing, Random, LinearAlgebra

d = 10
@model function funnel()
    θ ~ truncated(Normal(0, 3), -3, 3)
    z ~ MvNormal(zeros(d - 1), exp(θ) * I)
    return x ~ MvNormal(z, I)
end
funnel (generic function with 2 methods)

Now we sample the model to generate some observations, which we can then condition on.

(; x) = rand(funnel() | (θ=0,))
model = funnel() | (; x);

Any sampler that follows the AbstractMCMC API can be used to sample this model. Before discussing how this is done in practice, it is worth giving a high-level description of the process. Imagine that we created an instance of an external sampler that we will call spl, such that typeof(spl)<:AbstractMCMC.AbstractSampler. In order to avoid type ambiguity within Turing, it is currently necessary to declare spl as an external sampler via espl = externalsampler(spl), where externalsampler(s::AbstractMCMC.AbstractSampler) is a Turing function that wraps our external sampler in the appropriate type.
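As a minimal sketch of this wrapping step (the sampler type `MySampler` is hypothetical, used purely for illustration and with no stepping logic defined):

```julia
using Turing, AbstractMCMC

# Hypothetical external sampler type, for illustration only.
struct MySampler <: AbstractMCMC.AbstractSampler end

spl = MySampler()            # an external sampler instance
espl = externalsampler(spl)  # declare it to Turing as an external sampler
# `espl` can now be passed to `sample(model, espl, n_samples)`,
# provided `MySampler` implements the AbstractMCMC stepping interface.
```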

A good place to start showing how this is done in practice is the sampling library AdvancedMH (AdvancedMH’s GitHub), which provides Metropolis-Hastings (MH) methods. Let’s say we want to use a random-walk Metropolis-Hastings sampler without specifying the proposal distribution ourselves. The code below constructs an MH sampler whose random-walk proposal is a multivariate Gaussian with zero mean and unit variance in d dimensions.

# Importing the sampling library
using AdvancedMH
rwmh = AdvancedMH.RWMH(d)
MetropolisHastings{RandomWalkProposal{false, ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}}}(RandomWalkProposal{false, ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}}(ZeroMeanIsoNormal(
dim: 10
μ: Zeros(10)
Σ: [1.0 0.0 … 0.0 0.0; 0.0 1.0 … 0.0 0.0; … ; 0.0 0.0 … 1.0 0.0; 0.0 0.0 … 0.0 1.0]
)
))
setprogress!(false)

Sampling is then as easy as:

chain = sample(model, externalsampler(rwmh), 10_000)
Chains MCMC chain (10000×11×1 Array{Float64, 3}):

Iterations        = 1:1:10000
Number of chains  = 1
Samples per chain = 10000
Wall duration     = 3.88 seconds
Compute duration  = 3.88 seconds
parameters        = θ, z[1], z[2], z[3], z[4], z[5], z[6], z[7], z[8], z[9]
internals         = lp

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

           θ   -0.2887    0.8354    0.1410    33.9767    31.2925    1.1096     ⋯
        z[1]    0.7354    0.7615    0.0967    63.1402    59.8449    1.0294     ⋯
        z[2]    0.2371    0.6567    0.0612   110.2722   134.7479    1.0336     ⋯
        z[3]   -0.2664    0.6655    0.0536   150.5021   174.6634    1.0023     ⋯
        z[4]   -0.2321    0.7102    0.0767    83.8256    98.0170    1.0071     ⋯
        z[5]   -0.7771    0.8501    0.1188    48.0652    98.3760    1.0331     ⋯
        z[6]    0.1507    0.6867    0.0668   110.3661    99.3781    1.0007     ⋯
        z[7]   -0.5141    0.7549    0.0926    68.3697   146.6011    1.0183     ⋯
        z[8]    0.2931    0.7163    0.0743    93.5663    96.4049    1.0174     ⋯
        z[9]   -0.9451    0.6967    0.0856    70.3081   133.6144    1.0232     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           θ   -1.7973   -0.9137   -0.2858    0.1912    1.5516
        z[1]   -0.5751    0.2635    0.6598    1.2855    2.2410
        z[2]   -0.8971   -0.2035    0.2463    0.6060    1.5808
        z[3]   -1.7329   -0.6208   -0.3433    0.0733    1.1250
        z[4]   -1.5733   -0.6983   -0.2249    0.1833    1.0845
        z[5]   -2.5157   -1.3479   -0.7304   -0.1742    0.7107
        z[6]   -1.0701   -0.2733    0.0820    0.5380    1.7838
        z[7]   -2.1361   -1.0350   -0.4394    0.0512    0.7769
        z[8]   -1.1204   -0.1209    0.3531    0.6697    1.6198
        z[9]   -2.3885   -1.3902   -0.8413   -0.5065    0.2321

Going beyond the Turing API

As previously mentioned, the Turing wrappers can often limit the capabilities of the sampling libraries they wrap. AdvancedHMC (AdvancedHMC’s GitHub) is a clear example of this. A common practice when performing HMC is to provide an initial guess for the mass matrix. However, the native HMC sampler within Turing only allows the user to specify the type of the mass matrix, even though AdvancedHMC supports both options. Thankfully, we can use Turing’s support for external samplers to define an HMC sampler with a custom mass matrix in AdvancedHMC and then use it to sample our Turing model.

We will use the library Pathfinder ([Pathfinder’s GitHub](https://github.com/mlcolab/Pathfinder.jl)) to construct our estimate of the mass matrix. Pathfinder is a variational inference algorithm that first finds the maximum a posteriori (MAP) estimate of a target posterior distribution and then uses the trace of the optimization to construct a sequence of multivariate normal approximations to the target distribution. In this process, Pathfinder computes an estimate of the mass matrix, which the user can access.

The code below shows how this can be done in practice.

using AdvancedHMC, Pathfinder
# Running pathfinder
draws = 1_000
result_multi = multipathfinder(model, draws; nruns=8)

# Estimating the metric
inv_metric = result_multi.pathfinder_results[1].fit_distribution.Σ
metric = DenseEuclideanMetric(Matrix(inv_metric))

# Creating an AdvancedHMC NUTS sampler with the custom metric.
n_adapts = 1000 # Number of adaptation steps
tap = 0.9 # Large target acceptance probability to deal with the funnel structure of the posterior
nuts = AdvancedHMC.NUTS(tap; metric=metric)

# Sample
chain = sample(model, externalsampler(nuts), 10_000; n_adapts=n_adapts)
┌ Warning: Pareto shape k = 1 > 1. Corresponding importance sampling estimates are likely to be unstable and are unlikely to converge with additional samples.
└ @ PSIS ~/.julia/packages/PSIS/fU76x/src/core.jl:362
[ Info: Found initial step size 1.6
Chains MCMC chain (10000×23×1 Array{Float64, 3}):

Iterations        = 1:1:10000
Number of chains  = 1
Samples per chain = 10000
Wall duration     = 6.06 seconds
Compute duration  = 6.06 seconds
parameters        = θ, z[1], z[2], z[3], z[4], z[5], z[6], z[7], z[8], z[9]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, is_adapt

Summary Statistics
  parameters      mean       std      mcse     ess_bulk    ess_tail      rhat  ⋯
      Symbol   Float64   Float64   Float64      Float64     Float64   Float64  ⋯

           θ   -0.8311    1.0697    0.0277    1435.7458   1651.0064    0.9999  ⋯
        z[1]    0.5623    0.6752    0.0094    5825.8633   5702.5742    1.0000  ⋯
        z[2]    0.2050    0.5974    0.0065    8893.3829   6102.8531    1.0003  ⋯
        z[3]   -0.2499    0.6120    0.0064    9643.6243   6644.0152    1.0001  ⋯
        z[4]   -0.1176    0.5804    0.0057   10566.6969   6478.7247    1.0001  ⋯
        z[5]   -0.6702    0.7091    0.0110    4566.6082   6232.2213    1.0000  ⋯
        z[6]    0.0876    0.5808    0.0052   13096.2969   5506.1012    1.0005  ⋯
        z[7]   -0.3387    0.6212    0.0074    7946.9319   5016.4545    0.9999  ⋯
        z[8]    0.2368    0.5972    0.0061   10412.5338   5753.5268    1.0000  ⋯
        z[9]   -0.7349    0.7460    0.0118    4351.4760   5713.9696    1.0003  ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           θ   -2.7915   -1.6383   -0.7824   -0.0352    1.1371
        z[1]   -0.5386    0.1032    0.4489    0.9526    2.1393
        z[2]   -0.9181   -0.1669    0.1570    0.5312    1.5555
        z[3]   -1.5859   -0.5961   -0.2030    0.1324    0.8828
        z[4]   -1.3621   -0.4466   -0.0954    0.2295    1.0294
        z[5]   -2.2922   -1.0877   -0.5498   -0.1605    0.4327
        z[6]   -1.0703   -0.2586    0.0648    0.4246    1.3186
        z[7]   -1.7127   -0.6951   -0.2670    0.0621    0.7488
        z[8]   -0.8600   -0.1413    0.1919    0.5656    1.5776
        z[9]   -2.5020   -1.1603   -0.5955   -0.1964    0.3876

Using new inference methods

So far, we have used Turing’s support for external samplers to go beyond the capabilities of the wrappers. Now we want to use this support to employ a sampler that is not yet available within Turing’s ecosystem. We will showcase this with the recently developed Micro-Canonical Hamiltonian Monte Carlo (MCHMC) sampler. MCHMC[3,4] ([MCHMC’s GitHub](https://github.com/JaimeRZP/MicroCanonicalHMC.jl)) is an HMC sampler that uses a single Hamiltonian energy level to explore the whole parameter space. This is achieved by simulating the dynamics of a microcanonical Hamiltonian with an additional noise term to ensure ergodicity.

Using this as well as other inference methods outside the Turing ecosystem is as simple as executing the code shown below:

using MicroCanonicalHMC
# Create MCHMC sampler
n_adapts = 1_000 # adaptation steps
tev = 0.01 # target energy variance
mchmc = MCHMC(n_adapts, tev; adaptive=true)

# Sample
chain = sample(model, externalsampler(mchmc), 10_000)
[ Info: Tuning eps ⏳
[ Info: Tuning L ⏳
[ Info: Tuning sigma ⏳
Tuning: 100%|███████████████████████████████████████████| Time: 0:00:02
  ϵ:     0.8526942467997067
  L:     362.6202366439483
  dE/d:  0.020696978165621615
Chains MCMC chain (10000×11×1 Array{Float64, 3}):

Iterations        = 1:1:10000
Number of chains  = 1
Samples per chain = 10000
Wall duration     = 4.93 seconds
Compute duration  = 4.93 seconds
parameters        = θ, z[1], z[2], z[3], z[4], z[5], z[6], z[7], z[8], z[9]
internals         = lp

Summary Statistics
  parameters      mean       std      mcse    ess_bulk    ess_tail      rhat   ⋯
      Symbol   Float64   Float64   Float64     Float64     Float64   Float64   ⋯

           θ   -0.8986    1.1821    0.0392    878.8499   1166.2487    1.0020   ⋯
        z[1]    0.5761    0.6827    0.0203   1326.0004   1298.9931    1.0004   ⋯
        z[2]    0.1923    0.5767    0.0159   1379.5750   1402.3766    1.0003   ⋯
        z[3]   -0.2573    0.5456    0.0156   1334.4804   1315.0691    1.0201   ⋯
        z[4]   -0.1306    0.5553    0.0158   1312.1531   1346.7055    1.0028   ⋯
        z[5]   -0.6518    0.7063    0.0207   1339.7751   1429.2184    1.0003   ⋯
        z[6]    0.0973    0.5956    0.0162   1431.9578   1427.6850    1.0110   ⋯
        z[7]   -0.3181    0.6022    0.0151   1722.5672   1746.6297    1.0004   ⋯
        z[8]    0.2309    0.5565    0.0164   1234.4021   1302.1343    1.0021   ⋯
        z[9]   -0.7105    0.7566    0.0205   1612.4995   1723.7236    1.0002   ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

           θ   -2.8718   -1.8591   -0.9110    0.0057    1.3481
        z[1]   -0.4258    0.1033    0.4460    0.9266    2.2718
        z[2]   -0.8739   -0.1819    0.1540    0.5287    1.4882
        z[3]   -1.5226   -0.5513   -0.1931    0.0953    0.6926
        z[4]   -1.3484   -0.4328   -0.0940    0.2042    0.9273
        z[5]   -2.3325   -1.0502   -0.5233   -0.1471    0.4261
        z[6]   -1.0341   -0.2527    0.0657    0.4089    1.4380
        z[7]   -1.7045   -0.6529   -0.2547    0.0690    0.7422
        z[8]   -0.8018   -0.1163    0.1792    0.5334    1.5356
        z[9]   -2.5038   -1.1244   -0.5540   -0.1752    0.4070

The only requirement for a sampler to work with externalsampler is that it must implement the AbstractMCMC.jl interface [INSERT LINK] for a model of type AbstractMCMC.LogDensityModel [INSERT LINK].

As previously stated, external sampling libraries must follow the AbstractMCMC API in order to be used within Turing. In this section, we will briefly describe what this entails. First and foremost, the sampler must be a subtype of AbstractMCMC.AbstractSampler. Second, the stepping function of the MCMC algorithm must be defined by extending AbstractMCMC.step, following the structure below:

# First step
function AbstractMCMC.step(
    rng::Random.AbstractRNG,
    model::AbstractMCMC.LogDensityModel,
    spl::T;
    kwargs...,
) where {T<:AbstractMCMC.AbstractSampler}
    [...]
    return sample, state
end

# N+1 step
function AbstractMCMC.step(
    rng::Random.AbstractRNG,
    model::AbstractMCMC.LogDensityModel,
    sampler::T,
    state;
    kwargs...,
) where {T<:AbstractMCMC.AbstractSampler}
    [...]
    return sample, state
end

There are several characteristics to note in these functions:

  • There must be two step functions:

    • A function that performs the first step and initializes the sampler.
    • A function that performs the following steps and takes an extra input, state, which carries the initialization information.
  • The functions must follow the displayed signatures.

  • The functions must return two objects: a sample, which is what is saved to the MCMC chain, and a state, which carries the sampler’s current information over to the next step.

The last requirement is that the returned sample must have a field θ containing the values of the model’s parameters at that step. This allows Turing to seamlessly extract the parameter values when bundling the chains. Note that if the external sampler produces samples that Turing cannot parse, the bundling of the samples will differ or fail.
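As a minimal sketch of such a sample type (the name `MyTransition` and the extra `lp` field are hypothetical, for illustration only):

```julia
# Hypothetical sample/transition type that Turing can parse when
# bundling chains: the `θ` field holds the model's parameter values.
struct MyTransition{V<:AbstractVector}
    θ::V          # parameter values of the model at this step
    lp::Float64   # log density at this step (extra information)
end
```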

For practical examples of how to adapt a sampling library to the AbstractMCMC interface, readers can consult the following libraries:
