Diagnostics

MCMCChains.discretediagMethod
discretediag(chains::Chains{<:Real}; sections, frac, method, nsim)

Discrete diagnostic where method can be one of [:weiss, :hangartner, :DARBOOT, :MCBOOT, :billingsley, :billingsleyBOOT].
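
A minimal usage sketch on synthetic discrete-valued chains (the data and parameter names are made up for illustration):

using MCMCChains

vals = rand(1:3, 500, 2, 4)     # 500 draws, 2 parameters, 4 chains
chn = Chains(vals, [:a, :b])

discretediag(chn; method=:weiss)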

source
MCMCChains.weissMethod
weiss(X::AbstractMatrix)

Assess the convergence of the MCMC chains with the Weiss procedure.

It computes $\frac{X^2}{c}$ and evaluates a p-value from the $\chi^2$ distribution with $(|R| - 1)(s - 1)$ degrees of freedom.
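
A sketch of calling weiss directly, assuming each column of X holds one chain of a discrete-valued parameter:

using MCMCChains: weiss

X = rand(1:3, 1000, 4)   # 1000 draws from each of 4 chains
weiss(X)                 # test statistic and p-value described above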

source
MCMCChains.gewekediagMethod
gewekediag(x::Vector{<:Real}; first, last, etype)
gewekediag(chains::Chains; sections, first, last, etype, kwargs...)

Geweke diagnostic. The diagnostic compares the sample means of the first and last segments of a chain using a z-score; if the chain has converged to stationarity, the two means are equal and the z-scores are asymptotically standard normal.
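
A sketch on a single chain stored as a plain vector; first = 0.1 and last = 0.5 are the conventional Geweke window fractions:

using MCMCChains: gewekediag

x = randn(1000)                     # draws from one chain
gewekediag(x; first=0.1, last=0.5)  # z-score comparing the two windows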

source
MCMCChains.heideldiagMethod
heideldiag(x::Vector{<:Real}; alpha, eps, etype, start, args...)
heideldiag(chains::Chains; sections, alpha, eps, etype, args...)

Heidelberger and Welch diagnostic. The diagnostic combines a stationarity test of the chain with a halfwidth test that checks whether the confidence interval for the mean (at level alpha) is narrow enough relative to the estimated mean (tolerance eps).
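
A sketch on a single chain, using the conventional values for the test level and halfwidth tolerance:

using MCMCChains: heideldiag

x = randn(2000)
heideldiag(x; alpha=0.05, eps=0.1)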

source
MCMCChains.rafterydiagMethod
rafterydiag(x::Vector{<:Real}; q, r, s, eps, range)
rafterydiag(chains::Chains; sections, q, r, s, eps)

Raftery and Lewis diagnostic. The diagnostic estimates the number of iterations and the burn-in required to estimate the quantile q to within an accuracy of ±r with probability s.
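
A sketch with the classic Raftery–Lewis settings, estimating the 2.5% quantile to within ±0.005 with probability 0.95:

using MCMCChains: rafterydiag

x = randn(5000)
rafterydiag(x; q=0.025, r=0.005, s=0.95)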

source
MCMCChains.rstarMethod
rstar([rng,] classif::Supervised, chains::Chains; kwargs...)
rstar([rng,] classif::Supervised, x::AbstractMatrix, y::AbstractVector; kwargs...)

Compute the $R^*$ convergence diagnostic of MCMC.

This implementation is an adaptation of Algorithms 1 and 2 described by [LambertVehtari2020]. Note that the correctness of the statistic depends on the convergence of the classifier used internally. You can inspect the training of the classifier by adjusting the verbosity level.

Keyword Arguments

  • subset = 0.8: Fraction of samples used to train the classifier, i.e. 0.8 implies 80% of the samples are used for training.
  • iterations = 10: Number of iterations used to estimate the statistic. If the classifier is not probabilistic, i.e. does not return class probabilities, a value of one is advisable.
  • verbosity = 0: Verbosity level used during fitting of the classifier.

Example

using MCMCChains
using MLJModels
using Statistics

XGBoost = @load XGBoostClassifier verbosity=0

# Three chains of 100 identical draws for two parameters: the classifier
# cannot distinguish the chains, so the statistic should be close to 1.
chn = Chains(fill(4, 100, 2, 3))

Rs = rstar(XGBoost(), chn; iterations=20)
R = round(mean(Rs); digits=0)
source
MCMCChains.BDAESSMethodType
BDAESSMethod <: AbstractESSMethod

The BDAESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.

It is based on the discussion by [Vehtari2019] and uses the variogram estimator of the autocorrelation function discussed in [Gelman2013].

source
MCMCChains.ESSMethodType
ESSMethod <: AbstractESSMethod

The ESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.

It is based on the discussion by [Vehtari2019] and uses the biased estimator of the autocovariance, as discussed by [Geyer1992]. In contrast to Geyer, the divisor n - 1 is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.

source
MCMCChains.FFTESSMethodType
FFTESSMethod <: AbstractESSMethod

The FFTESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.

It is based on the discussion by [Vehtari2019] and uses the biased estimator of the autocovariance, as discussed by [Geyer1992]. In contrast to Geyer, the divisor n - 1 is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.

In contrast to ESSMethod, this method uses fast Fourier transforms (FFTs) for estimating the autocorrelation.
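
A sketch of selecting one of the estimator types, assuming ess (documented below) forwards a method keyword to the ESS computation:

using MCMCChains

chn = Chains(randn(500, 2, 4))     # synthetic chains
ess(chn; method=ESSMethod())       # Geyer-style estimator
ess(chn; method=FFTESSMethod())    # same estimator, FFT-based autocorrelation
ess(chn; method=BDAESSMethod())    # variogram estimator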

source
MCMCChains.copyto_split!Method
copyto_split!(out::AbstractMatrix, x::AbstractMatrix)

Copy the elements of matrix x to matrix out, splitting each column of x into two halves that are stored as separate columns of out.

If the number of rows in x is odd, the middle sample, at index (size(x, 1) + 1) / 2, is dropped from each column.
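
A sketch of the splitting behavior; the exact mapping of halves to columns of out is an assumption here:

using MCMCChains: copyto_split!

x = reshape(collect(1.0:10.0), 5, 2)   # 5 draws from each of 2 chains
out = Matrix{Float64}(undef, 2, 4)     # room for the four half-chains
copyto_split!(out, x)                  # row 3 of each column of x is dropped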

source
MCMCChains.essMethod
ess(chains::Chains; kwargs...)

Estimate the effective sample size and the potential scale reduction.
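
A short usage sketch on synthetic chains (parameter names made up):

using MCMCChains

chn = Chains(randn(1000, 3, 4), [:a, :b, :c])
ess(chn)   # per-parameter effective sample size and potential scale reduction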

source