Diagnostics
MCMCChains.discretediag — Method

discretediag(chains::Chains{<:Real}; sections, frac, method, nsim)
Discrete diagnostic, where method can be one of :weiss, :hangartner, :DARBOOT, :MCBOOT, :billingsley, and :billingsleyBOOT.
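A minimal usage sketch (the chain values and the method choice are illustrative, not defaults):

using MCMCChains
chn = Chains(rand(1:4, 2000, 3, 2))   # integer-valued draws: 2000 iterations, 3 parameters, 2 chains
discretediag(chn; method=:hangartner)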
MCMCChains.weiss — Method

weiss(X::AbstractMatrix)
Assess the convergence of the MCMC chains with the Weiss procedure.
It computes $\frac{X^2}{c}$ and evaluates a p-value from the $\chi^2$ distribution with $(|R| - 1)(s - 1)$ degrees of freedom.
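As an illustration of the final step only, a sketch of the p-value computation with hypothetical quantities (weiss computes these internally):

using Distributions
X2_over_c = 12.3                       # hypothetical value of the statistic X^2/c
R_card, s = 5, 3                       # hypothetical |R| (number of states) and number of chains
df = (R_card - 1) * (s - 1)
pvalue = ccdf(Chisq(df), X2_over_c)    # upper-tail probability of the chi-squared distribution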
MCMCChains.gelmandiag — Method

gelmandiag(chains::AbstractArray{<:Real,3}; kwargs...)
Gelman, Rubin and Brooks diagnostics.
MCMCChains.gelmandiag_multivariate — Method

gelmandiag_multivariate(chains::AbstractArray{<:Real,3}; kwargs...)
Multivariate Gelman, Rubin and Brooks diagnostics.
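A minimal sketch covering both methods, assuming the array is ordered as draws × parameters × chains:

using MCMCChains
vals = randn(1000, 2, 4)       # 1000 draws of 2 parameters from 4 chains (assumed ordering)
gelmandiag(vals)               # univariate diagnostic per parameter
gelmandiag_multivariate(vals)  # single multivariate diagnostic across parameters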
MCMCChains.gewekediag — Method

gewekediag(x::Vector{<:Real}; first, last, etype)
gewekediag(chains::Chains; sections, first, last, etype, kwargs...)
Geweke diagnostic.
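For example, on a single vector of draws (illustrative data):

using MCMCChains
x = randn(1000)
gewekediag(x)   # compares the mean of an early window with that of a late window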
MCMCChains.heideldiag — Method

heideldiag(x::Vector{<:Real}; alpha, eps, etype, start, args...)
heideldiag(chains::Chains; sections, alpha, eps, etype, args...)

Heidelberger and Welch diagnostic.
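For example (illustrative data):

using MCMCChains
x = randn(1000)
heideldiag(x)   # stationarity and half-width tests on the draws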
MCMCChains.rafterydiag — Method

rafterydiag(x::Vector{<:Real}; q, r, s, eps, range)
rafterydiag(chains::Chains; sections, q, r, s, eps)

Raftery and Lewis diagnostic.
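For example (the default quantile settings need a few thousand draws, hence the longer vector):

using MCMCChains
x = randn(5000)
rafterydiag(x)  # run-length estimates for the quantile of interest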
MCMCChains.rstar — Method

rstar([rng,] classif::Supervised, chains::Chains; kwargs...)
rstar([rng,] classif::Supervised, x::AbstractMatrix, y::AbstractVector; kwargs...)
Compute the $R^*$ convergence diagnostic of MCMC.
This implementation is an adaptation of Algorithms 1 and 2 described by [LambertVehtari2020]. Note that the correctness of the statistic depends on the convergence of the classifier used internally. You can inspect the training of the classifier by adjusting the verbosity level.
Keyword Arguments

- subset = 0.8: Subset used to train the classifier, i.e., 0.8 implies 80% of the samples are used.
- iterations = 10: Number of iterations used to estimate the statistic. If the classifier is not probabilistic, i.e., does not return class probabilities, it is advisable to use a value of one.
- verbosity = 0: Verbosity level used during fitting of the classifier.
Example

using MCMCChains, MLJModels
using Statistics: mean

XGBoost = @load XGBoostClassifier verbosity=0
chn = Chains(fill(4.0, 100, 2, 3))        # 100 identical draws of 2 parameters from 3 chains
Rs = rstar(XGBoost(), chn; iterations=20)
R = round(mean(Rs); digits=0)             # ≈ 1, since indistinguishable chains imply convergence
MCMCChains.BDAESSMethod — Type

BDAESSMethod <: AbstractESSMethod

The BDAESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.

It is based on the discussion by [Vehtari2019] and uses the variogram estimator of the autocorrelation function discussed in [Gelman2013].
MCMCChains.ESSMethod — Type

ESSMethod <: AbstractESSMethod

The ESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.

It is based on the discussion by [Vehtari2019] and uses the biased estimator of the autocovariance, as discussed by [Geyer1992]. In contrast to Geyer, the divisor n - 1 is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.
MCMCChains.FFTESSMethod — Type

FFTESSMethod <: AbstractESSMethod

The FFTESSMethod uses a standard algorithm for estimating the effective sample size of MCMC chains.

It is based on the discussion by [Vehtari2019] and uses the biased estimator of the autocovariance, as discussed by [Geyer1992]. In contrast to Geyer, the divisor n - 1 is used in the estimation of the autocovariance to obtain the unbiased estimator of the variance for lag 0.

In contrast to ESSMethod, this method uses fast Fourier transforms (FFTs) for estimating the autocorrelation.
MCMCChains.copyto_split! — Method

copyto_split!(out::AbstractMatrix, x::AbstractMatrix)

Copy the elements of matrix x to matrix out, in which each column of x is split. If the number of rows in x is odd, the sample at index (size(x, 1) + 1) / 2 is dropped.
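A sketch of the intended shapes, under the assumption (inferred from the description above) that out holds half the retained rows and twice the columns of x:

using MCMCChains
x = [1.0 4.0; 2.0 5.0; 3.0 6.0]     # 3 rows (odd), 2 columns
out = Matrix{Float64}(undef, 1, 4)  # each column splits into two one-row halves
MCMCChains.copyto_split!(out, x)    # the middle sample (row 2) of each column is dropped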
MCMCChains.ess — Method

ess(chains::Chains; kwargs...)
Estimate the effective sample size and the potential scale reduction.
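A minimal sketch; note that passing the estimator via a method keyword is an assumption based on the ESS method types above and may differ between MCMCChains versions:

using MCMCChains
chn = Chains(randn(1000, 3, 4))  # 1000 draws, 3 parameters, 4 chains
ess(chn)                         # default estimator
ess(chn; method=FFTESSMethod())  # FFT-based autocorrelation estimates (assumed keyword)
ess(chn; method=BDAESSMethod())  # variogram estimator (assumed keyword)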
- [LambertVehtari2020]: Lambert, B., & Vehtari, A. (2020). $R^*$: A robust MCMC convergence diagnostic with uncertainty using gradient-boosted machines. arXiv preprint. https://arxiv.org/abs/2003.07900
- [Gelman2013]: Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. CRC Press.
- [Geyer1992]: Geyer, C. J. (1992). Practical Markov Chain Monte Carlo. Statistical Science, 473-483. https://projecteuclid.org/euclid.ss/1177011137
- [Vehtari2019]: Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC. Bayesian Analysis. https://arxiv.org/pdf/1903.08008.pdf