Example: Logistic Regression with Random Effects
We will use the Seeds example for demonstration. This example concerns the proportion of seeds that germinated on each of 21 plates. Here, we transform the data into a NamedTuple:
data = (
r = [10, 23, 23, 26, 17, 5, 53, 55, 32, 46, 10, 8, 10, 8, 23, 0, 3, 22, 15, 32, 3],
n = [39, 62, 81, 51, 39, 6, 74, 72, 51, 79, 13, 16, 30, 28, 45, 4, 12, 41, 30, 51, 7],
x1 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
x2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
N = 21,
)

where r[i] is the number of germinated seeds and n[i] is the total number of seeds on the $i$-th plate. Let $p_i$ be the probability of germination on the $i$-th plate. Then the model is defined by:
\[\begin{aligned} b_i &\sim \text{Normal}(0, \tau) \\ \text{logit}(p_i) &= \alpha_0 + \alpha_1 x_{1 i} + \alpha_2 x_{2i} + \alpha_{12} x_{1i} x_{2i} + b_{i} \\ r_i &\sim \text{Binomial}(p_i, n_i) \end{aligned}\]
where $x_{1i}$ and $x_{2i}$ are the seed type and root extract of the $i$-th plate, and $\tau$ is the precision (inverse variance) of the random effects, following the BUGS parameterization of the normal distribution. The original BUGS program for the model is:
model
{
for( i in 1 : N ) {
r[i] ~ dbin(p[i],n[i])
b[i] ~ dnorm(0.0,tau)
logit(p[i]) <- alpha0 + alpha1 * x1[i] + alpha2 * x2[i] +
alpha12 * x1[i] * x2[i] + b[i]
}
alpha0 ~ dnorm(0.0, 1.0E-6)
alpha1 ~ dnorm(0.0, 1.0E-6)
alpha2 ~ dnorm(0.0, 1.0E-6)
alpha12 ~ dnorm(0.0, 1.0E-6)
tau ~ dgamma(0.001, 0.001)
sigma <- 1 / sqrt(tau)
}

Modeling Language
Writing a Model in BUGS
Language References:
Implementations in C++ and R:
- JAGS and its user manual
- Nimble
Language Syntax:
Writing a Model in Julia
We provide a macro, @bugs, which allows users to write model definitions using Julia syntax:
model_def = @bugs begin
for i in 1:N
r[i] ~ dbin(p[i], n[i])
b[i] ~ dnorm(0.0, tau)
p[i] = logistic(alpha0 + alpha1 * x1[i] + alpha2 * x2[i] + alpha12 * x1[i] * x2[i] + b[i])
end
alpha0 ~ dnorm(0.0, 1.0E-6)
alpha1 ~ dnorm(0.0, 1.0E-6)
alpha2 ~ dnorm(0.0, 1.0E-6)
alpha12 ~ dnorm(0.0, 1.0E-6)
tau ~ dgamma(0.001, 0.001)
sigma = 1 / sqrt(tau)
end

BUGS syntax carries over to Julia almost one-to-one, and the required modifications are minor: curly braces are replaced with begin ... end blocks, and for loops do not require parentheses. In addition, Julia uses f(x) = ... as shorthand for function definition, so BUGS' link-function syntax (a function call on the left-hand side of an assignment) is disallowed. Instead, users call the inverse of the link function on the right-hand side, as in the logistic(...) line above.
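For example, the BUGS statement logit(p[i]) <- ... becomes p[i] = logistic(...) in the Julia-native model above. As a minimal sketch of this inverse relationship (assuming logit and logistic from LogExpFunctions.jl):

using LogExpFunctions: logit, logistic

η = logit(0.3)       # map a probability to the real line
logistic(η) ≈ 0.3    # true: logistic is the inverse of logit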
Support for Legacy BUGS Programs
The @bugs macro also works with original (R-like) BUGS syntax:
model_def = @bugs("""
model{
for( i in 1 : N ) {
r[i] ~ dbin(p[i],n[i])
b[i] ~ dnorm(0.0,tau)
logit(p[i]) <- alpha0 + alpha1 * x1[i] + alpha2 * x2[i] +
alpha12 * x1[i] * x2[i] + b[i]
}
alpha0 ~ dnorm(0.0,1.0E-6)
alpha1 ~ dnorm(0.0,1.0E-6)
alpha2 ~ dnorm(0.0,1.0E-6)
alpha12 ~ dnorm(0.0,1.0E-6)
tau ~ dgamma(0.001,0.001)
sigma <- 1 / sqrt(tau)
}
""", true, true)By default, @bugs will translate R-style variable names like a.b.c to a_b_c, user can pass false as the second argument to disable this. User can also pass true as the third argument if model { } enclosure is not present in the BUGS program. We still encourage users to write new programs using the Julia-native syntax, because of better debuggability and perks like syntax highlighting.
Basic Workflow
Compilation
The model definition and the data are the two required inputs for compilation; initializations are optional. The compile function creates a BUGSModel, which implements the LogDensityProblems.jl interface.
compile(model_def::Expr, data::NamedTuple)

And with initializations:
compile(model_def::Expr, data::NamedTuple, initializations::NamedTuple)

Using the model definition and data we defined earlier, we can compile the model:
model = compile(model_def, data)

BUGSModel (parameters are in transformed (unconstrained) space, with dimension 26):
Model parameters:
alpha2
b[21], b[20], b[19], b[18], b[17], b[16], b[15], b[14], b[13], b[12], b[11], b[10], b[9], b[8], b[7], b[6], b[5], b[4], b[3], b[2], b[1]
tau
alpha12
alpha1
alpha0
Variable sizes and types:
b: size = (21,), type = Vector{Float64}
p: size = (21,), type = Vector{Float64}
n: size = (21,), type = Vector{Int64}
alpha2: type = Float64
sigma: type = Float64
alpha0: type = Float64
alpha12: type = Float64
N: type = Int64
tau: type = Float64
alpha1: type = Float64
r: size = (21,), type = Vector{Int64}
x1: size = (21,), type = Vector{Int64}
x2: size = (21,), type = Vector{Int64}

When no initializations are given, parameter values are sampled from the prior distributions in the original space.
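Since the compiled BUGSModel implements the LogDensityProblems.jl interface, we can already query its dimension and evaluate the log density at a point in the transformed space (a minimal sketch; the value depends on the point chosen):

using LogDensityProblems

D = LogDensityProblems.dimension(model)   # 26 for this model
θ = rand(D)                               # a point in the transformed (unconstrained) space
LogDensityProblems.logdensity(model, θ)   # log joint density of the model at θ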
We can provide initializations:
initializations = (alpha0 = 1.0, alpha1 = 1.0)
compile(model_def, data, initializations)

BUGSModel (parameters are in transformed (unconstrained) space, with dimension 26):
Model parameters:
alpha2
b[21], b[20], b[19], b[18], b[17], b[16], b[15], b[14], b[13], b[12], b[11], b[10], b[9], b[8], b[7], b[6], b[5], b[4], b[3], b[2], b[1]
tau
alpha12
alpha1
alpha0
Variable sizes and types:
b: size = (21,), type = Vector{Float64}
p: size = (21,), type = Vector{Float64}
n: size = (21,), type = Vector{Int64}
alpha2: type = Float64
sigma: type = Float64
alpha0: type = Float64
alpha12: type = Float64
N: type = Int64
tau: type = Float64
alpha1: type = Float64
r: size = (21,), type = Vector{Int64}
x1: size = (21,), type = Vector{Int64}
x2: size = (21,), type = Vector{Int64}
We can also initialize parameters after compilation:
initialize!(model, initializations)

BUGSModel (parameters are in transformed (unconstrained) space, with dimension 26):
Model parameters:
alpha2
b[21], b[20], b[19], b[18], b[17], b[16], b[15], b[14], b[13], b[12], b[11], b[10], b[9], b[8], b[7], b[6], b[5], b[4], b[3], b[2], b[1]
tau
alpha12
alpha1
alpha0
Variable sizes and types:
b: size = (21,), type = Vector{Float64}
p: size = (21,), type = Vector{Float64}
n: size = (21,), type = Vector{Int64}
alpha2: type = Float64
sigma: type = Float64
alpha0: type = Float64
alpha12: type = Float64
N: type = Int64
tau: type = Float64
alpha1: type = Float64
r: size = (21,), type = Vector{Int64}
x1: size = (21,), type = Vector{Int64}
x2: size = (21,), type = Vector{Int64}
initialize! also accepts a flat vector. In this case, the vector must have the same length as the number of parameters, and the values are given in the space the model currently uses (here, the transformed, unconstrained space):
initialize!(model, rand(26))

BUGSModel (parameters are in transformed (unconstrained) space, with dimension 26):
Model parameters:
alpha2
b[21], b[20], b[19], b[18], b[17], b[16], b[15], b[14], b[13], b[12], b[11], b[10], b[9], b[8], b[7], b[6], b[5], b[4], b[3], b[2], b[1]
tau
alpha12
alpha1
alpha0
Variable sizes and types:
b: size = (21,), type = Vector{Float64}
p: size = (21,), type = Vector{Float64}
n: size = (21,), type = Vector{Int64}
alpha2: type = Float64
sigma: type = Float64
alpha0: type = Float64
alpha12: type = Float64
N: type = Int64
tau: type = Float64
r: size = (21,), type = Vector{Int64}
alpha1: type = Float64
x1: size = (21,), type = Vector{Int64}
x2: size = (21,), type = Vector{Int64}
Inference
For gradient-based inference, compile your model with an AD backend using the adtype parameter (see Automatic Differentiation for details). We use AdvancedHMC.jl:
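# Packages assumed to be loaded for this example (a sketch; exact requirements
# may vary across JuliaBUGS versions):
using JuliaBUGS
using ADTypes, ReverseDiff             # AutoReverseDiff and its AD backend
using AdvancedHMC, AbstractMCMC        # NUTS and the generic sample interface
using LogDensityProblems, MCMCChains   # dimension query and the Chains output type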
# Compile with gradient support
model = compile(model_def, data; adtype=AutoReverseDiff(compile=true))
n_samples, n_adapts = 2000, 1000
D = LogDensityProblems.dimension(model); initial_θ = rand(D)
samples_and_stats = AbstractMCMC.sample(
model,
NUTS(0.8),
n_samples;
chain_type = Chains,
n_adapts = n_adapts,
init_params = initial_θ,
discard_initial = n_adapts,
progress = false
)
describe(samples_and_stats)

[ Info: Found initial step size 0.2
Chains MCMC chain (2000×40×1 Array{Real, 3}):
Iterations = 1001:1:3000
Number of chains = 1
Samples per chain = 2000
parameters = tau, alpha12, alpha2, alpha1, alpha0, b[21], b[20], b[19], b[18], b[17], b[16], b[15], b[14], b[13], b[12], b[11], b[10], b[9], b[8], b[7], b[6], b[5], b[4], b[3], b[2], b[1], sigma
internals = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, is_adapt
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Real Float64 Float64 ⋯
tau 40.5870 77.5883 10.3204 81.1993 72.9377 1.0074 ⋯
alpha12 -0.8169 0.4070 0.0129 992.1699 1092.5879 0.9995 ⋯
alpha2 1.3457 0.2521 0.0079 1001.3854 1083.7476 1.0003 ⋯
alpha1 0.0811 0.3069 0.0101 944.8872 903.2342 1.0009 ⋯
alpha0 -0.5494 0.1819 0.0057 1006.4929 765.2733 0.9997 ⋯
b[21] -0.0422 0.2774 0.0059 2293.7414 1065.2162 1.0047 ⋯
b[20] 0.2004 0.2510 0.0117 538.0399 791.9514 0.9999 ⋯
b[19] -0.0208 0.2308 0.0055 1787.0056 988.0991 1.0008 ⋯
b[18] 0.0455 0.2326 0.0063 1530.0132 860.6713 0.9996 ⋯
b[17] -0.1892 0.2888 0.0107 769.3734 975.1411 1.0006 ⋯
b[16] -0.1164 0.2815 0.0087 1231.2027 810.5283 0.9996 ⋯
b[15] 0.2195 0.2607 0.0129 471.6077 1068.6300 1.0040 ⋯
b[14] -0.1385 0.2521 0.0070 1378.6153 966.5439 1.0004 ⋯
b[13] -0.0680 0.2472 0.0056 2120.2268 1171.4995 0.9996 ⋯
b[12] 0.1138 0.2679 0.0095 930.9378 1033.7019 1.0037 ⋯
⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱
1 column and 12 rows omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
tau 3.2681 8.2023 15.3543 31.9706 302.3962
alpha12 -1.6149 -1.0799 -0.8178 -0.5471 -0.0260
alpha2 0.8636 1.1765 1.3448 1.5131 1.8638
alpha1 -0.5511 -0.1110 0.0838 0.2797 0.6727
alpha0 -0.9097 -0.6694 -0.5504 -0.4334 -0.1894
b[21] -0.6625 -0.1862 -0.0261 0.1142 0.4890
b[20] -0.2054 0.0258 0.1662 0.3472 0.7826
b[19] -0.5044 -0.1437 -0.0231 0.1036 0.4608
b[18] -0.4023 -0.0927 0.0335 0.1746 0.5430
b[17] -0.8794 -0.3282 -0.1499 -0.0069 0.2965
b[16] -0.7577 -0.2576 -0.0837 0.0492 0.3976
b[15] -0.1724 0.0404 0.1697 0.3620 0.8324
b[14] -0.7103 -0.2791 -0.1109 0.0217 0.3085
b[13] -0.5865 -0.2077 -0.0502 0.0647 0.4168
b[12] -0.3397 -0.0491 0.0770 0.2526 0.7179
⋮ ⋮ ⋮ ⋮ ⋮ ⋮
12 rows omitted

These results are consistent with those reported in the OpenBUGS Seeds example.
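Individual parameters can also be extracted from the chain and summarized directly (a small sketch assuming MCMCChains.jl and the Statistics standard library; compare with the summary table above):

using Statistics

mean(samples_and_stats[:alpha0])                              # posterior mean of alpha0
quantile(vec(samples_and_stats[:sigma]), [0.025, 0.5, 0.975]) # posterior quantiles of sigma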
Next Steps
- Automatic Differentiation - AD backends and configuration
- Evaluation Modes - Different log density computation modes
- Auto-Marginalization - Gradient-based inference with discrete variables
- Parallel Sampling - Multi-threaded and distributed sampling
More Examples
We have transcribed all the examples from the first volume of the BUGS Examples (original and transcribed versions). All programs and data are included and can be compiled following the steps described in this tutorial.