GeneralisedFilters
Installation
In the julia REPL:
] add GeneralisedFilters
Documentation
GeneralisedFilters provides implementations of various filtering and smoothing algorithms for state-space models (SSMs). The goal of the package is to provide a modular and extensible framework for implementing advanced algorithms including Rao-Blackwellised particle filters, two-filter smoothers, and particle Gibbs/conditional SMC. Performance is a primary focus of this work, with type stability, GPU-acceleration, and efficient history storage being key design goals.
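As a minimal sketch of the intended workflow (mirroring the Example sections under filter and smooth below), the following builds a small linear Gaussian SSM with arbitrary illustrative matrices and runs a Kalman filter over dummy observations. It assumes create_homogeneous_linear_gaussian_model and KalmanFilter are available as shown in those examples; the specific matrix values carry no meaning.

using GeneralisedFilters
using LinearAlgebra

# Illustrative 2D latent state observed through a 1D noisy measurement
μ0 = zeros(2); Σ0 = Matrix(1.0I, 2, 2)
A = [0.9 0.1; 0.0 0.9]; b = zeros(2); Q = 0.1 * Matrix(1.0I, 2, 2)
H = [1.0 0.0]; c = zeros(1); R = fill(0.2, 1, 1)

model = create_homogeneous_linear_gaussian_model(μ0, Σ0, A, b, Q, H, c, R)

# Dummy data standing in for real observations
observations = [randn(1) for _ in 1:50]
state, ll = filter(model, KalmanFilter(), observations)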
Interface
GeneralisedFilters.AbstractLikelihood — Type
AbstractLikelihood
Abstract type for backward likelihood representations used in smoothing and ancestor sampling.
Subtypes represent the predictive likelihood p(y | x) in different forms depending on the state space structure (continuous Gaussian vs discrete).
GeneralisedFilters.AncestorCallback — Type
AncestorCallback
A callback for sparse ancestry storage, which preallocates and returns a populated ParticleTree object.
GeneralisedFilters.AuxiliaryResampler — Type
AuxiliaryResampler
A resampling scheme for multistage particle resampling with auxiliary weights.
GeneralisedFilters.BackwardDiscretePredictor — Type
BackwardDiscretePredictor <: AbstractBackwardPredictor
Algorithm to recursively compute the backward likelihood β_t(i) = p(y_{t:T} | x_t = i) for discrete state space models.
All computations are performed in log-space using logsumexp for numerical stability. The resulting DiscreteLikelihood stores log β values internally.
GeneralisedFilters.BackwardInformationPredictor — Type
BackwardInformationPredictor(; jitter=nothing, initial_jitter=nothing)
An algorithm to recursively compute the predictive likelihood p(y_{t:T} | x_t) of a linear Gaussian state space model in information form.
Fields
- jitter::Union{Nothing, Real}: Optional value added to the precision matrix Ω after the backward predict step to improve numerical stability. If nothing, no jitter is applied.
- initial_jitter::Union{Nothing, Real}: Optional value added to the precision matrix Ω at initialization to improve numerical stability.
This implementation is based on https://arxiv.org/pdf/1505.06357
GeneralisedFilters.DenseAncestorCallback — Type
DenseAncestorCallback
A callback for dense ancestry storage, which fills a DenseParticleContainer.
GeneralisedFilters.DiscreteFilter — Type
DiscreteFilter <: AbstractFilter
Forward filtering algorithm for discrete (finite) state space models.
Computes the filtered distribution π_t(i) = p(x_t = i | y_{1:t}) recursively.
GeneralisedFilters.DiscreteLikelihood — Type
DiscreteLikelihood <: AbstractLikelihood
A container representing the backward likelihood β_t(i) = p(y | x = i) for discrete state spaces, stored in log-space for numerical stability.
This representation is used in backward filtering algorithms for discrete SSMs (HMMs) and Rao-Blackwellised particle filtering with discrete inner states.
Fields
- log_β::VT: Vector of log backward likelihoods, where log_β[i] = log p(y | x = i)
See also
BackwardDiscretePredictor: Algorithm that uses this representation
GeneralisedFilters.DiscreteSmoother — Type
DiscreteSmoother <: AbstractSmoother
A forward-backward smoother for discrete state space models.
GeneralisedFilters.HierarchicalState — Type
A container for a sampled state from a hierarchical SSM, with separation between the outer and inner dimensions. Note this differs from an RBState in that the inner state is a sample rather than a conditional distribution.
GeneralisedFilters.InformationLikelihood — Type
InformationLikelihood <: AbstractLikelihood
A container representing an unnormalized Gaussian likelihood p(y | x) in information form, parameterized by natural parameters (λ, Ω).
The unnormalized log-likelihood is given by: log p(y | x) ∝ λ'x - (1/2)x'Ωx
This representation is particularly useful in backward filtering algorithms and Rao-Blackwellised particle filtering, where it represents the predictive likelihood p(y_{t:T} | x_t) conditioned on future observations.
Fields
- λ::λT: The natural parameter vector (information vector)
- Ω::ΩT: The natural parameter matrix (information/precision matrix)
See also
- natural_params: Extract the natural parameters (λ, Ω)
- BackwardInformationPredictor: Algorithm that uses this representation
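As a point of reference, evaluating the unnormalized log-likelihood λ'x - (1/2)x'Ωx from a pair of natural parameters is plain linear algebra; the sketch below uses illustrative values and does not construct the package's InformationLikelihood type.

using LinearAlgebra

λ = [1.0, -0.5]                 # illustrative information vector
Ω = [2.0 0.3; 0.3 1.5]          # illustrative information (precision) matrix

# log p(y | x) ∝ λ'x - (1/2) x'Ωx, up to an additive constant
unnorm_loglik(x) = dot(λ, x) - 0.5 * dot(x, Ω * x)

unnorm_loglik([0.2, 0.4])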
GeneralisedFilters.KalmanFilter — Type
KalmanFilter(; jitter=nothing)
Kalman filter for linear Gaussian state space models.
Fields
- jitter::Union{Nothing, Real}: Optional value added to the covariance matrix after the update step to improve numerical stability. If nothing, no jitter is applied.
GeneralisedFilters.KalmanGradientCache — Type
KalmanGradientCache
Cache of intermediate values from a Kalman filter update step for gradient computation.
Fields
- μ_pred: Predicted mean x̂_{n|n-1}
- Σ_pred: Predicted covariance P_{n|n-1}
- μ_filt: Filtered mean x̂_{n|n}
- Σ_filt: Filtered covariance P_{n|n}
- S: Innovation covariance
- K: Kalman gain
- z: Innovation (y - H*μ_pred - c)
- I_KH: I - K*H
GeneralisedFilters.Particle — Type
Particle
A container representing a single particle in a particle filter distribution, composed of a weighted sample (stored as a log weight) and its ancestor index.
GeneralisedFilters.ParticleDistribution — Type
ParticleDistribution
A container for particle filters which composes a collection of weighted particles (with their ancestries) into a distribution-like object.
Fields
- particles::VT: Vector of weighted particles
- ll_baseline::WT: Baseline for computing log-likelihood increment. A scalar that caches the unnormalized logsumexp of weights before update (for standard PF/guided filters) or a modified value that includes APF first-stage correction (for auxiliary PF).
GeneralisedFilters.ParticleTree — Type
ParticleTree
A sparse container for particle ancestry, which tracks the lineage of the filtered draws.
Reference
Jacob, P., Murray, L., & Rubenthaler, S. (2015). Path storage in the particle filter. doi:10.1007/s11222-013-9445-x
GeneralisedFilters.RBState — Type
RBState
A container representing a single state with a Rao-Blackwellised component. This differs from a HierarchicalState which contains a sample of the conditionally analytical state rather than the distribution itself.
Fields
- x::XT: The sampled state component
- z::ZT: The Rao-Blackwellised distribution component
GeneralisedFilters.ResamplerCallback — Type
ResamplerCallback
A callback which records the resampling indices over the course of the filtering algorithm. This is more of a debugging tool and visualizer for various resampling algorithms.
GeneralisedFilters.SRKalmanFilter — Type
SRKalmanFilter(; jitter=nothing)
Square-root Kalman filter for linear Gaussian state space models.
Uses QR factorization to propagate the Cholesky factor of the covariance matrix directly, avoiding the numerical instabilities associated with forming and subtracting full covariance matrices.
Fields
- jitter::Union{Nothing, Real}: Optional value added to the covariance matrix after the update step to improve numerical stability. If nothing, no jitter is applied.
Algorithm
The SRKF represents the covariance as Σ = U' * U where U is upper triangular.
Predict Step: Given filtered state:
- Form matrix A = [[√Q], [A*U']]
- QR factorize to obtain U_new (predicted square-root covariance)
Update Step: Given predicted state and observation y:
- Form matrix A = [[√R, H*U'], [0, U']]
- QR factorize: Q, B = qr(A) where B is upper triangular
- Extract U_new (posterior square-root covariance) from the bottom-right block
- Compute the Kalman gain from the factorization components
See also: KalmanFilter
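To make the predict step concrete, here is a minimal standalone sketch assuming the convention Σ = U'U with U upper triangular; it illustrates the QR identity only and is not the package's srkf_predict.

using LinearAlgebra

# One square-root predict step for dynamics x_{t+1} = A x_t + b + w, w ~ N(0, Q)
function sr_predict(μ, U, A, b, Q)
    μ_pred = A * μ + b
    sqrtQ = cholesky(Symmetric(Q)).U   # upper-triangular factor of Q
    M = [sqrtQ; U * A']                # stacked so that M'M = Q + A Σ A'
    Rqr = qr(M).R                      # upper triangular with Rqr'Rqr = M'M
    return μ_pred, UpperTriangular(Rqr)  # diagonal signs may still need correcting
end

μ, U = zeros(2), UpperTriangular(Matrix(1.0I, 2, 2))
A = [0.9 0.1; 0.0 0.9]; b = zeros(2); Q = 0.1 * Matrix(1.0I, 2, 2)
μp, Up = sr_predict(μ, U, A, b, Q)
Up' * Up ≈ A * (U' * U) * A' + Q       # predicted covariance is recovered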
GeneralisedFilters.TypelessBaseline — Type
TypelessBaseline
A lazy promotion for the computation of log-likelihood baselines given a collection of unweighted particles.
GeneralisedFilters.TypelessZero — Type
TypelessZero
A lazy promotion for uninitialized particle weights whose type is not yet known at the first simulation of a particle filter.
GeneralisedFilters._correct_cholesky_sign — Method
_correct_cholesky_sign(R)
Ensure the diagonal of an upper triangular matrix is positive.
QR factorization produces an upper triangular R that may have negative diagonal elements. For use as a Cholesky factor, the diagonal must be positive. This function multiplies rows by -1 as needed, in a StaticArray-compatible fashion.
GeneralisedFilters.ancestor_weight — Method
ancestor_weight(particle, dyn, algo, iter, ref_state; kwargs...)
Compute the full (unnormalized) log backward sampling weight for a particle.
This is a convenience function that combines the particle's filtering log-weight with the future conditional density:
\[\log \tilde{w}_{t|T}^{(i)} = \log w_t^{(i)} + \log p(x_{t+1:T}^*, y_{t+1:T} \mid x_{1:t}^{(i)}, y_{1:t})\]
Arguments
- particle::Particle: The candidate particle at time t
- dyn: The latent dynamics model
- algo: The filtering algorithm
- iter::Integer: The time step t
- ref_state: The reference trajectory state at time t+1
Returns
The log backward sampling weight (unnormalized).
See also: future_conditional_density
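For intuition, a small self-contained sketch of the Markov-case weight combination: hypothetical filtering log-weights are added to hypothetical transition log-densities (standing in for future_conditional_density), normalized in log-space, and used to draw an ancestor index.

# Inline logsumexp, used only for this sketch
logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

filtering_logw = [-1.2, -0.7, -2.3, -0.9]   # hypothetical log w_t^{(i)}
future_logdens = [-0.5, -3.0, -0.1, -1.4]   # hypothetical log f(x*_{t+1} | x_t^{(i)})

logw_tilde = filtering_logw .+ future_logdens        # unnormalized backward weights
probs = exp.(logw_tilde .- logsumexp(logw_tilde))    # normalized probabilities
ancestor = findfirst(cumsum(probs) .>= rand())       # sampled ancestor index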
GeneralisedFilters.backward_gradient_predict — Method
backward_gradient_predict(∂μ_pred, ∂Σ_pred, A)
Propagate gradients backward through the Kalman predict step (predicted → previous filtered).
This implements equations 10-11 from Parellier et al.
Arguments
- ∂μ_pred: Gradient of loss w.r.t. predicted mean ∂L/∂x̂_{n|n-1}
- ∂Σ_pred: Gradient of loss w.r.t. predicted covariance ∂L/∂P_{n|n-1}
- A: Dynamics matrix at this time step
Returns
A tuple (∂μ_filt_prev, ∂Σ_filt_prev) containing gradients w.r.t. the previous filtered state.
GeneralisedFilters.backward_gradient_update — Method
backward_gradient_update(∂μ_filt, ∂Σ_filt, cache, H, R)
Propagate gradients backward through the Kalman update step (filtered → predicted).
This implements equations 8-9 from Parellier et al., computing the gradients with respect to the predicted state from the gradients with respect to the filtered state.
Arguments
- ∂μ_filt: Gradient of loss w.r.t. filtered mean ∂L/∂x̂_{n|n}
- ∂Σ_filt: Gradient of loss w.r.t. filtered covariance ∂L/∂P_{n|n}
- cache: KalmanGradientCache from the forward pass
- H: Observation matrix at this time step
- R: Observation noise covariance at this time step
Returns
A tuple (∂μ_pred, ∂Σ_pred) containing gradients w.r.t. the predicted state.
GeneralisedFilters.backward_initialise — Function
backward_initialise(rng, obs, algo, iter, observation; kwargs...)
Initialize the backward likelihood at the final time step T.
This creates an initial representation of p(y_T | x_T), the likelihood of the final observation given the state at time T.
Arguments
- rng: Random number generator
- obs: The observation process model
- algo: The backward predictor algorithm
- iter::Integer: The final time step T
- observation: The observation y_T at time T
Returns
A representation of the likelihood p(y_T | x_T).
Implementations
- Linear Gaussian (LinearGaussianObservationProcess, BackwardInformationPredictor): Returns InformationLikelihood with λ = H'R⁻¹(y-c) and Ω = H'R⁻¹H
See also: backward_predict, backward_update
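The linear Gaussian case named above amounts to two matrix products; a minimal sketch with illustrative H, c, R, and y (returning plain arrays rather than an InformationLikelihood) is:

using LinearAlgebra

H = [1.0 0.0; 0.0 1.0]          # illustrative observation matrix
c = zeros(2)                    # illustrative observation offset
R = 0.2 * Matrix(1.0I, 2, 2)    # illustrative observation noise covariance
y = [0.3, -0.1]                 # final observation

Rinv = inv(Symmetric(R))
λ = H' * (Rinv * (y - c))       # information vector λ = H'R⁻¹(y - c)
Ω = H' * Rinv * H               # information matrix Ω = H'R⁻¹H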
GeneralisedFilters.backward_initialise — Method
backward_initialise(rng, obs, algo, iter, y; kwargs...)
Initialise a backward predictor at time T with observation y, forming the likelihood p(y_T | x_T).
GeneralisedFilters.backward_initialise — Method
backward_initialise(rng, obs, algo::BackwardDiscretePredictor, iter, y; kwargs...)
Initialize the backward likelihood at time T with observation y_T.
Returns a DiscreteLikelihood where log β_T(i) = log p(y_T | x_T = i).
GeneralisedFilters.backward_predict — Function
backward_predict(rng, dyn, algo, iter, state; kwargs...)
Perform a backward prediction step, propagating the likelihood backward through the dynamics.
Given a representation of p(y_{t+1:T} | x_{t+1}), compute p(y_{t+1:T} | x_t) by marginalizing over the transition:
p(y_{t+1:T} | x_t) = ∫ p(x_{t+1} | x_t) p(y_{t+1:T} | x_{t+1}) dx_{t+1}
Arguments
- rng: Random number generator
- dyn: The latent dynamics model
- algo: The backward predictor algorithm
- iter::Integer: The time step t (predicting from t to t+1)
- state: The backward likelihood p(y_{t+1:T} | x_{t+1})
Returns
The backward likelihood p(y_{t+1:T} | x_t).
Implementations
- Linear Gaussian (LinearGaussianLatentDynamics, BackwardInformationPredictor): Updates InformationLikelihood using the backward information filter equations.
See also: backward_initialise, backward_update
GeneralisedFilters.backward_predict — Method
backward_predict(rng, dyn, algo::BackwardDiscretePredictor, iter, state; kwargs...)
Backward prediction step: marginalize through dynamics without incorporating observations.
Takes p(y_{t+1:T} | x_{t+1}) and computes p(y_{t+1:T} | x_t) by marginalizing over x_{t+1}: p(y_{t+1:T} | x_t = i) = Σ_j P_{ij} p(y_{t+1:T} | x_{t+1} = j)
In log-space: log p(y_{t+1:T} | x_t = i) = logsumexp_j(log P_{ij} + log p(y_{t+1:T} | x_{t+1} = j))
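Written out, the log-space recursion is a logsumexp over each row of the log transition matrix; the sketch below uses an inline logsumexp and arbitrary illustrative values rather than the package's DiscreteLikelihood.

logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

P = [0.9 0.1; 0.2 0.8]              # illustrative transition matrix (rows sum to 1)
log_β_next = log.([0.3, 0.05])      # log p(y_{t+1:T} | x_{t+1} = j)

# log β_t(i) = logsumexp_j(log P_{ij} + log β_{t+1}(j))
log_β = [logsumexp(log.(P[i, :]) .+ log_β_next) for i in axes(P, 1)]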
GeneralisedFilters.backward_predict — Method
backward_predict(rng, dyn, algo, iter, state; prev_outer=nothing, next_outer=nothing, kwargs...)
Perform a backward prediction step to compute p(y_{t+1:T} | x_t) from p(y_{t+1:T} | x_{t+1}).
GeneralisedFilters.backward_smooth — Function
backward_smooth(dyn, algo, step, filtered, smoothed_next; predicted=nothing, kwargs...)
Perform a single backward smoothing step, computing the smoothed distribution at time step given the filtered distribution at time step and the smoothed distribution at time step+1.
For efficiency, the predicted distribution p(x_{t+1} | y_{1:t}) should be provided via the predicted keyword argument if available from the forward pass. If omitted, it will be recomputed internally, which duplicates work.
This implements the backward recursion of the forward-backward (Rauch-Tung-Striebel) smoother:
\[p(x_t | y_{1:T}) \propto p(x_t | y_{1:t}) \int \frac{p(x_{t+1} | x_t) \, p(x_{t+1} | y_{1:T})}{p(x_{t+1} | y_{1:t})} \, dx_{t+1}\]
Arguments
- dyn: The latent dynamics model
- algo: The filtering algorithm (determines the state representation)
- step::Integer: The time index t of the filtered state
- filtered: The filtered distribution $p(x_t | y_{1:t})$
- smoothed_next: The smoothed distribution $p(x_{t+1} | y_{1:T})$
- predicted=nothing: The predicted distribution $p(x_{t+1} | y_{1:t})$. Should be provided if available; if nothing, it is recomputed from filtered.
Returns
The smoothed distribution $p(x_t | y_{1:T})$.
Implementations
- Linear Gaussian (LinearGaussianLatentDynamics, KalmanFilter): Returns MvNormal using the RTS equations with smoothing gain $G = \Sigma_{\text{filt}} A^\top \Sigma_{\text{pred}}^{-1}$
See also: two_filter_smooth, smooth
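A minimal sketch of that RTS recursion for the linear Gaussian case, written directly in terms of means and covariances under the assumption that the predicted moments are available from the forward pass; this is an illustration, not the package's backward_smooth.

using LinearAlgebra

# One backward step: filtered (μ_f, Σ_f) at t, predicted (μ_p, Σ_p) and
# smoothed (μ_s1, Σ_s1) at t+1, dynamics matrix A
function rts_step(μ_f, Σ_f, μ_p, Σ_p, μ_s1, Σ_s1, A)
    G = (Σ_f * A') / Σ_p               # smoothing gain G = Σ_filt A' Σ_pred⁻¹
    μ_s = μ_f + G * (μ_s1 - μ_p)
    Σ_s = Σ_f + G * (Σ_s1 - Σ_p) * G'
    return μ_s, Σ_s
end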
GeneralisedFilters.backward_smooth — Method
backward_smooth(dyn, algo::DiscreteFilter, step, filtered, smoothed_next; predicted, kwargs...)
Perform one step of backward smoothing for discrete state spaces.
Computes γ_t(i) = π_t(i) * Σ_j [P_{ij} * γ_{t+1}(j) / π̂_{t+1}(j)]
where:
- π_t(i) is the filtered distribution at time t
- γ_{t+1}(j) is the smoothed distribution at time t+1
- π̂_{t+1}(j) is the predicted distribution at time t+1
- P_{ij} is the transition probability from state i to state j
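The same recursion in a few lines, kept in plain probability space for readability (the package works in log-space) and using arbitrary illustrative values:

π_t = [0.6, 0.4]                      # filtered distribution at time t
P = [0.9 0.1; 0.2 0.8]                # transition matrix
π_pred_next = P' * π_t                # predicted distribution at t+1
γ_next = [0.3, 0.7]                   # smoothed distribution at t+1

γ_t = π_t .* (P * (γ_next ./ π_pred_next))   # γ_t(i) = π_t(i) Σ_j P_{ij} γ_{t+1}(j) / π̂_{t+1}(j)
γ_t ./= sum(γ_t)                             # normalize against rounding error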
GeneralisedFilters.backward_update — Function
backward_update(obs, algo, iter, state, observation; kwargs...)
Incorporate an observation into the backward likelihood.
Given p(y_{t+1:T} | x_t), incorporate observation y_t to obtain p(y_{t:T} | x_t):
p(y_{t:T} | x_t) = p(y_t | x_t) × p(y_{t+1:T} | x_t)
Arguments
- obs: The observation process model
- algo: The backward predictor algorithm
- iter::Integer: The time step t
- state: The backward likelihood p(y_{t+1:T} | x_t)
- observation: The observation y_t at time t
Returns
The updated backward likelihood p(y_{t:T} | x_t).
Implementations
- Linear Gaussian (LinearGaussianObservationProcess, BackwardInformationPredictor): Adds observation information: λ += H'R⁻¹(y-c), Ω += H'R⁻¹H
See also: backward_initialise, backward_predict
GeneralisedFilters.backward_update — Method
backward_update(obs, algo, iter, state, y; kwargs...)
Incorporate an observation y at time t to compute p(y_{t:T} | x_t) from p(y_{t+1:T} | x_t).
GeneralisedFilters.backward_update — Method
backward_update(obs, algo::BackwardDiscretePredictor, iter, state, y; kwargs...)
Incorporate observation y_t into the backward likelihood.
Updates: log β(i) += log p(y_t | x_t = i)
This transforms p(y_{t+1:T} | x_t) into β_t = p(y_{t:T} | x_t).
GeneralisedFilters.compute_marginal_predictive_likelihood — Method
compute_marginal_predictive_likelihood(forward_dist::AbstractVector, backward_dist::DiscreteLikelihood)
Compute the marginal predictive likelihood p(y_{t+1:T} | y_{1:t}) for discrete states.
Given a predicted filtering distribution π_{t+1}(i) = p(x_{t+1} = i | y_{1:t}) and backward likelihood β_{t+1}(i) = p(y_{t+1:T} | x_{t+1} = i), computes:
p(y_{t+1:T} | y_{1:t}) = Σ_i π_{t+1}(i) * β_{t+1}(i)
All computations are performed in log-space for numerical stability.
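As a standalone illustration, the sum can be carried out in log-space with a logsumexp (inlined here) over log π_{t+1} + log β_{t+1}; the values are arbitrary and the package's method operates on a DiscreteLikelihood rather than raw vectors.

logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

log_π_pred = log.([0.55, 0.45])     # log p(x_{t+1} = i | y_{1:t})
log_β = log.([0.02, 0.3])           # log p(y_{t+1:T} | x_{t+1} = i)

log_marginal = logsumexp(log_π_pred .+ log_β)   # log p(y_{t+1:T} | y_{1:t})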
GeneralisedFilters.compute_marginal_predictive_likelihood — Method
compute_marginal_predictive_likelihood(forward_dist, backward_dist)
Compute the marginal predictive likelihood p(y_{t:T} | y_{1:t-1}) given a one-step predicted filtering distribution p(x_{t+1} | y_{1:t}) and a backward predictive likelihood p(y_{t+1:T} | x_{t+1}).
This Gaussian implementation is based on Lemma 1 of https://arxiv.org/pdf/1505.06357
GeneralisedFilters.filter — Method
filter([rng,] model, algo, observations; callback=nothing, kwargs...)
Run a filtering algorithm on a state-space model.
Performs sequential Bayesian inference by iterating through observations, calling predict and update at each time step.
Arguments
- rng::AbstractRNG: Random number generator (optional, defaults to default_rng())
- model::AbstractStateSpaceModel: The state-space model to filter
- algo::AbstractFilter: The filtering algorithm (e.g., KalmanFilter(), BootstrapFilter(N))
- observations::AbstractVector: Vector of observations y₁:ₜ
Keyword Arguments
- callback: Optional callback for recording intermediate states
- kwargs...: Additional arguments passed to model parameter functions
Returns
A tuple (state, log_likelihood) where:
- state: The final filtered state (algorithm-dependent type)
- log_likelihood: The total log-marginal likelihood log p(y₁:ₜ)
Example
model = create_homogeneous_linear_gaussian_model(μ0, Σ0, A, b, Q, H, c, R)
state, ll = filter(model, KalmanFilter(), observations)
GeneralisedFilters.future_conditional_density — Method
future_conditional_density(dyn, algo, iter, state, ref_state; kwargs...)
Compute the log conditional density of the future trajectory given the present state:
\[\log p(x_{t+1:T}, y_{t+1:T} \mid x_{1:t}, y_{1:t})\]
up to an additive constant that does not depend on $x_t$.
This function is the key computational primitive for backward simulation (BS) and ancestor sampling (AS) algorithms. The full backward sampling weight combines this with the filtering weight:
\[\tilde{w}_{t|T}^{(i)} \propto w_t^{(i)} \cdot p(x_{t+1:T}^*, y_{t+1:T} \mid x_{1:t}^{(i)}, y_{1:t})\]
Standard (Markov) Case
For Markovian models, the future conditional density factorizes as:
\[p(x_{t+1:T}, y_{t+1:T} \mid x_{1:t}, y_{1:t}) = f(x_{t+1} \mid x_t) \cdot p(y_{t+1:T} \mid x_{t+1:T})\]
The second factor is constant across candidate ancestors (since $x_{t+1:T}^*$ is fixed), so this function returns only the transition density:
\[\texttt{future\_conditional\_density} = \log f(x_{t+1}^* \mid x_t)\]
Rao-Blackwellised Case
For hierarchical models with Rao-Blackwellisation, the marginal outer state process is non-Markov due to the marginalized inner state. The future conditional density becomes:
\[p(u_{t+1:T}, y_{t+1:T} \mid u_{1:t}, y_{1:t}) \propto f(u_{t+1} \mid u_t) \cdot p(y_{t+1:T} \mid u_{1:T}, y_{1:t})\]
where $u$ denotes the outer (sampled) state. Unlike the Markov case, the second factor depends on the candidate ancestor through $u_{1:t}$. This is computed via the two-filter formula using the forward filtering distribution and backward predictive likelihood.
Arguments
- dyn: The latent dynamics model
- algo: The filtering algorithm
- iter::Integer: The time step t
- state: The candidate state at time t (contains filtering distribution for RB case)
- ref_state: The reference trajectory state at time t+1
Returns
The log future conditional density (up to additive constants independent of $x_t$).
Implementations
- Generic (LatentDynamics, AbstractFilter): Returns logdensity(dyn, iter, state, ref_state)
- Rao-Blackwellised (HierarchicalDynamics, RBPF): Combines the outer transition density with the marginal predictive likelihood. The ref_state.z must be an AbstractLikelihood (InformationLikelihood for Gaussian inner states, DiscreteLikelihood for discrete inner states).
See also: compute_marginal_predictive_likelihood, BackwardInformationPredictor, BackwardDiscretePredictor
GeneralisedFilters.gradient_A — Method
gradient_A(∂μ_pred, ∂Σ_pred, μ_prev, Σ_prev, A)
Compute gradient of NLL w.r.t. dynamics matrix A.
Derived via chain rule through μ_pred = A*μ_prev + b and Σ_pred = A*Σ_prev*A' + Q.
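Spelling out that chain rule (a standard matrix-calculus identity, stated here under the assumption that the covariance gradient is stored as a full matrix rather than a symmetric half), the gradient is:
\[\frac{\partial L}{\partial A} = \frac{\partial L}{\partial \mu_{\text{pred}}}\,\mu_{\text{prev}}^\top + \left(\frac{\partial L}{\partial \Sigma_{\text{pred}}} + \frac{\partial L}{\partial \Sigma_{\text{pred}}}^\top\right) A\,\Sigma_{\text{prev}}\]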
GeneralisedFilters.gradient_H — Method
gradient_H(∂μ_filt, ∂Σ_filt, cache, Σ_pred)
Compute gradient of NLL w.r.t. observation matrix H.
Derived via chain rule through z = y - H*μ_pred - c, S = H*Σ_pred*H' + R, and the update.
GeneralisedFilters.gradient_Q — Method
gradient_Q(∂Σ_pred)
Compute gradient of NLL w.r.t. process noise covariance Q.
Implements equation 13 from Parellier et al.: ∂L/∂Q = ∂L/∂P_{n|n-1}
GeneralisedFilters.gradient_R — Method
gradient_R(∂μ_filt, ∂Σ_filt, cache)
Compute gradient of NLL w.r.t. observation noise covariance R.
Implements equation 14 from Parellier et al.
GeneralisedFilters.gradient_b — Method
gradient_b(∂μ_pred)
Compute gradient of NLL w.r.t. dynamics offset b.
Derived via chain rule through μ_pred = A*μ_prev + b.
GeneralisedFilters.gradient_c — Method
gradient_c(∂μ_filt, cache)
Compute gradient of NLL w.r.t. observation offset c.
Derived via chain rule through z = y - H*μ_pred - c.
GeneralisedFilters.gradient_y — Method
gradient_y(∂μ_filt, cache)
Compute gradient of NLL w.r.t. observation y.
Implements equation 12 from Parellier et al.: ∂L/∂y = K'*∂L/∂μ_filt + ∂l/∂y
GeneralisedFilters.initialise — Function
initialise([rng,] model, algo; kwargs...)
Propose an initial state distribution from the prior.
Arguments
- rng: Random number generator (optional, defaults to default_rng())
- model: The state space model (or its prior component)
- algo: The filtering algorithm
Returns
The initial state distribution.
GeneralisedFilters.log_likelihoods — Method
log_likelihoods(state::DiscreteLikelihood)
Extract the log backward likelihoods from a DiscreteLikelihood.
GeneralisedFilters.marginalise! — Method
marginalise!(state::ParticleDistribution)
Compute the log-likelihood increment and normalize particle weights. This function:
- Computes LSE of current (post-observation) log-weights
- Calculates ll_increment = LSE_after - ll_baseline
- Normalizes weights by subtracting LSE_after
- Resets ll_baseline to 0.0
The ll_baseline field handles both standard particle filter and auxiliary particle filter cases through a single-scalar caching mechanism. For standard PF, ll_baseline equals the LSE before adding observation weights. For APF with resampling, it includes first-stage correction terms computed during the APF resampling step.
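The standard-PF path of this bookkeeping can be summarized in a few lines of log-space arithmetic (inline logsumexp, made-up weights; the APF path differs only in how ll_baseline is formed):

logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

log_w = [-1.0, -0.4, -2.2]               # log-weights before the observation update
ll_baseline = logsumexp(log_w)           # cached baseline

log_w .+= [-0.3, -1.1, -0.7]             # add observation log-likelihoods
lse_after = logsumexp(log_w)

ll_increment = lse_after - ll_baseline   # estimate of log p(y_t | y_{1:t-1})
log_w .-= lse_after                      # normalized log-weights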
GeneralisedFilters.natural_params — Method
natural_params(state::InformationLikelihood)
Extract the natural parameters (λ, Ω) from an InformationLikelihood.
Returns a tuple (λ, Ω) where λ is the information vector and Ω is the information/precision matrix.
GeneralisedFilters.predict — Function
predict([rng,] dyn, algo, iter, state, observation; kwargs...)
Propagate the filtered distribution forward in time using the dynamics model.
Arguments
- rng: Random number generator (optional)
- dyn: The latent dynamics model
- algo: The filtering algorithm
- iter::Integer: The current time step
- state: The filtered state distribution at time iter-1
- observation: The observation (may be used by some proposals)
Returns
The predicted state distribution at time iter.
GeneralisedFilters.smooth — Method
smooth([rng,] model, algo, observations; t_smooth=1, callback=nothing, kwargs...)
Run a forward-backward smoothing pass to compute the smoothed distribution at time t_smooth.
This function first runs a forward filtering pass using the Kalman filter, caching all filtered distributions, then performs backward smoothing using the Rauch-Tung-Striebel equations from time T back to t_smooth.
Arguments
- rng: Random number generator (optional, defaults to default_rng())
- model: A linear Gaussian state space model
- algo: The smoothing algorithm (e.g., KalmanSmoother())
- observations: Vector of observations y₁, ..., yₜ
Keyword Arguments
- t_smooth=1: The time step at which to return the smoothed distribution
- callback=nothing: Optional callback for the forward filtering pass
Returns
A tuple (smoothed, log_likelihood) where:
- smoothed: The smoothed distribution p(xₜ | y₁:ₜ) at time t_smooth
- log_likelihood: The total log-likelihood from the forward pass
Example
model = create_homogeneous_linear_gaussian_model(μ0, Σ0, A, b, Q, H, c, R)
smoothed, ll = smooth(model, KalmanSmoother(), observations)
See also: backward_smooth, filter
GeneralisedFilters.smooth — Method
smooth(rng, model::DiscreteStateSpaceModel, algo::DiscreteSmoother, observations; t_smooth=1, kwargs...)
Run forward-backward smoothing for discrete state space models.
Returns the smoothed distribution at time t_smooth and the log-likelihood.
GeneralisedFilters.srkf_predict — Method
srkf_predict(state, dyn_params)
Perform the square-root Kalman filter predict step.
Given the filtered state and dynamics parameters (A, b, Q), compute the predicted state using QR factorization.
GeneralisedFilters.srkf_update — Method
srkf_update(state, obs_params, observation, jitter)
Perform the square-root Kalman filter update step.
Given the predicted state, observation parameters (H, c, R), and observation y, compute the filtered state and log-likelihood using QR factorization.
GeneralisedFilters.step — Function
step([rng,] model, algo, iter, state, observation; kwargs...)
Perform a combined predict and update step of the filtering algorithm.
Arguments
- rng: Random number generator (optional)
- model: The state space model
- algo: The filtering algorithm
- iter::Integer: The current time step
- state: The current state distribution
- observation: The observation at time iter
Returns
A tuple (new_state, log_likelihood_increment).
GeneralisedFilters.two_filter_smooth — Function
two_filter_smooth(filtered, backward_lik)
Combine a forward filtered distribution with a backward predictive likelihood to obtain the smoothed distribution at a given time step.
The smoothed distribution is the (normalized) product:
\[p(x_t | y_{1:T}) \propto p(x_t | y_{1:t}) \times p(y_{t+1:T} | x_t)\]
where:
- filtered represents the forward filtered distribution $p(x_t | y_{1:t})$
- backward_lik represents the backward predictive likelihood $p(y_{t+1:T} | x_t)$
Note that backward_lik is a likelihood (function of x), not a distribution over x.
Arguments
- filtered: The filtered distribution $p(x_t | y_{1:t})$
- backward_lik: The backward predictive likelihood $p(y_{t+1:T} | x_t)$
Returns
The smoothed distribution $p(x_t | y_{1:T})$.
Implementations
- Linear Gaussian (MvNormal, InformationLikelihood): Combines using the product of Gaussians formula in information form.
Relation to compute_marginal_predictive_likelihood
Both functions take the same inputs but compute different quantities:
- two_filter_smooth returns the distribution $p(x_t | y_{1:T})$
- compute_marginal_predictive_likelihood returns the scalar $p(y_{t+1:T} | y_{1:t})$
See also: backward_smooth, compute_marginal_predictive_likelihood
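For the linear Gaussian case, the product above is the usual product of Gaussians in information form; a minimal sketch on plain arrays (illustrative values, not the package's MvNormal/InformationLikelihood types):

using LinearAlgebra

μ_f = [0.5, -0.2]; Σ_f = [0.4 0.1; 0.1 0.3]   # filtered moments p(x_t | y_{1:t})
λ = [1.0, 0.5]; Ω = [2.0 0.0; 0.0 1.0]        # backward likelihood natural parameters

Λ_f = inv(Symmetric(Σ_f))                     # filtered precision
Σ_s = inv(Symmetric(Λ_f + Ω))                 # smoothed covariance
μ_s = Σ_s * (Λ_f * μ_f + λ)                   # smoothed mean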
GeneralisedFilters.two_filter_smooth — Method
two_filter_smooth(filtered::AbstractVector, backward_lik::DiscreteLikelihood)
Combine forward filtered distribution with backward likelihood to get smoothed distribution.
Returns the normalized smoothed distribution γ_t(i) ∝ π_t(i) * β_t(i).
All computations are performed in log-space for numerical stability.
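Concretely, the log-space combination is an elementwise sum followed by a logsumexp normalization; the sketch below inlines logsumexp and uses arbitrary values in place of a DiscreteLikelihood.

logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

log_π = log.([0.6, 0.4])       # filtered log-probabilities π_t(i)
log_β = log.([0.05, 0.2])      # log backward likelihoods β_t(i)

log_γ = log_π .+ log_β
log_γ .-= logsumexp(log_γ)     # normalized smoothed distribution in log-space
γ = exp.(log_γ)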
GeneralisedFilters.update — Function
update(obs, algo, iter, state, observation; kwargs...)
Update the predicted distribution given an observation.
Arguments
- obs: The observation process model
- algo: The filtering algorithm
- iter::Integer: The current time step
- state: The predicted state distribution
- observation: The observation at time iter
Returns
A tuple (filtered_state, log_likelihood_increment).
GeneralisedFilters.update_with_cache — Method
update_with_cache(obs, algo, iter, state, observation; kwargs...)
Perform Kalman update and return cache for gradient computation.
This extends the standard update step to also return a KalmanGradientCache containing intermediate values needed for efficient backward gradient propagation.
Returns
A tuple (filtered_state, log_likelihood, cache) where:
- filtered_state: The posterior state as MvNormal
- log_likelihood: The log-likelihood increment
- cache: A KalmanGradientCache for use in backward gradient computation
GeneralisedFilters.will_resample — Function
will_resample(resampler::AbstractResampler, state::ParticleDistribution)
Determine whether a resampler will trigger resampling given the current particle state. For unconditional resamplers, always returns true. For conditional resamplers (e.g., ESSResampler), checks the resampling condition.