Vrije Universiteit Amsterdam
February/March 2025
Model Classes

LGM (linear Gaussian model)
  Observation: yt = Zt αt + dt + εt , εt ∼ N (0, Ht )
  State:       αt+1 = Tt αt + ct + Rt ηt , ηt ∼ N (0, Qt )
  E[αt|Yt]: KF (αt|t)   Var[αt|Yt]: KF (Pt|t)   E[α|Yn]: KS (α̂)   Var[α|Yn]: KS (V)   E[g(α)|Yn]: KS (g(α̂))   h(α): Sim Sm   Mode: KS (α̌ ≡ α̂)

nLGM (nonlinear model)
  Observation: yt = Zt (αt ) + εt , εt ∼ N (0, Ht )
  State:       αt+1 = Tt (αt ) + Rt ηt , ηt ∼ N (0, Qt )
  Filtering (E[αt|Yt], Var[αt|Yt]): EKF   Smoothing (E[α|Yn], Var[α|Yn]): EKS   E[g(α)|Yn], h(α), Mode: -

nLGM (non-Gaussian model)
  Observation: yt ∼ P (yt |θt ), θt = dt + Zt αt
  State:       αt+1 = Tt αt + ct + Rt ηt , ηt ∼ N (0, Qt )
  Filtering: Particle Filtering   Smoothing: Importance Sampling   Mode: Mode Estimation
Simulation Smoothing
Goal: Draw path α̃ from p(α|Yn )
1. Use KFS to obtain α̂ = E[α|Yn ]
2. Use Unconditional Simulation to generate (α+, y+) ∼ p(α, Yn) = p(Yn|α)p(α):
2a. Simulate errors: ηt+ ∼ N(0, Qt), εt+ ∼ N(0, Ht) ∀t
2b. Simulate initial state: α1+ ∼ N (a1 , P1 )
2c. Recursively compute states: αt+1+ = Tt αt+ + Rt ηt+
2d. Generate observations: yt+ = Zt αt+ + εt+
3. Use KFS to obtain α̂+ = E[α+ |Yn+ ]
4. Apply Lemma III to obtain α̃ = α+ − α̂+ + α̂, so that α̃ ∼ p(α|Yn) (Gaussian)
5. Repeat steps 1-4 M times (i = 1, . . . , M )
6. Use the Monte Carlo Estimator to estimate E[h(α)|Yn]: ĥ(α) = (1/M) Σ_{i=1}^{M} h(α̃(i))
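A minimal Python sketch of steps 1-6 for the univariate local level model (Zt = Tt = Rt = 1, dt = ct = 0), assuming a proper initial density N(a1, P1); the function names and the local-level KFS recursions are illustrative stand-ins, not part of the notes.

```python
import numpy as np

def kalman_smoother(y, s2_eps, s2_eta, a1, P1):
    """KFS for the local level model: returns alpha_hat = E[alpha | y]."""
    n = len(y)
    a, P = np.zeros(n), np.zeros(n)          # predicted state mean / variance
    v, F, K = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0], P[0] = a1, P1
    for t in range(n):                       # Kalman filter
        v[t] = y[t] - a[t]                   # prediction error
        F[t] = P[t] + s2_eps                 # prediction error variance
        K[t] = P[t] / F[t]                   # Kalman gain
        if t < n - 1:
            a[t + 1] = a[t] + K[t] * v[t]
            P[t + 1] = P[t] * (1 - K[t]) + s2_eta
    alpha_hat, r = np.zeros(n), 0.0          # backward (state) smoother
    for t in range(n - 1, -1, -1):
        r = v[t] / F[t] + (1 - K[t]) * r
        alpha_hat[t] = a[t] + P[t] * r
    return alpha_hat

def simulation_smoother(y, s2_eps, s2_eta, a1, P1, M, rng=None):
    """Draw M paths alpha~ ~ p(alpha | Yn) via unconditional simulation + Lemma III."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    alpha_hat = kalman_smoother(y, s2_eps, s2_eta, a1, P1)              # step 1
    draws = np.zeros((M, n))
    for i in range(M):
        eta = rng.normal(0.0, np.sqrt(s2_eta), n)                       # step 2a
        eps = rng.normal(0.0, np.sqrt(s2_eps), n)
        alpha_plus = np.zeros(n)
        alpha_plus[0] = rng.normal(a1, np.sqrt(P1))                     # step 2b
        for t in range(n - 1):
            alpha_plus[t + 1] = alpha_plus[t] + eta[t]                  # step 2c
        y_plus = alpha_plus + eps                                       # step 2d
        alpha_hat_plus = kalman_smoother(y_plus, s2_eps, s2_eta, a1, P1)  # step 3
        draws[i] = alpha_plus - alpha_hat_plus + alpha_hat              # step 4 (Lemma III)
    return draws        # step 6: E[h(alpha)|Yn] ≈ mean of h over the M rows
```

For example, draws.mean(axis=0) approximates E[α|Yn] and should be close to alpha_hat for large M.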
Extended Kalman Filter / Smoother
Goal: Signal extraction
1. Linearize the nonlinear model using the Taylor Expansion: Żt = ∂Zt(αt)/∂α′t evaluated at αt = at, Ṫt = ∂Tt(αt)/∂α′t evaluated at αt = at|t
2. Apply KF for LGM: Zt = Żt , Tt = Ṫt , dt = Zt (at ) − Żt at , ct = Tt (at|t ) − Ṫt at|t
3. Apply KS for LGM
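As an illustration (not from the notes), one EKF recursion for a univariate nonlinear model, with the step-1 linearization supplied through derivative functions dZ and dT; the function and argument names and the univariate restriction are assumptions of this sketch.

```python
def ekf_step(y_t, a_t, P_t, Z, dZ, T, dT, H, Q, R=1.0):
    """One EKF recursion for the univariate nonlinear model
    y_t = Z(alpha_t) + eps_t,  alpha_{t+1} = T(alpha_t) + R*eta_t.
    Z, T are callables; dZ, dT are their derivatives (the Taylor-expansion step)."""
    Zdot = dZ(a_t)                    # Z-dot_t evaluated at the predicted state a_t
    v = y_t - Z(a_t)                  # prediction error; uses d_t = Z(a_t) - Zdot*a_t implicitly
    F = Zdot * P_t * Zdot + H         # prediction error variance
    K = P_t * Zdot / F                # gain for the filtered update
    a_filt = a_t + K * v              # a_{t|t}
    P_filt = P_t - K * Zdot * P_t     # P_{t|t}
    Tdot = dT(a_filt)                 # T-dot_t evaluated at the filtered state a_{t|t}
    a_next = T(a_filt)                # prediction; uses c_t = T(a_{t|t}) - Tdot*a_{t|t} implicitly
    P_next = Tdot * P_filt * Tdot + R * Q * R
    return a_filt, P_filt, a_next, P_next
```

For instance, Z = dZ = math.exp with T(α) = α, dT(α) = 1 gives an exponential observation function on a random-walk-type state; running the recursion over t = 1, . . . , n and then applying the KS to the linearized quantities corresponds to the EKS step.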
Mode Estimation
Goal: Estimate mode θ̂ = arg max p(θ|Yn )
1. Use the Newton-Raphson Method: A = −[p̈(Yn|θ)|θ=g]−1, z = g + A ṗ(Yn|θ)|θ=g, with ṗ and p̈ the first and second derivatives of log p(Yn|θ)
2. Run KFS on the approximating LGM to provide the updated guess: θ̂ = d + Zα̂ = g+ = (Ψ−1 + A−1)−1(Ψ−1 µ + A−1 z)
3. Check for convergence: if converged, θ̌ = θ̂; otherwise set g = g+ and repeat steps 1-3
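A scalar stand-in for steps 1-3 (an assumed example, not the notes' KFS-based update): for a single Poisson observation y with log-intensity θ and Gaussian prior θ ∼ N(µ, Ψ), the KFS step collapses to the closed-form update g+ = (Ψ−1 + A−1)−1(Ψ−1µ + A−1z).

```python
import numpy as np

def mode_estimate(y, mu, Psi, tol=1e-8, max_iter=100):
    """Newton-Raphson mode estimation for y ~ Poisson(exp(theta)), theta ~ N(mu, Psi).
    Scalar stand-in for the KFS: g+ = (Psi^-1 + A^-1)^-1 (Psi^-1 mu + A^-1 z)."""
    g = mu                                   # initial guess
    for _ in range(max_iter):
        pdot = y - np.exp(g)                 # d log p(y|theta)/d theta at theta = g
        pddot = -np.exp(g)                   # second derivative
        A = -1.0 / pddot                     # A = -[p-double-dot]^{-1}
        z = g + A * pdot                     # pseudo-observation
        g_new = (mu / Psi + z / A) / (1.0 / Psi + 1.0 / A)   # posterior-mode update
        if abs(g_new - g) < tol:             # convergence check (step 3)
            return g_new
        g = g_new
    return g
```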
Importance Sampling
Goal: Estimate x̄ = E[x(θ)|Yn ]
1. There is no analytical expression for p(θ|Yn), so choose an importance density g(θ|Yn) using the SPDK Method
2. Use Simulation Smoothing for drawing paths
3. Correct for sampling from g(θ|Yn) instead of p(θ|Yn) through importance weights in the Monte Carlo Estimator
4. Apply KFS for LGM
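A sketch of the correction in step 3, assuming the M draws θ(i) ∼ g(θ|Yn) from the simulation smoother and their log densities are already available; the weight w(i) ∝ p(Yn|θ(i))/g(Yn|θ(i)) is the standard importance-sampling correction, and the function and argument names are illustrative.

```python
import numpy as np

def importance_estimate(x_vals, log_p, log_g):
    """Importance-sampling estimate of E[x(theta) | Yn].
    x_vals: x(theta^(i)) for M draws theta^(i) ~ g(theta | Yn)
    log_p:  log p(Yn | theta^(i)) under the true (non-Gaussian) observation density
    log_g:  log g(Yn | theta^(i)) under the Gaussian approximating model"""
    log_w = log_p - log_g                    # unnormalized log importance weights
    w = np.exp(log_w - log_w.max())          # subtract max for numerical stability
    w /= w.sum()                             # normalized weights
    return np.tensordot(w, x_vals, axes=1)   # weighted Monte Carlo estimator
```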
Particle Filtering
Goal: Estimate the conditional mean x̄t = E[x(αt )|Yt ]
Problem: Drawing α̃1:t(i) from the importance density ∀i, t = 1, . . . , n to estimate x̄t is too computationally intensive
1. Fix the previous selection of α̃1:t−1(i)
2. Use the bootstrap filter p(αt|α̃t−1(i)) as importance density g(αt|Yt) to draw particles α̃t(i)
3. Once yt is available, construct the importance weights: w̃t(i) = p(yt|α̃t(i))
4. Normalize importance weights and obtain Monte Carlo Estimator
5. Resample
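A minimal bootstrap filter for an assumed example (not from the notes): a random-walk state with Poisson observations yt ∼ Poisson(exp(αt)); the model choice, function names and particle count M are illustrative, and the weights use log p(yt|αt) up to a constant.

```python
import numpy as np

def bootstrap_filter(y, sigma_eta, a1, P1, M=1000, rng=None):
    """Bootstrap particle filter for y_t ~ Poisson(exp(alpha_t)),
    alpha_{t+1} = alpha_t + eta_t with eta_t ~ N(0, sigma_eta^2)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    x_filt = np.zeros(n)                           # estimates of E[alpha_t | Y_t]
    particles = rng.normal(a1, np.sqrt(P1), M)     # initial particles ~ N(a1, P1)
    for t in range(n):
        if t > 0:                                  # step 2: propagate through p(alpha_t | alpha_{t-1})
            particles = particles + rng.normal(0.0, sigma_eta, M)
        log_w = y[t] * particles - np.exp(particles)   # step 3: log p(y_t | alpha_t) up to a constant
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                               # step 4: normalize weights
        x_filt[t] = np.sum(w * particles)          # Monte Carlo estimate of E[alpha_t | Y_t]
        idx = rng.choice(M, size=M, p=w)           # step 5: resample
        particles = particles[idx]
    return x_filt
```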
Multivariate LLM
1. Homogeneous if Ση = qΣε where the Signal-to-Noise Ratio q = ση²/σε²
Disturbance structure remains proportional across time series ⇒ reduces number of parameters to
estimate
2. Ση not Full Rank if Rank(Ση ) = r < p
Model contains only r underlying level components (common levels) ⇒ decompose Ση :
2a. Consider the Cholesky Decomposition: Ση = A Σ∗η A′,
with A: p × r lower triangular unit matrix and Σ∗η : r × r positive definite diagonal matrix
2b. Model the r independent random walks: µt = Aµ∗t , ηt = Aηt∗ (µ∗t common levels)
yt = a + Aµ∗t + εt , εt ∼ N (0, Σε )
µ∗t+1 = µ∗t + η∗t , η∗t ∼ N (0, Σ∗η )
For general values p and rank r < p, define (stacking blocks): a = [0r ; a∗], A = [Ir ; A∗],
with a∗ : (p − r) × 1 vector and A∗ : (p − r) × r matrix
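A simulation sketch of the common-levels structure in step 2b (illustrative names and inputs): build A = [Ir ; A∗] and a = [0r ; a∗] and generate yt = a + Aµ∗t + εt driven by r random-walk common levels.

```python
import numpy as np

def simulate_common_levels(n, A_star, a_star, Sigma_eta_star, Sigma_eps, rng=None):
    """Simulate the common-levels multivariate LLM:
    y_t = a + A mu*_t + eps_t,  mu*_{t+1} = mu*_t + eta*_t,
    with A = [I_r; A*] and a = [0_r; a*] (illustrative dimensions)."""
    rng = np.random.default_rng() if rng is None else rng
    r = Sigma_eta_star.shape[0]
    p = r + A_star.shape[0]
    A = np.vstack([np.eye(r), A_star])            # p x r loading matrix
    a = np.concatenate([np.zeros(r), a_star])     # p x 1 level offsets
    mu_star = np.zeros(r)                         # r common levels, started at zero
    y = np.zeros((n, p))
    for t in range(n):
        eps = rng.multivariate_normal(np.zeros(p), Sigma_eps)
        y[t] = a + A @ mu_star + eps
        mu_star = mu_star + rng.multivariate_normal(np.zeros(r), Sigma_eta_star)
    return y
```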
Initialization
1. Stationary if |ϕ| < 1 ⇒ a1 = E(αt ), P1 = ση²/(1 − ϕ²)
2. Non-stationary if ϕ = 1 ⇒ a1 = 0, P1 = ∞ (in practice 10^7) (Diffuse Prior Density)
Random Walk: αt+1 = αt + ηt ⇒ non-stationary
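A small helper illustrating the two initialization cases for an AR(1)-type state αt+1 = ϕαt + ηt (zero unconditional mean assumed; the 10^7 value for the diffuse case follows the notes).

```python
def initialize_ar1(phi, s2_eta, diffuse_scale=1e7):
    """a1 and P1 for alpha_{t+1} = phi*alpha_t + eta_t, eta_t ~ N(0, s2_eta)."""
    if abs(phi) < 1:
        # stationary: a1 = E(alpha_t) (= 0 here), P1 = s2_eta / (1 - phi^2)
        return 0.0, s2_eta / (1.0 - phi**2)
    # non-stationary (e.g. random walk, phi = 1): diffuse prior, large finite P1 in practice
    return 0.0, diffuse_scale
```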