Summary Time Series Models E_EORM_TSM

This summary is the ultimate companion for your Time Series Models course. It gives you a clear and structured overview of all the material covered in the lectures and provides step-by-step explanations of every algorithm you need to know for the exam. With this summary in hand, you’ll save time, study more effectively, and walk into the exam confident and fully prepared.

Document information

Summarized whole book? Yes
Uploaded on: September 11, 2025
Number of pages: 6
Written in: 2024/2025
Type: Summary

Content preview

Time Series Models Summary
Vrije Universiteit Amsterdam
February/March 2025

Model Classes Overview

LGM
  Observation: yt = Zt αt + dt + εt,  εt ∼ N(0, Ht)
  State: αt+1 = Tt αt + ct + Rt ηt,  ηt ∼ N(0, Qt)
  E[αt|Yt] = αt|t and Var[αt|Yt] = Pt|t from the KF; E[α|Yn] = α̂, Var[α|Yn] = V and E[g(α)|Yn] = g(α̂) from the KS; h(α) via Simulation Smoothing; mode α̌ ≡ α̂ from the KS

nLGM (nonlinear Zt, Tt)
  Observation: yt = Zt(αt) + εt,  εt ∼ N(0, Ht)
  State: αt+1 = Tt(αt) + Rt ηt,  ηt ∼ N(0, Qt)
  E[αt|Yt] via the EKF; E[α|Yn] via the EKS; remaining quantities: -

nLGM (non-Gaussian observation density)
  Observation: yt ∼ P(yt|θt),  θt = dt + Zt αt
  State: αt+1 = Tt αt + ct + Rt ηt,  ηt ∼ N(0, Qt)
  Filtered quantities via Particle Filtering; smoothed quantities via Importance Sampling; mode via Mode Estimation
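The KF entries above can be made concrete with a small sketch. Below is a minimal Python implementation of the Kalman filter for a scalar LGM with time-invariant system matrices and Rt = 1; the function name and arguments are illustrative and not part of the course material.

```python
import numpy as np

def kalman_filter(y, Z, T, H, Q, a1, P1, d=0.0, c=0.0):
    """Kalman filter for the scalar LGM
        y_t         = Z*alpha_t + d + eps_t,   eps_t ~ N(0, H)
        alpha_{t+1} = T*alpha_t + c + eta_t,   eta_t ~ N(0, Q)
    Returns the filtered moments E[alpha_t|Y_t] and Var[alpha_t|Y_t]."""
    n = len(y)
    a, P = a1, P1                      # predicted moments a_t, P_t
    a_filt = np.empty(n)
    P_filt = np.empty(n)
    for t in range(n):
        v = y[t] - (Z * a + d)         # prediction error
        F = Z * P * Z + H              # prediction-error variance
        K = P * Z / F                  # gain P_t Z' F^{-1}
        a_filt[t] = a + K * v          # E[alpha_t | Y_t] = a_{t|t}
        P_filt[t] = P - K * Z * P      # Var[alpha_t | Y_t] = P_{t|t}
        a = T * a_filt[t] + c          # one-step-ahead prediction a_{t+1}
        P = T * P_filt[t] * T + Q      # prediction variance P_{t+1}
    return a_filt, P_filt
```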


Simulation Smoothing
Goal: Draw path α̃ from p(α|Yn )
1. Use KFS to obtain α̂ = E[α|Yn ]
2. Use unconditional simulation to generate (α+, y+) from the joint density p(α, Yn) = p(Yn|α) p(α):
2a. Simulate errors: ηt+ ∼ N(0, Qt), εt+ ∼ N(0, Ht) for all t
2b. Simulate the initial state: α1+ ∼ N(a1, P1)
2c. Recursively compute the states: αt+1+ = Tt αt+ + Rt ηt+
2d. Generate the observations: yt+ = Zt αt+ + εt+

3. Use the KFS on y+ to obtain α̂+ = E[α+|Yn+]
4. Apply Lemma III to obtain α̃ = α+ − α̂+ + α̂, so that α̃ ∼ p(α|Yn) (Gaussian)
5. Repeat steps 1-4 M times (i = 1, . . . , M)
6. Use the Monte Carlo estimator: Ê[h(α)|Yn] = (1/M) Σ_{i=1}^{M} h(α̃(i))
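A minimal Python sketch of steps 2-4 for one draw, assuming a scalar LGM with dt = ct = 0 and a helper kalman_smoother(...) that returns α̂ = E[α|Yn]; that helper and the function name are hypothetical placeholders, not defined in the notes.

```python
import numpy as np

def simulation_smoother_draw(y, Z, T, H, Q, a1, P1, kalman_smoother, rng=None):
    """One draw alpha_tilde ~ p(alpha | Y_n) via steps 1-4 above."""
    rng = rng or np.random.default_rng()
    n = len(y)
    # Step 1: smoothed mean on the observed data
    alpha_hat = kalman_smoother(y, Z, T, H, Q, a1, P1)
    # Step 2: unconditional simulation of (alpha+, y+)
    alpha_plus = np.empty(n)
    y_plus = np.empty(n)
    alpha_plus[0] = rng.normal(a1, np.sqrt(P1))                           # step 2b
    for t in range(n):
        y_plus[t] = Z * alpha_plus[t] + rng.normal(0.0, np.sqrt(H))       # step 2d
        if t + 1 < n:
            alpha_plus[t + 1] = T * alpha_plus[t] + rng.normal(0.0, np.sqrt(Q))  # step 2c
    # Step 3: smoothed mean on the simulated data
    alpha_hat_plus = kalman_smoother(y_plus, Z, T, H, Q, a1, P1)
    # Step 4: Lemma III correction
    return alpha_plus - alpha_hat_plus + alpha_hat
```

Repeating the draw M times and averaging h(α̃(i)) gives the estimator in step 6.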


Extended Kalman Filter / Smoother
Goal: Signal extraction
1. Linearize the nonlinear model using a first-order Taylor expansion:
   Żt = ∂Zt(αt)/∂αt′ evaluated at αt = at,   Ṫt = ∂Tt(αt)/∂αt′ evaluated at αt = at|t

2. Apply KF for LGM: Zt = Żt , Tt = Ṫt , dt = Zt (at ) − Żt at , ct = Tt (at|t ) − Ṫt at|t
3. Apply KS for LGM
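One combined EKF filter step for a scalar nLGM, as a hedged sketch: Z_fn/T_fn stand for the nonlinear functions Zt(·)/Tt(·) and dZ_fn/dT_fn for their derivatives (the Żt, Ṫt above); all names are illustrative assumptions.

```python
import numpy as np

def ekf_step(y_t, a, P, Z_fn, dZ_fn, T_fn, dT_fn, H, Q):
    """One EKF update/prediction step for the scalar model
        y_t = Z(alpha_t) + eps_t,   alpha_{t+1} = T(alpha_t) + eta_t."""
    # Linearize the observation equation around the predicted state a_t
    Z_dot = dZ_fn(a)
    v = y_t - Z_fn(a)                 # prediction error (d_t = Z(a_t) - Z_dot*a_t is absorbed here)
    F = Z_dot * P * Z_dot + H
    K = P * Z_dot / F
    a_filt = a + K * v                # E[alpha_t | Y_t]
    P_filt = P - K * Z_dot * P        # Var[alpha_t | Y_t]
    # Linearize the state equation around the filtered estimate a_{t|t}
    T_dot = dT_fn(a_filt)
    a_next = T_fn(a_filt)             # c_t = T(a_{t|t}) - T_dot*a_{t|t} is absorbed here
    P_next = T_dot * P_filt * T_dot + Q
    return a_filt, P_filt, a_next, P_next
```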


Mode Estimation
Goal: Estimate mode θ̂ = arg max p(θ|Yn )
1. Use the Newton-Raphson method: A = −[p̈(Yn|θ)|θ=g]⁻¹, z = g + A ṗ(Yn|θ)|θ=g
2. Run the KFS on the approximating LGM to obtain the updated guess: θ̂ = d + Z α̂ = g⁺ = (Ψ⁻¹ + A⁻¹)⁻¹(Ψ⁻¹µ + A⁻¹z)
3. Check for convergence: if converged, set θ̌ = θ̂; otherwise set g = g⁺ and repeat steps 1-3
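The same recursion can be illustrated without a state-space model. Below is a hedged scalar sketch in which θ has a Gaussian prior N(µ, Ψ) and y ∼ Poisson(exp(θ)), so the combine formula in step 2 can be evaluated directly instead of running the KFS; the function and variable names are illustrative assumptions.

```python
import numpy as np

def poisson_posterior_mode(y, mu, Psi, tol=1e-10, max_iter=100):
    """Mode estimation for scalar theta ~ N(mu, Psi), y ~ Poisson(exp(theta)),
    where log p(y|theta) = y*theta - exp(theta) - log(y!)."""
    g = np.log(y + 0.5)                       # crude initial guess
    for _ in range(max_iter):
        # Step 1: Newton-Raphson quantities at theta = g
        p_dot = y - np.exp(g)                 # first derivative of log p(y|theta)
        p_ddot = -np.exp(g)                   # second derivative
        A = -1.0 / p_ddot                     # A = -[p_ddot]^{-1}
        z = g + A * p_dot                     # pseudo-observation
        # Step 2: combine z with the Gaussian prior (scalar analogue of the KFS step)
        g_new = (mu / Psi + z / A) / (1.0 / Psi + 1.0 / A)
        # Step 3: convergence check
        if abs(g_new - g) < tol:
            return g_new                      # converged mode theta_check
        g = g_new
    return g
```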



Importance Sampling
Goal: Estimate x̄ = E[x(θ)|Yn ]
1. There is no analytical expression for p(θ|Yn), so choose an importance density g(θ|Yn) using the SPDK Method
2. Use Simulation Smoothing to draw paths from g(θ|Yn)
3. Apply the appropriate correction through the importance weights in the Monte Carlo estimator
4. Apply KFS for LGM
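A small sketch of the weighting step once the draws are available: theta_draws holds M signal paths from the simulation smoother applied to the approximating Gaussian model, and log_p_obs/log_g_obs return log p(Yn|θ) and log g(Yn|θ) under the non-Gaussian and the approximating model respectively; all names are placeholders assumed here rather than given in the notes.

```python
import numpy as np

def importance_sampling_estimate(theta_draws, x_fn, log_p_obs, log_g_obs):
    """Self-normalised importance-sampling estimate of x_bar = E[x(theta) | Y_n]."""
    log_w = np.array([log_p_obs(th) - log_g_obs(th) for th in theta_draws])
    w = np.exp(log_w - log_w.max())     # subtract the max for numerical stability
    w /= w.sum()                        # normalised importance weights
    x_vals = np.array([x_fn(th) for th in theta_draws])
    return np.sum(w * x_vals)           # weighted Monte Carlo estimator
```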


Particle Filtering
Goal: Estimate the conditional mean x̄t = E[x(αt )|Yt ]
Problem: Drawing α̃1:t(i) from the importance density for all i and t = 1, . . . , n to estimate x̄t is too computationally intensive
1. Fix the previous selection of α̃1:t−1(i)
2. Use the bootstrap filter p(αt|α̃t−1(i)) to draw particles α̃t(i) from the importance density g(αt|Yt)
3. Once yt is available, construct the importance weights: w̃t(i) = p(yt|α̃t(i))
4. Normalize importance weights and obtain Monte Carlo Estimator
5. Resample
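A compact bootstrap-filter sketch covering steps 2-5, under the assumption that the state transition, observation log-density and initial density are supplied as vectorised callables; the names are illustrative.

```python
import numpy as np

def bootstrap_particle_filter(y, x_fn, sample_transition, loglik_obs, sample_initial, M=1000):
    """Estimate x_bar_t = E[x(alpha_t) | Y_t] for t = 1, ..., n.
    sample_transition(p): draws alpha_t^(i) ~ p(alpha_t | alpha_{t-1}^(i)) for each particle
    loglik_obs(y_t, p):   log p(y_t | alpha_t^(i)) for each particle
    sample_initial(M):    draws M particles from the initial state density"""
    rng = np.random.default_rng()
    particles = sample_initial(M)
    estimates = []
    for y_t in y:
        particles = sample_transition(particles)          # step 2: propagate particles
        log_w = loglik_obs(y_t, particles)                # step 3: importance weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                                      # step 4: normalise weights
        estimates.append(np.sum(w * x_fn(particles)))     #         Monte Carlo estimator
        idx = rng.choice(M, size=M, p=w)                  # step 5: resample
        particles = particles[idx]
    return np.array(estimates)
```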


Multivariate LLM
1. Homogeneous if Ση = qΣε, where the signal-to-noise ratio q = ση²/σε²
   The disturbance structure remains proportional across time series ⇒ reduces the number of parameters to estimate
2. Ση not Full Rank if Rank(Ση ) = r < p
Model contains only r underlying level components (common levels) ⇒ decompose Ση :

2a. Consider the Cholesky decomposition: Ση = A Σ∗η A′,
with A: p × r lower triangular unit matrix and Σ∗η: r × r positive definite diagonal matrix
2b. Model the r independent random walks: µt = Aµ∗t , ηt = Aηt∗ (µ∗t common levels)

yt = a + Aµ∗t + εt , εt ∼ N (0, Σε )

µ∗t+1 = µ∗t + ηt∗,   ηt∗ ∼ N(0, Σ∗η)
For general values of p and rank r < p, define a and A in stacked form:
    a = [ 0r ]     A = [ Ir ]
        [ a∗ ]         [ A∗ ]
with a∗: (p − r) × 1 vector and A∗: (p − r) × r matrix
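A quick way to see the common-levels structure is to simulate from it. The sketch below assumes the state equation above with µ∗1 started at zero and dimensions as in step 2a; the function name and defaults are illustrative.

```python
import numpy as np

def simulate_common_levels(n, a, A, Sigma_eps, Sigma_eta_star, rng=None):
    """Simulate y_t = a + A mu*_t + eps_t with mu*_{t+1} = mu*_t + eta*_t.
    Shapes: a (p,), A (p, r), Sigma_eps (p, p), Sigma_eta_star (r, r) diagonal."""
    rng = rng or np.random.default_rng()
    p, r = A.shape
    mu_star = np.zeros(r)                                   # mu*_1 (started at zero here)
    y = np.empty((n, p))
    for t in range(n):
        eps = rng.multivariate_normal(np.zeros(p), Sigma_eps)
        y[t] = a + A @ mu_star + eps
        mu_star = mu_star + rng.multivariate_normal(np.zeros(r), Sigma_eta_star)
    return y

# Identification restriction for p = 3, r = 2:
# a = (0, 0, a*)' and A = [[1, 0], [0, 1], [A*1, A*2]]  (top blocks 0_r and I_r as above)
```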


Initialization
1. Stationary if |ϕ| < 1 ⇒ a1 = E(αt), P1 = ση²/(1 − ϕ²)

2. Non-stationary if ϕ = 1 ⇒ a1 = 0, P1 = ∞, in practice set to a large value such as 10⁷ (Diffuse Prior Density)
Random Walk: αt+1 = αt + ηt ⇒ non-stationary
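The two cases translate directly into the starting values fed to the Kalman filter. A small sketch for an AR(1) state αt+1 = ϕαt + ηt; the 10⁷ stand-in for ∞ follows the note above, and the function name is illustrative.

```python
def initialize_state(phi, sigma2_eta, diffuse_scale=1e7):
    """Return the initial moments (a_1, P_1) for alpha_{t+1} = phi*alpha_t + eta_t."""
    if abs(phi) < 1:                          # stationary case
        a1 = 0.0                              # E(alpha_t) for the zero-mean AR(1)
        P1 = sigma2_eta / (1.0 - phi ** 2)    # unconditional variance
    else:                                     # non-stationary case (e.g. random walk)
        a1 = 0.0
        P1 = diffuse_scale                    # numerically diffuse prior (P_1 "= infinity")
    return a1, P1
```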

