Step-by-step plan for Markov Processes (per question type)
Part 1: Discrete-time Markov Chain
Q1: Check if a process is Markovian or not (if not: redefine states)
Think carefully about how to define a Markov process (and its states)
Q2: Calculate n-step probability
First: check whether the starting state i is given (if not: the probability is unconditional, use the law of total probability (ex. 2.1))
If P^(n) not given: think about potential paths for a fixed j (p.33 handout)
If P^(n) given: use it to calculate the probability (ex. 2.1/2.2)
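A minimal numpy sketch of this step; the 3-state transition matrix and initial distribution are hypothetical, not taken from ex. 2.1/2.2:
```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); not from the course.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n = 4
Pn = np.linalg.matrix_power(P, n)   # n-step matrix P^(n)
print(Pn[0, 2])                     # P(X_n = 2 | X_0 = 0)

# If the start state i is not given: law of total probability with an
# assumed initial distribution alpha over the states.
alpha = np.array([0.2, 0.5, 0.3])
print(alpha @ Pn)                   # unconditional P(X_n = j) for each j
```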
Q3: Distinguish recurrent and transient classes
Look at which states communicate with each other: those together form a class
Recurrent: if there is only one class and finitely many states, that class is recurrent
once entered, the chain never leaves the communicating class again
positive recurrent; null recurrence is only possible with infinitely many states
f(i) = 1 (the chain returns to state i with probability 1)
Transient: typically a class with a path leading out to a recurrent class (p.44)
f(i) < 1 (positive probability of never returning to state i)
(Irreducible: if there is only one class)
(Ergodic: if it’s positive recurrent and aperiodic)
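A sketch of the classification step, assuming scipy is available; the transition matrix and the printed class labels are hypothetical examples, not from the exercises:
```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hypothetical matrix: states 0-1 form a closed (recurrent) class,
# state 2 is transient (it can leave to 0/1 but never return).
P = np.array([[0.4, 0.6, 0.0],
              [0.7, 0.3, 0.0],
              [0.3, 0.3, 0.4]])

# Communicating classes = strongly connected components of the transition graph.
n_classes, labels = connected_components(csr_matrix(P > 0), directed=True,
                                          connection='strong')
print(n_classes, labels)            # e.g. 2 classes with labels [0, 0, 1]

# In the finite-state case, a class is recurrent iff no transition leaves it.
for c in range(n_classes):
    inside = labels == c
    leaves = P[np.ix_(inside, ~inside)].sum() > 0
    print(f"class {c}: {'transient' if leaves else 'recurrent'}")
```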
Q4: Argue that a state is (a)periodic
Easiest way: find a state with a direct (one-step) path back to itself
(such a state is automatically aperiodic); since period is a class property, every state communicating with it is aperiodic as well
From the definition: for aperiodicity, find two return paths whose numbers of steps have gcd 1 (p.45)
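A small numeric sketch of the gcd argument, for a hypothetical chain in which state 0 has return paths of lengths 2 and 3:
```python
import numpy as np
from math import gcd
from functools import reduce

# Hypothetical chain that cycles 0 -> 1 -> 2 -> 0, plus a shortcut 0 -> 2,
# so state 0 has return paths of length 3 and 2: gcd(2, 3) = 1, hence aperiodic.
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# Period of state i = gcd of all n with P^n(i, i) > 0 (checked up to a horizon).
def period(P, i, horizon=20):
    lengths = []
    Pn = np.eye(len(P))
    for n in range(1, horizon + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            lengths.append(n)
    return reduce(gcd, lengths) if lengths else 0

print(period(P, 0))   # 1 -> aperiodic; all states in the same class share this period
```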
Q5: Write down the steady-state equations πi (and solve them)
(Must be irreducible ergodic MC)
First: write out πj = Σi πi*p(ij) for every state j (in matrix form: π = πP)
Don't forget the normalisation: π0 + π1 + … = 1 (ex. 3.1)
Sometimes only part of this is asked: the question uses lim(m→∞) P(Xm = i) = πi in the equation
(practice exam)
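A minimal sketch of solving the steady-state equations numerically; the transition matrix is an assumed ergodic example, not from ex. 3.1:
```python
import numpy as np

# Assumed irreducible, aperiodic (ergodic) chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Solve pi = pi P together with sum(pi) = 1:
# rewrite as (P^T - I) pi = 0 and append the normalisation equation.
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # steady-state distribution
print(pi @ P)    # sanity check: equals pi again
```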
Q6: Calculate long-run expected reward/cost r(j)
Compute the long-run average reward per time unit: Σj r(j)*πj (p.48)
Note: r(j) can be either deterministic or random
if random: Σj E(R | in state j)*πj
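A sketch of the reward computation; the values of π and r(j) below are assumed for illustration (π would come from the steady-state step above):
```python
import numpy as np

pi = np.array([0.26, 0.35, 0.39])        # assumed steady-state distribution
r = np.array([10.0, -2.0, 5.0])          # assumed deterministic reward/cost per period

print(pi @ r)                            # long-run average reward per time unit

# If the reward in state j is random, replace r(j) by E[R | in state j]:
expected_r = np.array([10.0, -1.5, 4.0]) # assumed conditional expectations
print(pi @ expected_r)
```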
s(ij) = expected number of time periods the MC is in state j, given it starts in i (p.54)
f(ij) = probability MC ever enters state j, given it starts in i (p.56)
note that i, j ∈ T
m(iR) = the mean time to enter the (only) recurrent class R, given it starts in i (p.56)
f(iR1) = the probability MC ever enters recurrent class R1, given it starts in i (p.57)
note that i ∈ T
Q7: Calculate these quantities
First: figure out from the real-life question which quantity it is about:
s(ij): when the expected number of periods spent in a transient state j is asked;
write P(T) as the sub-matrix of P containing only the transient states (ex. 4.1)
f(ij): when the probability of entering a transient state j (e.g. before a recurrent state)
is asked, with i given; sometimes first make another transient state
absorbing and then write down the f(ij)'s (ex. 4.4a)
m(iR): when the mean time to enter a recurrent class is asked (with i given);
R can sometimes be a class that you made absorbing yourself,
in that case combine all states of R into one column/row (ex. 4.4b)
f(iR1): when the probability of entering a specific recurrent class R1 is
asked (with i given); apart from R1 itself, only the transient part of P matters (ex. 4.2d)
Sometimes for m(iR) and f(iR1): define the recurrent classes smartly; collapsing all
recurrent states of a class into one state may simplify the matrix.
Or for s(ij) and f(ij): design states to be absorbing so that these quantities apply (see the sketch below)
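A sketch of all four quantities for a hypothetical chain with transient states T = {0, 1} and absorbing recurrent classes R1 = {2} and R2 = {3}; the relations S = (I − P(T))^(−1), s(ij) = δij + f(ij)*s(jj), m(iR) = Σj s(ij) and f(iR1) = S·P(T→R1) are the standard absorbing-chain formulas, assumed here to match the handout's conventions:
```python
import numpy as np

# Hypothetical chain: states 0,1 are transient (set T), state 2 is recurrent
# class R1 and state 3 is recurrent class R2 (both made absorbing).
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.3, 0.2, 0.1, 0.4],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
T, R1 = [0, 1], [2]

P_T = P[np.ix_(T, T)]                     # transitions among transient states only
S = np.linalg.inv(np.eye(len(T)) - P_T)   # s(ij): expected periods in j, starting in i

# f(ij) for transient i, j, from s(ij) = delta_ij + f(ij) * s(jj)
F = (S - np.eye(len(T))) / np.diag(S)     # column j divided by s(jj)
# m(iR): mean time to enter the recurrent states, starting from i in T
m = S.sum(axis=1)
# f(iR1): probability of ever entering recurrent class R1, starting from i in T
f_R1 = S @ P[np.ix_(T, R1)].sum(axis=1)

print(S, F, m, f_R1, sep="\n")
```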
Part 2: Poisson process
Q8: Calculate (conditional) probabilities regarding a counting event
First: use the independent and stationary increments as much as possible;
rewrite so that you can exploit independence or a standard form whose
mean/variance you know. Try to rewrite a covariance in terms of properties
you do know (i.e. independence or a known variance). (practice exam)
For a conditional probability: rewrite in terms of disjoint intervals/variables, so that the
condition no longer plays a role (independent/stationary increments) (ex. 6.5b)
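A small scipy sketch of this reasoning; the rate λ = 2 and the time points are assumed values, not from ex. 6.5b:
```python
from scipy.stats import poisson

lam = 2.0                        # assumed rate per time unit (hypothetical)

# P(N(3) = 5): the count over an interval of length 3 is Poisson(lam * 3).
print(poisson.pmf(5, lam * 3))

# Conditional probability via independent increments, e.g.
# P(N(5) = 7 | N(3) = 5) = P(N(5) - N(3) = 2) = P(Poisson(2 * lam) = 2),
# because the increment over (3, 5] is independent of the count up to time 3.
print(poisson.pmf(2, lam * 2))
```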
Q9: Calculate (conditional) expectation regarding an arrival time
First: use the independent and stationary increments as much as possible
For a conditional expectation: rewrite in terms of disjoint intervals/variables, so that the
condition no longer plays a role (ex. 6.3)
(ex. 7.4)
Note: you can convert a counting problem into an arrival-time problem and
vice versa: N(t) ≥ n ⇔ Sn ≤ t (p.71)
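A sketch of this counting/arrival-time duality; the rate and the values of n and t are assumed, and the fact that Sn is Erlang (Gamma with shape n, scale 1/λ) follows from summing n independent Exp(λ) interarrival times:
```python
from scipy.stats import gamma, poisson

lam = 2.0                                  # assumed rate (hypothetical)
n, t = 3, 4.0

# S_n = sum of n independent Exp(lam) interarrival times ~ Erlang(n, lam),
# i.e. a Gamma distribution with shape n and scale 1/lam.
print(gamma.mean(a=n, scale=1 / lam))      # E[S_n] = n / lam

# The duality N(t) >= n  <=>  S_n <= t, checked numerically:
print(gamma.cdf(t, a=n, scale=1 / lam))    # P(S_n <= t)
print(1 - poisson.cdf(n - 1, lam * t))     # P(N(t) >= n), the same number
```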