SOLUTIONS MANUAL
TABLE OF CONTENTS
CHAPTER 1
CHAPTER 2
CHAPTER 3
CHAPTER 4
CHAPTER 5
CHAPTER 6
CHAPTER 7
CHAPTER 8
CHAPTER 9
CHAPTER 1
EXERCISE 1.1. For a Markov chain with a one-step transition probability matrix
$$\mathbf{P} = \begin{pmatrix} 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \\ 0.8 & 0.1 & 0.1 \end{pmatrix}$$
we compute:
(a) $P(X_3 = 2 \mid X_0 = 1, X_1 = 2, X_2 = 3) = P(X_3 = 2 \mid X_2 = 3)$ (by the Markov property)
$= P_{32} = 0.1$.
(b) $P(X_4 = 3 \mid X_0 = 2, X_3 = 1) = P(X_4 = 3 \mid X_3 = 1)$ (by the Markov property)
$= P_{13} = 0.3$.
(c) $P(X_0 = 1, X_1 = 2, X_2 = 3, X_3 = 1) = P(X_3 = 1 \mid X_0 = 1, X_1 = 2, X_2 = 3)\, P(X_2 = 3 \mid X_0 = 1, X_1 = 2)\, P(X_1 = 2 \mid X_0 = 1)\, P(X_0 = 1)$ (by conditioning)
$= P(X_3 = 1 \mid X_2 = 3)\, P(X_2 = 3 \mid X_1 = 2)\, P(X_1 = 2 \mid X_0 = 1)\, P(X_0 = 1)$ (by the Markov property)
$= P_{31}\, P_{23}\, P_{12}\, P(X_0 = 1) = (0.8)(0.5)(0.4)(1) = 0.16$.
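These lookups can also be verified numerically. A minimal R sketch follows; the matrix name P is our own choice for illustration and is not defined in the original solution.
#one-step transition probability matrix of Exercise 1.1
P <- matrix(c(0.3, 0.4, 0.3,
              0.2, 0.3, 0.5,
              0.8, 0.1, 0.1), nrow=3, byrow=TRUE)
P[3, 2]                       #part (a): P(X3 = 2 | X2 = 3) = 0.1
P[1, 3]                       #part (b): P(X4 = 3 | X3 = 1) = 0.3
P[3, 1] * P[2, 3] * P[1, 2]   #part (c): (0.8)(0.5)(0.4) = 0.16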
(d) We first compute the two-step transition probability matrix. We obtain
$$\mathbf{P}^{(2)} = \begin{pmatrix} 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \\ 0.8 & 0.1 & 0.1 \end{pmatrix} \begin{pmatrix} 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \\ 0.8 & 0.1 & 0.1 \end{pmatrix} = \begin{pmatrix} 0.41 & 0.27 & 0.32 \\ 0.52 & 0.22 & 0.26 \\ 0.34 & 0.36 & 0.30 \end{pmatrix}.$$
Now we write
$P(X_0 = 1, X_1 = 2, X_3 = 3, X_5 = 1) = P(X_5 = 1 \mid X_0 = 1, X_1 = 2, X_3 = 3)\, P(X_3 = 3 \mid X_0 = 1, X_1 = 2)\, P(X_1 = 2 \mid X_0 = 1)\, P(X_0 = 1)$ (by conditioning)
$= P(X_5 = 1 \mid X_3 = 3)\, P(X_3 = 3 \mid X_1 = 2)\, P(X_1 = 2 \mid X_0 = 1)\, P(X_0 = 1)$ (by the Markov property)
$= P^{(2)}_{31}\, P^{(2)}_{23}\, P_{12}\, P(X_0 = 1) = (0.34)(0.26)(0.4)(1) = 0.03536$.
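The two-step matrix and the probability in part (d) can likewise be checked in R, continuing with the illustrative matrix P defined in the sketch above.
#two-step transition probability matrix P^(2) = P %*% P
P2 <- P %*% P
P2                                 #matches the matrix computed by hand
P2[3, 1] * P2[2, 3] * P[1, 2]      #(0.34)(0.26)(0.4) = 0.03536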
EXERCISE 1.2. (a) We plot a diagram of the Markov chain.
#specifying transition probability matrix
tm<- matrix(c(1, 0, 0, 0, 0, 0.5, 0, 0, 0, 0.5, 0.2, 0, 0, 0, 0.8,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0), nrow=5, ncol=5, byrow=TRUE)
#transposing transition probability matrix
tm.tr<- t(tm)
#plotting diagram
library(diagram)
plotmat(tm.tr, arr.length=0.25, arr.width=0.1, box.col="lightblue",
box.lwd=1, box.prop=0.5, box.size=0.12, box.type="circle", cex.txt=0.8,
lwd=1, self.cex=0.3, self.shiftx=0.01, self.shifty=0.09)
State 2 is reflective: the chain leaves that state in one step and never returns to it. Therefore, it forms a separate transient class that has an infinite period.
Finally, states 3, 4, and 5 communicate and thus belong to the same class. The chain can return to any state in this class only in 3, 6, 9, etc. steps, thus the period is equal to 3. Since there is a positive probability of leaving this class, it is transient.
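The period can also be checked numerically; below is a minimal sketch, assuming the matrix tm defined above (the loop and the choice of nine steps are illustrative, not part of the original solution).
#return to state 3 is possible only when the number of steps is a multiple of 3
Pn <- diag(5)
for (n in 1:9) {
  Pn <- Pn %*% tm                                        #Pn now holds the n-step matrix
  cat("n =", n, " P^n[3,3] =", round(Pn[3,3], 4), "\n")  #positive only for n = 3, 6, 9
}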
The R output supports these findings.
#creating Markov chain object
library(markovchain)
mc<- new("markovchain", transitionMatrix=tm, states=c("1", "2", "3", "4", "5"))
#computing Markov chain characteristics
recurrentClasses(mc)
"1"
transientClasses(mc)
"2"
"3" "4" "5"
absorbingStates(mc)
"1"
(c) Below we simulate three trajectories of the chain that start at a randomly chosen state.
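A minimal way to carry out such a simulation in base R is sketched below; the trajectory length of 20 steps and the seed are illustrative assumptions, and markovchainSequence() from the markovchain package could be used instead.
#simulating three trajectories, each started at a randomly chosen state
set.seed(1)                               #illustrative seed
st <- c("1", "2", "3", "4", "5")
n.steps <- 20                             #illustrative trajectory length
for (k in 1:3) {
  x <- character(n.steps + 1)
  x[1] <- sample(st, 1)                   #random initial state
  for (t in 1:n.steps) {
    #next state is drawn from the row of tm for the current state
    x[t+1] <- sample(st, 1, prob=tm[which(st == x[t]), ])
  }
  print(x)
}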