SOLUTIONS MANUAL
TABLE OF CONTENTS
CHAPTER 1 ……………………………………………………………………………………. 3
CHAPTER 2 ……………………………………………………………………………………. 31
CHAPTER 3 ……………………………………………………………………………………. 41
CHAPTER 4 ……………………………………………………………………………………. 48
CHAPTER 5 ……………………………………………………………………………………. 60
CHAPTER 6 ……………………………………………………………………………………. 67
CHAPTER 7 ……………………………………………………………………………………. 74
CHAPTER 8 ……………………………………………………………………………………. 81
CHAPTER 9 ……………………………………………………………………………………. 87
CHAPTER 1
EXERCISE 1.1. For a Markov chain with the one-step transition probability matrix

        | 0.3  0.4  0.3 |
    P = | 0.2  0.3  0.5 |
        | 0.8  0.1  0.1 |

we compute:
(a) P(X3 = 2 | X0 = 1, X1 = 2, X2 = 3) = P(X3 = 2 | X2 = 3) (by the Markov property)
= P32 = 0.1.
(b) P(X4 = 3 | X0 = 2, X3 = 1) = P(X4 = 3 | X3 = 1) (by the Markov property)
= P13 = 0.3.
(c) P(X0 = 1, X1 = 2, X2 = 3, X3 = 1)
= P(X3 = 1 | X0 = 1, X1 = 2, X2 = 3) P(X2 = 3 | X0 = 1, X1 = 2) P(X1 = 2 | X0 = 1) P(X0 = 1) (by conditioning)
= P(X3 = 1 | X2 = 3) P(X2 = 3 | X1 = 2) P(X1 = 2 | X0 = 1) P(X0 = 1) (by the Markov property)
= P31 P23 P12 P(X0 = 1) = (0.8)(0.5)(0.4)(1) = 0.16.
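As a sanity check, the product in (c) can be computed in R (a minimal sketch; the matrix P below is the one given in the exercise):

```r
# one-step transition probability matrix from Exercise 1.1
P <- matrix(c(0.3, 0.4, 0.3,
              0.2, 0.3, 0.5,
              0.8, 0.1, 0.1), nrow=3, byrow=TRUE)

# P(X0=1, X1=2, X2=3, X3=1) = P12 * P23 * P31, since P(X0=1) = 1
prob <- P[1, 2] * P[2, 3] * P[3, 1]
prob  # 0.16
```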
(d) We first compute the two-step transition probability matrix. We obtain

           | 0.3  0.4  0.3 |   | 0.3  0.4  0.3 |   | 0.41  0.27  0.32 |
    P(2) = | 0.2  0.3  0.5 | * | 0.2  0.3  0.5 | = | 0.52  0.22  0.26 |
           | 0.8  0.1  0.1 |   | 0.8  0.1  0.1 |   | 0.34  0.36  0.30 |

Now we write
P(X0 = 1, X1 = 2, X3 = 3, X5 = 1)
= P(X5 = 1 | X0 = 1, X1 = 2, X3 = 3) P(X3 = 3 | X0 = 1, X1 = 2) P(X1 = 2 | X0 = 1) P(X0 = 1) (by conditioning)
= P(X5 = 1 | X3 = 3) P(X3 = 3 | X1 = 2) P(X1 = 2 | X0 = 1) P(X0 = 1) (by the Markov property)
= P(2)31 P(2)23 P12 P(X0 = 1) = (0.34)(0.26)(0.4)(1) = 0.03536.
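Both the two-step matrix and the final probability can be verified numerically (a minimal base-R sketch, with P as given in the exercise):

```r
# one-step transition probability matrix from Exercise 1.1
P <- matrix(c(0.3, 0.4, 0.3,
              0.2, 0.3, 0.5,
              0.8, 0.1, 0.1), nrow=3, byrow=TRUE)

# two-step transition probability matrix P(2) = P %*% P
P2 <- P %*% P

# P(X0=1, X1=2, X3=3, X5=1) = P12 * P(2)23 * P(2)31
prob <- P[1, 2] * P2[2, 3] * P2[3, 1]
prob  # 0.03536
```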
EXERCISE 1.2. (a) We plot a diagram of the Markov chain.
#specifying transition probability matrix
tm <- matrix(c(1, 0, 0, 0, 0,
               0.5, 0, 0, 0, 0.5,
               0.2, 0, 0, 0, 0.8,
               0, 0, 1, 0, 0,
               0, 0, 0, 1, 0), nrow=5, ncol=5, byrow=TRUE)
#transposing transition probability matrix
tm.tr<- t(tm)
#plotting diagram
library(diagram)
plotmat(tm.tr, arr.length=0.25, arr.width=0.1, box.col="light blue",
box.lwd=1, box.prop=0.5, box.size=0.12, box.type="circle", cex.txt=0.8,
lwd=1, self.cex=0.3, self.shiftx=0.01, self.shifty=0.09)
State 2 is reflective: the chain leaves that state in one step and never returns to it. Therefore, it forms a separate transient class that has an infinite period.
Finally, states 3, 4, and 5 communicate and thus belong to the same class. The chain can return to either state in this class in 3, 6, 9, etc. steps, so the period is equal to 3. Since there is a positive probability of leaving this class, it is transient.
The R output supports these findings.
#creating Markov chain object
library(markovchain)
mc <- new("markovchain", transitionMatrix=tm, states=c("1", "2", "3", "4", "5"))
#computing Markov chain characteristics
recurrentClasses(mc)
"1"
transientClasses(mc)
"2"
"3" "4" "5"
absorbingStates(mc)
"1"
(c) Below we simulate three trajectories of the chain that start at a randomly chosen state.