Solution Manual for Selected Problems
Monte Carlo Statistical Methods, 2nd Edition
Christian P. Robert and George Casella
© 2007 Springer Science+Business Media
This manual has been compiled by Roberto Casarin, Université Dauphine and Università di Brescia, partly from his notes and partly from contributions by Cyrille Joutard, CREST, and Arafat Tayeb, Université Dauphine, under the supervision of the authors. Later additions were made by Christian Robert.
— Second Version, June 27, 2007
Chapter 1
Problem 1.2
Let $X \sim \mathcal{N}(\theta, \sigma^2)$ and $Y \sim \mathcal{N}(\mu, \rho^2)$ be independent, and let $Z = X \wedge Y$. The event $\{Z > z\}$ is a.s. equivalent to $\{X > z\}$ and $\{Y > z\}$. From the independence between $X$ and $Y$, it follows that
\[
P(Z > z) = P(X > z)\,P(Y > z).
\]
Let $G$ be the c.d.f. of $Z$; then
\[
1 - G(z) = \left[1 - \Phi\!\left(\frac{z-\theta}{\sigma}\right)\right]\left[1 - \Phi\!\left(\frac{z-\mu}{\rho}\right)\right].
\]
By taking the derivative and rearranging, we obtain
\[
g(z) = \left[1 - \Phi\!\left(\frac{z-\theta}{\sigma}\right)\right] \rho^{-1}\,\varphi\!\left(\frac{z-\mu}{\rho}\right)
     + \left[1 - \Phi\!\left(\frac{z-\mu}{\rho}\right)\right] \sigma^{-1}\,\varphi\!\left(\frac{z-\theta}{\sigma}\right).
\]
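A short simulation can be used to check this density numerically. The following Python sketch (the parameter values are illustrative assumptions, not taken from the problem) compares a histogram of simulated values of $Z = X \wedge Y$ with the closed-form $g$:

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch with illustrative parameter values: check the density
# of Z = min(X, Y) against a histogram of simulated values.
rng = np.random.default_rng(0)
theta, sigma, mu, rho = 0.0, 1.0, 1.0, 2.0
n = 100_000

z = np.minimum(rng.normal(theta, sigma, n), rng.normal(mu, rho, n))

def g(t):
    # g(t) = [1 - Phi((t-mu)/rho)] * phi((t-theta)/sigma)/sigma
    #      + [1 - Phi((t-theta)/sigma)] * phi((t-mu)/rho)/rho
    return (norm.sf(t, mu, rho) * norm.pdf(t, theta, sigma)
            + norm.sf(t, theta, sigma) * norm.pdf(t, mu, rho))

edges = np.linspace(z.min(), z.max(), 51)
hist, _ = np.histogram(z, bins=edges, density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - g(mid))))  # should be small for large n
```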
Now let $X \sim \mathcal{W}(\alpha, \beta)$ and $Z = X \wedge \omega$. Then
\[
P(X > \omega) = \int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx
\]
and
\[
P(Z = \omega) = P(X > \omega) = \int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx.
\]
We conclude that the p.d.f. of $Z$ is
\[
f(z) = \alpha\beta z^{\alpha-1} e^{-\beta z^\alpha}\,\mathbb{I}_{z \le \omega}
     + \left(\int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx\right) \delta_\omega(z).
\]
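The point mass at $\omega$ can also be checked by simulation. The sketch below (again with illustrative parameter values) samples $X$ by inversion of the Weibull survival function and compares the empirical frequency of $\{Z = \omega\}$ with the tail integral computed by quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the censored variable Z = min(X, omega) for X ~ W(alpha, beta),
# density alpha*beta*x^(alpha-1)*exp(-beta*x^alpha); values are illustrative.
rng = np.random.default_rng(1)
alpha, beta, omega = 1.5, 0.8, 1.2
n = 100_000

# Inverse-CDF sampling: P(X > x) = exp(-beta*x^alpha),
# so X = (-log(U)/beta)^(1/alpha) for U uniform on (0, 1).
x = (-np.log(rng.uniform(size=n)) / beta) ** (1 / alpha)
z = np.minimum(x, omega)

# Compare the empirical point mass P(Z = omega) with the tail integral.
tail, _ = quad(lambda t: alpha * beta * t**(alpha - 1) * np.exp(-beta * t**alpha),
               omega, np.inf)
print(np.mean(z == omega), tail)
```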
Problem 1.4
In order to find an explicit form of the integral
\[
\int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx,
\]
we use the change of variable $y = x^\alpha$. We have $dy = \alpha x^{\alpha-1}\,dx$ and the integral becomes
\[
\int_\omega^\infty \alpha\beta x^{\alpha-1} e^{-\beta x^\alpha}\,dx = \int_{\omega^\alpha}^\infty \beta e^{-\beta y}\,dy = e^{-\beta \omega^\alpha}.
\]
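As a quick sanity check of this closed form, one can compare it against numerical quadrature (parameter values are again illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Verify the closed form: the Weibull tail integral equals
# exp(-beta * omega**alpha); parameter values are illustrative.
alpha, beta, omega = 1.5, 0.8, 1.2

tail, _ = quad(lambda x: alpha * beta * x**(alpha - 1) * np.exp(-beta * x**alpha),
               omega, np.inf)
print(tail, np.exp(-beta * omega**alpha))  # agree up to quadrature error
```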
Problem 1.6
Let $X_1, \ldots, X_n$ be an iid sample from the mixture distribution
\[
f(x) = p_1 f_1(x) + \cdots + p_k f_k(x).
\]
Suppose that the moments up to order $k$ of every $f_j$, $j = 1, \ldots, k$, are finite, and let
\[
m_{i,j} = E(X^i) = \int x^i f_j(x)\,dx,
\]
where $X \sim f_j$. The usual approximation of the moments of $f$ is
\[
\mu_i = \frac{1}{n}\sum_{j=1}^n X_j^i.
\]
Thus, we have the approximation
\[
\mu_i = \sum_{j=1}^k p_j m_{i,j},
\]
for $i = 1, \ldots, k$. This is a linear system that gives $(p_1, \ldots, p_k)$ if the matrix $M = [m_{i,j}]$ is invertible.
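The following Python sketch illustrates the system for an assumed two-component exponential mixture with known rates, using $m_{i,j} = i!/\lambda_j^i$; the rates and weights are illustrative choices:

```python
import numpy as np
from math import factorial

# Moment-matching sketch for mixture weights, assuming known exponential
# components Exp(lambda_j) with E(X^i) = i!/lambda_j^i (illustrative values).
rng = np.random.default_rng(2)
lam = np.array([1.0, 3.0])           # component rates, assumed known
p_true = np.array([0.3, 0.7])
k, n = len(lam), 200_000

# Simulate from the mixture: pick a component, then draw from it.
comp = rng.choice(k, size=n, p=p_true)
x = rng.exponential(1 / lam[comp])

# Empirical moments mu_i and theoretical component moments m_{i,j}.
mu = np.array([np.mean(x**i) for i in range(1, k + 1)])
M = np.array([[factorial(i) / lam[j]**i for j in range(k)]
              for i in range(1, k + 1)])

p_hat = np.linalg.solve(M, mu)       # recovers p_true approximately
print(p_hat)
```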
Problem 1.7
The density $f$ of the vector $Y_n$ is
\[
f(y_n, \mu, \sigma) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n} \exp\left(-\frac{1}{2}\sum_{i=1}^n \left(\frac{y_i - \mu}{\sigma}\right)^2\right), \quad \forall y_n \in \mathbb{R}^n,\ \forall (\mu, \sigma^2) \in \mathbb{R} \times \mathbb{R}_+^*.
\]
This function is strictly positive and its first and second order partial derivatives with respect to $\mu$ and $\sigma$ exist. The same holds for the log-likelihood function
\[
\log(L(\mu, \sigma, y_n)) = -n \log\sqrt{2\pi} - n \log\sigma - \frac{1}{2}\sum_{i=1}^n \left(\frac{y_i - \mu}{\sigma}\right)^2,
\]
thus we can find the ML estimators of $\mu$ and $\sigma^2$. The gradient of the log-likelihood is
\[
\nabla \log(L) = \begin{pmatrix} \dfrac{\partial \log(L(\mu,\sigma,y_n))}{\partial \mu} \\[8pt] \dfrac{\partial \log(L(\mu,\sigma,y_n))}{\partial \sigma} \end{pmatrix}
= \begin{pmatrix} \dfrac{1}{\sigma^2}\displaystyle\sum_{i=1}^n (y_i - \mu) \\[8pt] -\dfrac{n}{\sigma} + \dfrac{\sum_{i=1}^n (y_i - \mu)^2}{\sigma^3} \end{pmatrix}.
\]
If we equate the gradient to the null vector, $\nabla \log(L) = 0$, and solve the resulting system in $\mu$ and $\sigma$, we find
\[
\hat{\mu} = \frac{1}{n}\sum_{i=1}^n y_i = \bar{y}, \qquad
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (y_i - \bar{y})^2 = s^2.
\]
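A numerical optimization of the log-likelihood (up to its additive constant) recovers these closed-form estimators; the sketch below uses simulated data with illustrative parameter values:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch confirming that the closed-form MLEs (ybar, s^2) maximize the
# normal log-likelihood; the sample below is an illustrative assumption.
rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.5, size=1_000)

def neg_log_lik(param):
    # Negative log-likelihood, dropping the constant n*log(sqrt(2*pi));
    # optimize log(sigma) so that sigma stays positive.
    mu, log_sigma = param
    sigma = np.exp(log_sigma)
    return len(y) * np.log(sigma) + 0.5 * np.sum(((y - mu) / sigma) ** 2)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
print(res.x[0], np.exp(2 * res.x[1]))  # numerical mu_hat, sigma_hat^2
print(y.mean(), y.var())               # closed-form ybar, s^2 (ddof = 0)
```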
Problem 1.8
Let $X$ be a r.v. following a mixture of the two exponential distributions $\mathcal{E}xp(1)$ and $\mathcal{E}xp(2)$. The density is
\[
f(x) = \pi e^{-x} + 2(1-\pi)e^{-2x}.
\]
The $s$-th non-central moment of the mixture is