Solutions Manual
Random Signals and Noise: A Mathematical Introduction

Summary: In this chapter we present complete solutions to the exercises set in the
text.
Chapter 1
1. Problem 1. As defined in the problem, A − B is composed of the elements in A that are
not in B. Thus A ∪ B is the disjoint union of A and B − A, while B is the disjoint
union of B − A and A ∩ B. Making use of the additivity of the probability function,
we find that:
P (A ∪ B) = P (A) + P (B − A)
and that:
P (B) = P (B − A) + P (A ∩ B).
Combining the two results, we find that:
P (A ∪ B) = P (A) + P (B) − P (A ∩ B).
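The identity can also be sanity-checked by simulation; here is a minimal sketch with an assumed pair of interval events (the events themselves are not from the text):

```python
# Monte Carlo sanity check of P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
# Assumed events for illustration: for U uniform on [0, 1),
# let A = {U < 0.5} and B = {0.3 < U < 0.8}.
import random

random.seed(0)
n = 200_000
ca = cb = cu = ci = 0
for _ in range(n):
    u = random.random()
    a, b = u < 0.5, 0.3 < u < 0.8
    ca += a          # count of A
    cb += b          # count of B
    cu += a or b     # count of A ∪ B
    ci += a and b    # count of A ∩ B

lhs = cu / n
rhs = (ca + cb - ci) / n
print(lhs, rhs)  # both ≈ P(A ∪ B) = 0.8, and equal by construction
```

Per sample, the indicator of A ∪ B equals the indicator of A plus that of B minus that of A ∩ B, so the two estimates agree exactly, not just approximately.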
2. Problem 2.
(a) It is clear that fX(α) ≥ 0. Thus, we need only check that the integral of
the PDF is equal to 1. We find that:
∫_{−∞}^{∞} fX(α) dα = 0.5 ∫_{−∞}^{∞} e^{−|α|} dα
                    = 0.5 ( ∫_{−∞}^{0} e^{α} dα + ∫_{0}^{∞} e^{−α} dα )
                    = 0.5 (1 + 1)
                    = 1.
Thus fX(α) is indeed a PDF.
(b) Because fX(α) is even, its expected value must be zero. Additionally,
because α²fX(α) is an even function of α, we find that:

∫_{−∞}^{∞} α² fX(α) dα = 2 ∫_{0}^{∞} α² fX(α) dα
                       = ∫_{0}^{∞} α² e^{−α} dα
(by parts)             = (−α² e^{−α})|_{0}^{∞} + 2 ∫_{0}^{∞} α e^{−α} dα
(by parts)             = 2 (−α e^{−α})|_{0}^{∞} + 2 ∫_{0}^{∞} e^{−α} dα
                       = 2.

Thus, E(X²) = 2. As E(X) = 0, we find that σX² = 2 and σX = √2.
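These moments can be confirmed by simulation; a short sketch (the sign-times-exponential construction of a Laplace draw is an assumption of the sketch, not the text's method):

```python
# Numerical check that the standard Laplace density f_X(α) = 0.5·e^{−|α|}
# has E(X) = 0 and E(X²) = 2.
import random

random.seed(1)
n = 500_000
total = total_sq = 0.0
for _ in range(n):
    # A Laplace draw is an Exp(1) magnitude with a random sign.
    x = random.expovariate(1.0) * random.choice((-1.0, 1.0))
    total += x
    total_sq += x * x

mean = total / n
second_moment = total_sq / n
print(mean, second_moment)  # ≈ 0 and ≈ 2
```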
3. Problem 3.
The expected value of the random variable is:

E(X) = (1/(√(2π) σ)) ∫_{−∞}^{∞} α e^{−(α−µ)²/(2σ²)} dα
     = (1/√(2π)) ∫_{−∞}^{∞} (σu + µ) e^{−u²/2} du      (u = (α − µ)/σ)

Clearly the piece of the integral associated with u e^{−u²/2} is zero. The
remaining integral is just µ times the integral of the PDF of the standard normal RV,
and must be equal to µ as advertised.
Now let us consider the variance of the RV; that is, let us consider E((X − µ)²). We
find that:

E((X − µ)²) = (1/(√(2π) σ)) ∫_{−∞}^{∞} (α − µ)² e^{−(α−µ)²/(2σ²)} dα
            = σ² (1/√(2π)) ∫_{−∞}^{∞} u² e^{−u²/2} du      (u = (α − µ)/σ)

As this is just σ² times the variance of a standard normal RV, we find that the
variance here is σ².
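A Monte Carlo confirmation with illustrative parameter values (µ = 3, σ = 1.5 are assumptions of the sketch, not values from the text):

```python
# Monte Carlo check that a Gaussian RV with parameters µ and σ
# has mean µ and variance σ².
import random

random.seed(2)
mu, sigma = 3.0, 1.5
n = 400_000
samples = [random.gauss(mu, sigma) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(mean, var)  # ≈ 3.0 and ≈ 2.25
```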
4. Problem 4.
(a) Clearly (β − α)² ≥ 0. Expanding this and rearranging it a bit, we find that:
β² ≥ 2αβ − α².
(b) Because β² ≥ 2αβ − α², and e^{−a} is a decreasing function of a, the inequality
e^{−β²/2} ≤ e^{−(2αβ−α²)/2} must hold.
(c) Integrating both sides of the inequality from α to ∞, we find that:

∫_{α}^{∞} e^{−β²/2} dβ ≤ ∫_{α}^{∞} e^{−(2αβ−α²)/2} dβ
FIGURE 1.1
The PDF of Problem 6. (Figure not reproduced; its axes run from −2 to 2 and the
density takes the values 1/2 and 0.)
= e^{α²/2} ∫_{α}^{∞} e^{−αβ} dβ
= e^{α²/2} (e^{−αβ}/(−α))|_{α}^{∞}
= e^{α²/2} · e^{−α²}/α
= e^{−α²/2}/α.
The final step is to plug this into the formula given at the beginning of the problem
statement.
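The resulting bound, ∫_α^∞ e^{−β²/2} dβ ≤ e^{−α²/2}/α for α > 0, can be checked numerically; a small sketch (the test values of α are chosen here for illustration):

```python
# Numerical check of the Gaussian tail bound derived above. The left-hand
# side is √(2π)·Q(α), computed via the complementary error function.
import math

for alpha in (0.5, 1.0, 2.0, 3.0):
    lhs = math.sqrt(2 * math.pi) * 0.5 * math.erfc(alpha / math.sqrt(2))
    rhs = math.exp(-alpha ** 2 / 2) / alpha
    print(alpha, lhs <= rhs)  # the bound holds at every α tested
```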
5. Problem 5.
If two random variables are independent, then their joint PDF must be the product of
their marginal PDFs. That is, fXY(α, β) = fX(α)fY(β). The region in which the joint
PDF is non-zero must be the intersection of the regions in which both marginal PDFs
are non-zero. As these regions are strips in the α-β plane, their intersection is a
rectangle in that plane. (Note that for our purposes an infinite region all of whose
borders are at right angles to one another is also considered a rectangle.)
6. Problem 6.
Consider the PDF given in Figure 1.1. It is the union of two rectangular regions.
Thus, it is at least possible that the two random variables are independent. In order
for the random variables to actually be independent it is necessary that fXY(α,
β) = fX(α)fY(β) at all points.
Let us consider the point (−2.5, 2.5). It is clear that fX(−2.5) = 0.5 and fY(2.5) =
0.5. Thus if the random variables were independent, fXY(−2.5, 2.5) would equal
0.5 · 0.5 = 0.25. However, the actual value of the PDF at that point is 0. Thus, the
random variables are not independent.
Are the random variables correlated? Let us consider E(XY ). Because the
probability is only non-zero when either both α and β are positive or both are
negative, it is clear that:
∫∫ αβ fXY(α, β) dα dβ > 0.
It is also easy to see that the marginal PDFs of X and Y are even functions. Thus,
E(X) = E(Y) = 0. We find that E(XY) ≠ E(X)E(Y), and the random variables
are correlated.
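Figure 1.1 is not reproduced here, so as a stand-in we can sketch the same phenomenon with an assumed PDF of the same shape: uniform mass 1/2 on each of the squares [0,1]² and [−1,0]². The marginals are then even (zero means), yet E(XY) > 0:

```python
# Assumed two-square analogue of the Problem 6 PDF (not the figure's exact
# numbers): density 1/2 on [0,1]² and 1/2 on [−1,0]².
import random

random.seed(3)
n = 300_000
sx = sy = sxy = 0.0
for _ in range(n):
    sign = random.choice((-1.0, 1.0))  # pick one of the two squares
    x = sign * random.random()
    y = sign * random.random()
    sx += x
    sy += y
    sxy += x * y

print(sx / n, sy / n, sxy / n)  # E(X) ≈ 0, E(Y) ≈ 0, E(XY) ≈ 0.25
```

With E(X) = E(Y) = 0 and E(XY) > 0, the variables are correlated, exactly as in the solution above.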
7. Problem 7. Making use of the fact that the Xi are zero-mean, the fact that the
Xi have a common variance, and the fact that the Xi are mutually uncorrelated, and
recalling that Q = X1 + X2, R = X2 + X3, and S = X3 + X4, we find that:
E(Q) = E(R) = E(S) = 0
and that:
σQ² = σR² = σS² = E((X3 + X4)²) = E(X3² + 2 X3 X4 + X4²) = 2σX².
Now let us calculate several important expectations. We find that:

E(QR) = E((X1 + X2)(X2 + X3))
      = E(X1 X2 + X1 X3 + X2² + X2 X3)
      = 0 + 0 + σX² + 0
      = σX²,

and that:

E(QS) = E((X1 + X2)(X3 + X4))
      = E(X1 X3 + X1 X4 + X2 X3 + X2 X4)
      = 0 + 0 + 0 + 0
      = 0,

and that:

E(RS) = E((X2 + X3)(X3 + X4))
      = E(X2 X3 + X2 X4 + X3² + X3 X4)
      = 0 + 0 + σX² + 0
      = σX².
@@
SeSiesm iciis
ism ciosloaltaiotinon
,Solutions Manual 5
Making use of the preceding calculations and the definition of the correlation
coefficient, we find that:
ρQR = 1/2, ρQS = 0, ρRS = 1/2.
These results are quite reasonable. If the correlation coefficient really measures the
degree of “sameness,” then as Q and R are “half the same” and Q and S have no
overlap their correlation coefficients ought to be 1/2 and zero respectively. Similarly,
as R and S overlap in half their constituent parts the degree of correlation ought to
be 1/2.
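These correlation coefficients can be confirmed by simulation; a sketch assuming the Xi are i.i.d. standard normal (any zero-mean, equal-variance, uncorrelated choice would do):

```python
# Monte Carlo check of ρ_QR = 1/2 and ρ_QS = 0 for Q = X1 + X2,
# R = X2 + X3, S = X3 + X4 with i.i.d. zero-mean Xi.
import math
import random

random.seed(4)
n = 200_000
sqr = sqs = sq2 = sr2 = ss2 = 0.0
for _ in range(n):
    x1, x2, x3, x4 = (random.gauss(0.0, 1.0) for _ in range(4))
    q, r, s = x1 + x2, x2 + x3, x3 + x4
    sqr += q * r
    sqs += q * s
    sq2 += q * q
    sr2 += r * r
    ss2 += s * s

rho_qr = (sqr / n) / math.sqrt((sq2 / n) * (sr2 / n))
rho_qs = (sqs / n) / math.sqrt((sq2 / n) * (ss2 / n))
print(rho_qr, rho_qs)  # ≈ 0.5 and ≈ 0.0
```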
8. Problem 8.
(a) With fX(α) a pulse of unit height that stretches from −1/2 to 1/2, we find that:
ϕX(t) = ∫_{−1/2}^{1/2} e^{jαt} dα
      = ∫_{−1/2}^{1/2} cos(αt) dα + j ∫_{−1/2}^{1/2} sin(αt) dα
      = (sin(t/2) − sin(−t/2))/t + 0
      = 2 sin(t/2)/t.
(How can this argument be made more precise (correct) when t = 0?)
(b) We must calculate ϕ′X(t)|_{t=0} and ϕ″X(t)|_{t=0}. The easiest way to
do this is to calculate the Taylor series associated with ϕX(t). We
find that:

ϕX(t) = 2(t/2 − (t/2)³/3! + ···)/t
      = 1 − t²/24 + ···
      = ϕX(0) + ϕ′X(0) t + ϕ″X(0) t²/2 + ···.

By inspection, we find that ϕ′X(0) = 0 and ϕ″X(0) = −1/12. We
find that:

j E(X) = 0
−E(X²) = −1/12.

Thus E(X) = 0, and E(X²) = 1/12.
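The closed form for ϕX(t) can be verified numerically; a sketch using midpoint-rule integration (the step count and test points are choices of the sketch):

```python
# Numerical check that the uniform pulse on [−1/2, 1/2] has
# characteristic function ϕ_X(t) = 2·sin(t/2)/t, by approximating
# E(e^{jαt}) with a midpoint rule.
import cmath
import math

def phi_numeric(t, steps=20_000):
    h = 1.0 / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        alpha = -0.5 + (k + 0.5) * h   # midpoint of each subinterval
        total += cmath.exp(1j * alpha * t) * h  # f_X(α) = 1 on the pulse
    return total

for t in (0.5, 1.0, 3.0):
    closed_form = 2 * math.sin(t / 2) / t
    print(t, abs(phi_numeric(t) - closed_form) < 1e-6)
```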
9. Problem 9.
Making use of the definition of the characteristic function, we find that:
ϕX(0) = E(e^{jX·0}) = E(1) = 1.
10. Problem 10. Simply note that:
C_i^N = N!/((N − i)! i!), and C_{N−i}^N = N!/((N − (N − i))! (N − i)!) = N!/(i! (N − i)!).
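The symmetry is easy to confirm numerically; for instance (N = 12 is an arbitrary choice):

```python
# Quick confirmation of the binomial-coefficient symmetry C(N, i) = C(N, N − i).
from math import comb

N = 12
print(all(comb(N, i) == comb(N, N - i) for i in range(N + 1)))  # True
```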
11. Problem 11. The marginal PMF is defined as:
pX(α) ≡ Σ_β pXY(α, β).
Since the joint PMF is non-negative so is the marginal PMF. Let us consider
the sum of the marginal PMF. We find that:
Σ_α pX(α) = Σ_α Σ_β pXY(α, β) = 1.
Thus, the marginal PMF inherits its legitimacy from the joint PMF.
12. Problem 12.
E(g1(X) g2(Y)) = Σ_{α,β} g1(α) g2(β) pXY(α, β)
(independence) = Σ_{α,β} g1(α) g2(β) pX(α) pY(β)
               = Σ_α g1(α) pX(α) Σ_β g2(β) pY(β)
               = E(g1(X)) E(g2(Y)).
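A quick numeric check of the factorization, with example PMFs and functions assumed for illustration (not from the text):

```python
# For independent discrete RVs, E(g1(X)·g2(Y)) = E(g1(X))·E(g2(Y)).
# Example marginal PMFs and functions (assumed):
p_x = {-1: 0.3, 0: 0.2, 2: 0.5}
p_y = {1: 0.6, 4: 0.4}
g1 = lambda a: a * a
g2 = lambda b: b + 1

# Left side: sum over the product-form joint PMF p_X(α)·p_Y(β).
lhs = sum(g1(a) * g2(b) * p_x[a] * p_y[b] for a in p_x for b in p_y)
# Right side: product of the two marginal expectations.
rhs = sum(g1(a) * p_x[a] for a in p_x) * sum(g2(b) * p_y[b] for b in p_y)
print(lhs, rhs)  # both ≈ 7.36
```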
13. Problem 13.
(a) We will consider pX (α). The calculation for pY (β) is identical.
pX(−1) = Σ_β pXY(−1, β) = 1/4 + 1/4 = 1/2.

Similarly, we find that pX(1) = 1/2. Additionally pY(−1) =
pY(1) = 1/2.
(b) In this case we have a simple, finite set of calculations. Let us consider pXY(−1, −1).
We know that this is 1/4. Let us compare this with pX(−1)pY(−1).
We find that this too is 1/4. Checking the other three possible values, we find
that they too agree.
Thus the RVs are independent.
(c)
ϕX(t) = E(e^{jXt}) = e^{j(−1)t}(1/2) + e^{j(1)t}(1/2) = cos(t).
Similarly, ϕY (t) = cos(t).
(d) As the RVs are independent:
ϕX+Y(t) = ϕX(t)ϕY(t) = cos²(t).
(e) The expression given is the definition of:
ϕZ(t) = E(e^{jZt}).
(f) The possible values of X + Y are −2, 0, and 2. We find that the characteristic
function of X + Y is:
ϕX+Y(t) = cos²(t) = (cos(2t) + 1)/2 = e^{j(−2)t}/4 + e^{j0t}/2 + e^{j2t}/4.
Thus, pZ(−2) = pZ(2) = 1/4 and pZ(0) = 1/2.
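The same PMF for Z = X + Y follows from directly convolving the two marginal PMFs; a small sketch:

```python
# PMF of Z = X + Y for independent X and Y with p_X(±1) = p_Y(±1) = 1/2,
# computed by convolving the two marginal PMFs.
from collections import defaultdict

p = {-1: 0.5, 1: 0.5}
p_z = defaultdict(float)
for a, pa in p.items():
    for b, pb in p.items():
        p_z[a + b] += pa * pb  # each (a, b) pair contributes to Z = a + b

print(dict(p_z))  # {-2: 0.25, 0: 0.5, 2: 0.25}
```

This matches the coefficients read off the characteristic function above.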
14. Problem 14.
(a) The relative frequency of the measurement “1” is still one. Thus, it is reasonable
to say that the probability of a “1” occurring should be one.
(b) The random variable X may, at exceedingly infrequent intervals, take values
other than 1. The random variable Y must always equal 1.
15. Problem 15.
(a) The function is clearly always positive. Its integral over the entire real line is:
∫_{−∞}^{∞} fX(α) dα = (1/π) ∫_{−∞}^{∞} 1/(1 + α²) dα = (1/π) tan^{−1}(α)|_{−∞}^{∞} = 1.

Thus, the integral is one, as it must be.
(b) We find that:

FX(α) = (1/π) ∫_{−∞}^{α} 1/(1 + β²) dβ = (1/π) tan^{−1}(α) + 1/2.
(c) Let us consider the integral that defines E(X) carefully. We find that:

(1/π) ∫_{−∞}^{∞} α/(1 + α²) dα = lim_{R0,R1→∞} (1/π) ∫_{−R0}^{R1} α/(1 + α²) dα
                               = lim_{R0,R1→∞} (1/(2π)) ln(1 + α²)|_{−R0}^{R1}
                               = lim_{R0,R1→∞} (1/(2π)) (ln(1 + R1²) − ln(1 + R0²)).

As the value of this limit depends on the relative rates at which R0 and R1 tend
to infinity, the limit does not exist, and E(X) is undefined.
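The practical symptom of the undefined mean can be illustrated by simulation (not part of the text): running averages of standard Cauchy samples, generated here by the inverse-CDF method, do not settle toward any fixed value as the sample size grows.

```python
# Running averages of standard Cauchy samples keep jumping around,
# unlike those of RVs with a finite mean.
import math
import random

random.seed(5)
for n in (1_000, 10_000, 100_000):
    # A standard Cauchy draw is tan(π(U − 1/2)) for U uniform on [0, 1).
    total = sum(math.tan(math.pi * (random.random() - 0.5)) for _ in range(n))
    print(n, total / n)
```

The sample median, by contrast, does converge (to 0), since the distribution itself is perfectly well defined; only its mean is not.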