Solution Manual For PRML

This is a solution manual for Pattern Recognition and Machine Learning by Christopher Bishop. It contains detailed solutions to the exercises in the book, and is helpful for understanding each approach and verifying the correctness of a solution.

Content preview

SOLUTION MANUAL FOR
PATTERN RECOGNITION AND MACHINE LEARNING

EDITED BY

ZHENGQI GAO

State Key Lab. of ASIC and System
School of Microelectronics
Fudan University
Nov. 2017



0.1 Introduction

Problem 1.1 Solution

We set the derivative of the error function E with respect to the vector \mathbf{w} equal to zero, i.e. \partial E / \partial \mathbf{w} = 0; the solution \mathbf{w} = \{w_i\} of this equation minimizes the error function E. To solve the problem, we calculate the derivative of E with respect to every w_i and set each of them to zero. Based on (1.1) and (1.2) we obtain:

\[
\frac{\partial E}{\partial w_i} = \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \} x_n^i = 0
\]
\[
\Rightarrow \quad \sum_{n=1}^{N} y(x_n, \mathbf{w}) \, x_n^i = \sum_{n=1}^{N} x_n^i t_n
\]
\[
\Rightarrow \quad \sum_{n=1}^{N} \Big( \sum_{j=0}^{M} w_j x_n^j \Big) x_n^i = \sum_{n=1}^{N} x_n^i t_n
\]
\[
\Rightarrow \quad \sum_{n=1}^{N} \sum_{j=0}^{M} w_j x_n^{j+i} = \sum_{n=1}^{N} x_n^i t_n
\]
\[
\Rightarrow \quad \sum_{j=0}^{M} \Big( \sum_{n=1}^{N} x_n^{i+j} \Big) w_j = \sum_{n=1}^{N} x_n^i t_n
\]
If we denote A_{ij} = \sum_{n=1}^{N} x_n^{i+j} and T_i = \sum_{n=1}^{N} x_n^i t_n, the equation above can be written exactly as (1.122), i.e. \sum_{j=0}^{M} A_{ij} w_j = T_i. Therefore the problem is solved.
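
We can sanity-check this result numerically. The following minimal Python sketch (ours; the data, the order M, and the noise level are made-up illustrations) builds A and T as defined above, solves the linear system, and compares the result with a direct least-squares fit:

import numpy as np

rng = np.random.default_rng(0)
M = 3                                    # polynomial order (made up for illustration)
x = rng.uniform(-1.0, 1.0, size=20)      # inputs x_n (made up)
t = np.sin(np.pi * x) + 0.1 * rng.standard_normal(20)  # targets t_n (made up)

# A_ij = sum_n x_n^(i+j) and T_i = sum_n x_n^i t_n, as defined above
idx = np.arange(M + 1)
A = np.array([[np.sum(x ** (i + j)) for j in idx] for i in idx])
T = np.array([np.sum(x ** i * t) for i in idx])

w = np.linalg.solve(A, T)                # solve sum_j A_ij w_j = T_i

# Direct least squares on the design matrix Phi_nj = x_n^j gives the same w
Phi = np.vander(x, M + 1, increasing=True)
w_ls, *_ = np.linalg.lstsq(Phi, t, rcond=None)
print(np.allclose(w, w_ls))              # True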

Problem 1.2 Solution

This problem is similar to Prob.1.1; the only difference is the last term on the right-hand side of (1.4), the penalty term. So we proceed exactly as in Prob.1.1:

\[
\frac{\partial \widetilde{E}}{\partial w_i} = \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \} x_n^i + \lambda w_i = 0
\]
\[
\Rightarrow \quad \sum_{j=0}^{M} \sum_{n=1}^{N} x_n^{j+i} w_j + \lambda w_i = \sum_{n=1}^{N} x_n^i t_n
\]
\[
\Rightarrow \quad \sum_{j=0}^{M} \Big\{ \sum_{n=1}^{N} x_n^{j+i} + \delta_{ji} \lambda \Big\} w_j = \sum_{n=1}^{N} x_n^i t_n
\]



where
\[
\delta_{ji} =
\begin{cases}
0, & j \neq i \\
1, & j = i
\end{cases}
\]
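
The same numerical sketch carries over (again with made-up data and a made-up \lambda): adding \lambda to the diagonal of A must make the gradient of the regularized error (1.4) vanish at the solution:

import numpy as np

rng = np.random.default_rng(1)
M, lam = 3, 0.1                          # order and lambda (made up)
x = rng.uniform(-1.0, 1.0, size=20)
t = np.sin(np.pi * x) + 0.1 * rng.standard_normal(20)

idx = np.arange(M + 1)
A = np.array([[np.sum(x ** (i + j)) for j in idx] for i in idx])
T = np.array([np.sum(x ** i * t) for i in idx])

# Regularized system: {A_ij + lambda * delta_ji} w_j = T_i
w = np.linalg.solve(A + lam * np.eye(M + 1), T)

# Gradient of (1.4): Phi^T (Phi w - t) + lambda w, should vanish at the solution
Phi = np.vander(x, M + 1, increasing=True)
grad = Phi.T @ (Phi @ w - t) + lam * w
print(np.allclose(grad, 0.0))            # True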
Problem 1.3 Solution

This problem can be solved by Bayes' theorem. The probability of selecting an apple, P(a), is:
\[
P(a) = P(a|r)P(r) + P(a|b)P(b) + P(a|g)P(g) = \frac{3}{10} \times 0.2 + \frac{1}{2} \times 0.2 + \frac{3}{10} \times 0.6 = 0.34
\]
Based on Bayes' theorem, the probability that a selected orange came from the green box, P(g|o), is:
\[
P(g|o) = \frac{P(o|g)P(g)}{P(o)}
\]
We first calculate the probability of selecting an orange, P(o):
\[
P(o) = P(o|r)P(r) + P(o|b)P(b) + P(o|g)P(g) = \frac{4}{10} \times 0.2 + \frac{1}{2} \times 0.2 + \frac{3}{10} \times 0.6 = 0.36
\]
Therefore we get:
\[
P(g|o) = \frac{P(o|g)P(g)}{P(o)} = \frac{\frac{3}{10} \times 0.6}{0.36} = 0.5
\]
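
The arithmetic can be checked with a few lines of Python (the box probabilities and per-box fruit fractions come from the problem statement in the book):

# Box probabilities and per-box fruit fractions from the problem statement
p_box    = {'r': 0.2, 'b': 0.2, 'g': 0.6}
p_apple  = {'r': 3/10, 'b': 1/2, 'g': 3/10}   # P(a | box)
p_orange = {'r': 4/10, 'b': 1/2, 'g': 3/10}   # P(o | box)

P_a = sum(p_apple[k] * p_box[k] for k in p_box)
P_o = sum(p_orange[k] * p_box[k] for k in p_box)
P_g_given_o = p_orange['g'] * p_box['g'] / P_o

print(P_a, P_o, P_g_given_o)   # ~0.34, ~0.36, 0.5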
Problem 1.4 Solution

This problem requires calculus, in particular the chain rule. According to (1.27), we calculate the derivative of p_y(y) with respect to y:
\[
\frac{d p_y(y)}{dy} = \frac{d \left( p_x(g(y)) \, |g'(y)| \right)}{dy} = \frac{d p_x(g(y))}{dy} \, |g'(y)| + p_x(g(y)) \, \frac{d |g'(y)|}{dy} \quad (*)
\]

The first term in the above equation can be further simplified:
\[
\frac{d p_x(g(y))}{dy} \, |g'(y)| = \frac{d p_x(g(y))}{d g(y)} \, \frac{d g(y)}{dy} \, |g'(y)| \quad (**)
\]
If x̂ is the maximum of the density over x, we have:
\[
\left. \frac{d p_x(x)}{dx} \right|_{\hat{x}} = 0
\]
Therefore, when y = ŷ such that x̂ = g(ŷ), the first term on the right-hand side of (**) is zero, and so the first term in (*) equals zero. However, because of the second term in (*), the derivative may not equal zero. When the transformation is linear, e.g. x = ay + b, the second term in (*) vanishes. A simple example:
\[
p_x(x) = 2x, \quad x \in [0, 1] \quad \Rightarrow \quad \hat{x} = 1
\]
And given that:
\[
x = \sin(y)
\]
we have p_y(y) = 2\sin(y)\,|\cos(y)|, \; y \in [0, \pi/2], which can be simplified:
\[
p_y(y) = \sin(2y), \quad y \in \left[0, \frac{\pi}{2}\right] \quad \Rightarrow \quad \hat{y} = \frac{\pi}{4}
\]
However, it is quite obvious that:
\[
\hat{x} \neq \sin(\hat{y})
\]
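
A quick numerical check of this example (a sketch; the grid resolution is arbitrary) confirms that the transformed density peaks at \pi/4, which does not map to x̂ = 1:

import numpy as np

y = np.linspace(0.0, np.pi / 2, 100001)        # grid over [0, pi/2]
p_y = 2.0 * np.sin(y) * np.abs(np.cos(y))      # equals sin(2y) on this interval

y_hat = y[np.argmax(p_y)]
print(y_hat, np.pi / 4)    # y_hat is (numerically) pi/4
print(np.sin(y_hat))       # ~0.707, not x_hat = 1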

Problem 1.5 Solution

This problem takes advantage of the linearity of expectation:
\[
\begin{aligned}
\mathrm{var}[f] &= \mathbb{E}[(f(x) - \mathbb{E}[f(x)])^2] \\
&= \mathbb{E}[f(x)^2 - 2 f(x)\mathbb{E}[f(x)] + \mathbb{E}[f(x)]^2] \\
&= \mathbb{E}[f(x)^2] - 2\mathbb{E}[f(x)]^2 + \mathbb{E}[f(x)]^2 \\
&= \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2
\end{aligned}
\]
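
The identity can also be illustrated on samples (a Monte Carlo sketch; the choice f(x) = x^2 with x standard normal is our own):

import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(1_000_000) ** 2        # samples of f(x) = x^2 (made up)

lhs = np.mean((f - f.mean()) ** 2)             # E[(f(x) - E[f(x)])^2]
rhs = np.mean(f ** 2) - f.mean() ** 2          # E[f(x)^2] - E[f(x)]^2
print(np.isclose(lhs, rhs))                    # True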

Problem 1.6 Solution

Based on (1.41), we only need to prove that when x and y are independent, \mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\mathbb{E}[y]. Because x and y are independent, we have:
\[
p(x, y) = p_x(x) \, p_y(y)
\]
Therefore:
\[
\iint x y \, p(x, y) \, dx \, dy = \iint x y \, p_x(x) \, p_y(y) \, dx \, dy = \left( \int x \, p_x(x) \, dx \right) \left( \int y \, p_y(y) \, dy \right)
\]
\[
\Rightarrow \quad \mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\mathbb{E}[y]
\]
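
A Monte Carlo sketch (with two made-up independent distributions) illustrates the result:

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 1_000_000)   # made-up distribution for x
y = rng.exponential(2.0, 1_000_000)    # made-up distribution for y, independent of x

# E[xy] and E[x]E[y] agree up to Monte Carlo error
print(np.mean(x * y), np.mean(x) * np.mean(y))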

Problem 1.7 Solution

This problem takes advantage of integration by substitution, here a change to polar coordinates x = r\cos\theta, y = r\sin\theta, so that dx\,dy = r\,dr\,d\theta:
\[
I^2 = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \exp\left(-\frac{1}{2\sigma^2} x^2 - \frac{1}{2\sigma^2} y^2\right) dx \, dy = \int_{0}^{2\pi} \int_{0}^{+\infty} \exp\left(-\frac{r^2}{2\sigma^2}\right) r \, dr \, d\theta = 2\pi\sigma^2
\]
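
Finally, a numerical sketch (using scipy's dblquad over a large finite box, since the Gaussian tails beyond \pm 10\sigma are negligible; the value \sigma = 1.3 is made up) confirms that the double integral equals 2\pi\sigma^2:

import numpy as np
from scipy.integrate import dblquad

sigma = 1.3                                    # made-up value
L = 10.0 * sigma                               # tails beyond +-10 sigma are negligible

val, _ = dblquad(lambda y, x: np.exp(-(x**2 + y**2) / (2.0 * sigma**2)),
                 -L, L, lambda x: -L, lambda x: L)
print(val, 2.0 * np.pi * sigma**2)             # the two values agree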