Exam (elaborations) TEST BANK FOR Adaptive Filter Theory 4th Edition By Simon Haykin (Solution manual only)

CHAPTER 1

1.1 Let

$$r_u(k) = E[u(n)u^*(n-k)] \qquad (1)$$

$$r_y(k) = E[y(n)y^*(n-k)] \qquad (2)$$

We are given that

$$y(n) = u(n+a) - u(n-a) \qquad (3)$$

Hence, substituting Eq. (3) into (2), and then using Eq. (1), we get

$$r_y(k) = E[(u(n+a) - u(n-a))(u^*(n+a-k) - u^*(n-a-k))] = 2r_u(k) - r_u(2a+k) - r_u(-2a+k)$$

1.2 We know that the correlation matrix R is Hermitian; that is,

$$R^H = R$$

Given that the inverse matrix $R^{-1}$ exists, we may write

$$R^{-1}R^H = I$$

where I is the identity matrix. Taking the Hermitian transpose of both sides:

$$RR^{-H} = I$$

Hence,

$$R^{-H} = R^{-1}$$

That is, the inverse matrix $R^{-1}$ is Hermitian.

1.3 For the case of a two-by-two matrix, we may write

$$R_u = R_s + R_\nu = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} + \begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{bmatrix} = \begin{bmatrix} r_{11}+\sigma^2 & r_{12} \\ r_{21} & r_{22}+\sigma^2 \end{bmatrix}$$

For $R_u$ to be nonsingular, we require

$$\det(R_u) = (r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}r_{21} > 0$$

With $r_{12} = r_{21}$ for real data, this condition reduces to

$$(r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}^2 > 0$$

Since this is quadratic in $\sigma^2$, we may impose the following condition on $\sigma^2$ for nonsingularity of $R_u$:

$$\sigma^2 > \frac{1}{2}(r_{11}+r_{22})\left[\left(1 - \frac{4\Delta_r}{(r_{11}+r_{22})^2}\right)^{1/2} - 1\right]$$

where $\Delta_r = r_{11}r_{22} - r_{12}^2$.

1.4 We are given

$$R = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$

This matrix is nonnegative definite because

$$a^T R a = [a_1, a_2]\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = a_1^2 + 2a_1a_2 + a_2^2 = (a_1+a_2)^2 \ge 0$$

for all values of $a_1$ and $a_2$. It is not positive definite, however, since the quadratic form vanishes whenever $a_2 = -a_1$. (Positive definiteness is stronger than nonnegative definiteness.) Moreover, the matrix R is singular because

$$\det(R) = (1)^2 - (1)^2 = 0$$

Hence, it is possible for a matrix to be nonnegative definite and yet singular.

1.5 (a) We are given the partitioned matrix

$$R_{M+1} = \begin{bmatrix} r(0) & r^H \\ r & R_M \end{bmatrix} \qquad (1)$$

Let

$$R_{M+1}^{-1} = \begin{bmatrix} a & b^H \\ b & C \end{bmatrix} \qquad (2)$$

where the scalar a, the vector b, and the matrix C are to be determined. Multiplying (1) by (2):

$$\begin{bmatrix} r(0) & r^H \\ r & R_M \end{bmatrix}\begin{bmatrix} a & b^H \\ b & C \end{bmatrix} = I_{M+1}$$

where $I_{M+1}$ is the identity matrix. Therefore,

$$r(0)a + r^H b = 1 \qquad (3)$$

$$ra + R_M b = 0 \qquad (4)$$

$$rb^H + R_M C = I_M \qquad (5)$$

$$r(0)b^H + r^H C = 0^T \qquad (6)$$

From Eq. (4):

$$b = -R_M^{-1}ra \qquad (7)$$

Hence, substituting (7) into (3):

$$a = \frac{1}{r(0) - r^H R_M^{-1} r} \qquad (8)$$

Correspondingly,

$$b = -\frac{R_M^{-1}r}{r(0) - r^H R_M^{-1} r} \qquad (9)$$

From (5):

$$C = R_M^{-1} - R_M^{-1}rb^H = R_M^{-1} + \frac{R_M^{-1}rr^H R_M^{-1}}{r(0) - r^H R_M^{-1} r} \qquad (10)$$

As a check, the results of Eqs. (9) and (10) should satisfy Eq. (6). Indeed,

$$r(0)b^H + r^H C = -\frac{r(0)r^H R_M^{-1}}{r(0) - r^H R_M^{-1} r} + r^H R_M^{-1} + \frac{r^H R_M^{-1}rr^H R_M^{-1}}{r(0) - r^H R_M^{-1} r} = 0^T$$

We have thus shown that

$$R_{M+1}^{-1} = \begin{bmatrix} 0 & 0^T \\ 0 & R_M^{-1} \end{bmatrix} + a\begin{bmatrix} 1 \\ -R_M^{-1}r \end{bmatrix}\begin{bmatrix} 1 & -r^H R_M^{-1} \end{bmatrix}$$

where the scalar a is defined by Eq. (8).

(b) Next consider the alternative partitioning

$$R_{M+1} = \begin{bmatrix} R_M & r^{B*} \\ r^{BT} & r(0) \end{bmatrix} \qquad (11)$$

Let

$$R_{M+1}^{-1} = \begin{bmatrix} D & e \\ e^H & f \end{bmatrix} \qquad (12)$$

where the matrix D, the vector e, and the scalar f are to be determined. Multiplying (11) by (12):

$$\begin{bmatrix} R_M & r^{B*} \\ r^{BT} & r(0) \end{bmatrix}\begin{bmatrix} D & e \\ e^H & f \end{bmatrix} = I_{M+1}$$

Therefore,

$$R_M D + r^{B*}e^H = I_M \qquad (13)$$

$$R_M e + r^{B*}f = 0 \qquad (14)$$

$$r^{BT}e + r(0)f = 1 \qquad (15)$$

$$r^{BT}D + r(0)e^H = 0^T \qquad (16)$$

From (14):

$$e = -R_M^{-1}r^{B*}f \qquad (17)$$

Hence, from (15) and (17):

$$f = \frac{1}{r(0) - r^{BT}R_M^{-1}r^{B*}} \qquad (18)$$

Correspondingly,

$$e = -\frac{R_M^{-1}r^{B*}}{r(0) - r^{BT}R_M^{-1}r^{B*}} \qquad (19)$$

From (13):

$$D = R_M^{-1} - R_M^{-1}r^{B*}e^H = R_M^{-1} + \frac{R_M^{-1}r^{B*}r^{BT}R_M^{-1}}{r(0) - r^{BT}R_M^{-1}r^{B*}} \qquad (20)$$

As a check, the results of Eqs. (19) and (20) must satisfy Eq. (16). Thus

$$r^{BT}D + r(0)e^H = r^{BT}R_M^{-1} + \frac{r^{BT}R_M^{-1}r^{B*}r^{BT}R_M^{-1}}{r(0) - r^{BT}R_M^{-1}r^{B*}} - \frac{r(0)r^{BT}R_M^{-1}}{r(0) - r^{BT}R_M^{-1}r^{B*}} = 0^T$$

We have thus shown that

$$R_{M+1}^{-1} = \begin{bmatrix} R_M^{-1} & 0 \\ 0^T & 0 \end{bmatrix} + f\begin{bmatrix} -R_M^{-1}r^{B*} \\ 1 \end{bmatrix}\begin{bmatrix} -r^{BT}R_M^{-1} & 1 \end{bmatrix}$$

where the scalar f is defined by Eq. (18).
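As a quick numerical sanity check of the partitioned-inverse formula in Problem 1.5(a), the following sketch (my addition, not part of the manual; the matrix size and random construction are arbitrary choices) builds a Hermitian positive-definite matrix, applies the block formula, and compares it against a direct inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4

# Random Hermitian positive-definite (M+1)x(M+1) "correlation" matrix
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R = A @ A.conj().T + (M + 1) * np.eye(M + 1)

# Partition as in Eq. (1): scalar r(0), vector r, trailing block R_M
r0 = R[0, 0].real          # r(0) is real for a correlation matrix
r = R[1:, 0]               # first column below r(0)
RM = R[1:, 1:]

RM_inv = np.linalg.inv(RM)
a = 1.0 / (r0 - r.conj() @ RM_inv @ r)           # Eq. (8)

# Zero-padded R_M^{-1} plus the rank-one correction of Eq. (8)-(10)
top = np.concatenate(([1.0], -RM_inv @ r))       # column vector [1; -R_M^{-1} r]
block = np.zeros((M + 1, M + 1), dtype=complex)
block[1:, 1:] = RM_inv
R_inv_formula = block + a * np.outer(top, top.conj())

print(np.allclose(R_inv_formula, np.linalg.inv(R)))  # True
```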
1.6 (a) We express the difference equation describing the first-order AR process u(n) as

$$u(n) = v(n) + w_1 u(n-1)$$

where $w_1 = -a_1$. Solving this equation by repeated substitution, we get

$$u(n) = v(n) + w_1 v(n-1) + w_1^2 u(n-2)$$

$$\;\;\vdots$$

$$u(n) = v(n) + w_1 v(n-1) + w_1^2 v(n-2) + \cdots + w_1^{n-1} v(1) \qquad (1)$$

Here we have used the initial condition

$$u(0) = 0, \quad \text{or equivalently} \quad u(1) = v(1)$$

Taking the expected value of both sides of Eq. (1) and using $E[v(n)] = \mu$ for all n, we get the geometric series

$$E[u(n)] = \mu(1 + w_1 + w_1^2 + \cdots + w_1^{n-1}) = \begin{cases} \mu\,\dfrac{1-w_1^n}{1-w_1}, & w_1 \ne 1 \\[2mm] \mu n, & w_1 = 1 \end{cases}$$

This result shows that if $\mu \ne 0$, then E[u(n)] is a function of time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition

$$|a_1| < 1 \quad \text{or} \quad |w_1| < 1$$

then

$$E[u(n)] \to \frac{\mu}{1-w_1} \quad \text{as } n \to \infty$$

Under this condition, we say that the AR process is asymptotically stationary to order one.

(b) When the white noise process v(n) has zero mean, the AR process u(n) will likewise have zero mean. Then

$$\mathrm{var}[u(n)] = E[u^2(n)] \qquad (2)$$

Substituting Eq. (1) into (2), and recognizing that for the white noise process

$$E[v(n)v(k)] = \begin{cases} \sigma_v^2, & n = k \\ 0, & n \ne k \end{cases} \qquad (3)$$

we get the geometric series

$$\mathrm{var}[u(n)] = \sigma_v^2(1 + w_1^2 + w_1^4 + \cdots + w_1^{2n-2}) = \begin{cases} \sigma_v^2\,\dfrac{1-w_1^{2n}}{1-w_1^2}, & |w_1| \ne 1 \\[2mm] \sigma_v^2 n, & |w_1| = 1 \end{cases}$$

When $|a_1| < 1$ or $|w_1| < 1$, then for large n

$$\mathrm{var}[u(n)] \approx \frac{\sigma_v^2}{1-w_1^2} = \frac{\sigma_v^2}{1-a_1^2}$$

(c) The autocorrelation function of the AR process u(n) equals E[u(n)u(n-k)]. Substituting Eq. (1) into this formula, and using Eq. (3), we get (for $k \ge 0$)

$$E[u(n)u(n-k)] = \sigma_v^2(w_1^k + w_1^{k+2} + \cdots + w_1^{2n-k-2}) = \begin{cases} \sigma_v^2 w_1^k\,\dfrac{1-w_1^{2(n-k)}}{1-w_1^2}, & |w_1| \ne 1 \\[2mm] \sigma_v^2(n-k), & |w_1| = 1 \end{cases}$$

For $|a_1| < 1$ or $|w_1| < 1$, we may therefore express this autocorrelation function, for large n, as

$$r(k) = E[u(n)u(n-k)] \approx \frac{\sigma_v^2 w_1^k}{1-w_1^2}$$

Case 1: $0 < a_1 < 1$. In this case, $w_1 = -a_1$ is negative, and r(k) alternates in sign as k runs over $\ldots, -2, -1, 0, +1, +2, \ldots$ [Figure: r(k) versus k for k = -4 to +4, ordinates alternating about zero.]

Case 2: $-1 < a_1 < 0$. In this case, $w_1 = -a_1$ is positive, and r(k) decays geometrically with |k|, remaining positive for all k. [Figure: r(k) versus k for k = -4 to +4, all ordinates positive.]

1.7 (a) The second-order AR process u(n) is described by the difference equation

$$u(n) = u(n-1) - 0.5u(n-2) + v(n)$$

Hence

$$w_1 = 1, \qquad w_2 = -0.5$$

and the AR parameters equal

$$a_1 = -1, \qquad a_2 = 0.5$$

Accordingly, we write the Yule-Walker equations as

$$\begin{bmatrix} r(0) & r(1) \\ r(1) & r(0) \end{bmatrix}\begin{bmatrix} 1 \\ -0.5 \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \end{bmatrix}$$

(b) Writing the Yule-Walker equations in expanded form:

$$r(0) - 0.5r(1) = r(1)$$

$$r(1) - 0.5r(0) = r(2)$$

Solving the first relation for r(1):

$$r(1) = \frac{2}{3}r(0) \qquad (1)$$

Solving the second relation for r(2):

$$r(2) = \frac{1}{6}r(0) \qquad (2)$$

(c) Since the noise v(n) has zero mean, so will the AR process u(n). Hence,

$$\mathrm{var}[u(n)] = E[u^2(n)] = r(0)$$

We know that

$$\sigma_v^2 = \sum_{k=0}^{2} a_k r(k) = r(0) + a_1 r(1) + a_2 r(2), \qquad a_0 = 1 \qquad (3)$$

Substituting (1) and (2) into (3), and solving for r(0), we get

$$r(0) = \frac{\sigma_v^2}{1 + \frac{2}{3}a_1 + \frac{1}{6}a_2} = 2.4\,\sigma_v^2 = 1.2$$

for the given noise variance $\sigma_v^2 = 0.5$.
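The Yule-Walker ratios in Problem 1.7 are easy to confirm by simulation. The following sketch (my addition, not part of the manual; the sample size and seed are arbitrary) generates the AR(2) process with unit-variance driving noise and estimates r(1)/r(0) and r(2)/r(0), which should come out near 2/3 and 1/6:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
v = rng.standard_normal(2_000_000)      # unit-variance white noise

# u(n) - u(n-1) + 0.5 u(n-2) = v(n)  <=>  u(n) = u(n-1) - 0.5 u(n-2) + v(n)
u = lfilter([1.0], [1.0, -1.0, 0.5], v)
u = u[1000:]                            # discard the start-up transient

r0 = np.mean(u * u)
r1 = np.mean(u[1:] * u[:-1])
r2 = np.mean(u[2:] * u[:-2])

print(r1 / r0, r2 / r0)                 # approx. 2/3 and 1/6
print(r0)                               # approx. 2.4 sigma_v^2 (= 2.4 here)
```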
1.8 By definition,

$$P_0 = \text{average power of the AR process } u(n) = E[|u(n)|^2] = r(0) \qquad (1)$$

where r(0) is the autocorrelation function of u(n) for zero lag. We note that the set of AR parameters $\{a_1, a_2, \ldots, a_M\}$ is uniquely determined by the normalized correlations

$$\left\{\frac{r(1)}{r(0)}, \frac{r(2)}{r(0)}, \ldots, \frac{r(M)}{r(0)}\right\}$$

Equivalently, except for the scaling factor r(0),

$$\{a_1, a_2, \ldots, a_M\} \;\rightleftharpoons\; \{r(1), r(2), \ldots, r(M)\} \qquad (2)$$

Combining Eqs. (1) and (2):

$$\{P_0, a_1, a_2, \ldots, a_M\} \;\rightleftharpoons\; \{r(0), r(1), r(2), \ldots, r(M)\} \qquad (3)$$

1.9 (a) The transfer function of the MA model of Fig. 2.3 is

$$H(z) = 1 + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}$$

(b) The transfer function of the ARMA model of Fig. 2.4 is

$$H(z) = \frac{b_0^* + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}}{1 + a_1^* z^{-1} + a_2^* z^{-2} + \cdots + a_M^* z^{-M}}$$

(c) The ARMA model reduces to an AR model when

$$b_1 = b_2 = \cdots = b_K = 0$$

leaving only the constant $b_0$ in the numerator. It reduces to an MA model when

$$a_1 = a_2 = \cdots = a_M = 0$$

1.10 We are given

$$x(n) = v(n) + 0.75v(n-1) + 0.25v(n-2)$$

Taking the z-transforms of both sides:

$$X(z) = (1 + 0.75z^{-1} + 0.25z^{-2})V(z)$$

Hence, the transfer function of the MA model is

$$\frac{X(z)}{V(z)} = 1 + 0.75z^{-1} + 0.25z^{-2} = \frac{1}{(1 + 0.75z^{-1} + 0.25z^{-2})^{-1}} \qquad (1)$$

Using long division, we may perform the following expansion of the denominator in Eq. (1):

$$(1 + 0.75z^{-1} + 0.25z^{-2})^{-1} = 1 - \frac{3}{4}z^{-1} + \frac{5}{16}z^{-2} - \frac{3}{64}z^{-3} - \frac{11}{256}z^{-4} + \frac{45}{1024}z^{-5} - \frac{91}{4096}z^{-6} + \frac{93}{16384}z^{-7} + \frac{85}{65536}z^{-8} - \frac{627}{262144}z^{-9} + \frac{1541}{1048576}z^{-10} - \cdots$$

$$\approx 1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.043z^{-4} + 0.0439z^{-5} - 0.0222z^{-6} + 0.0057z^{-7} + 0.0013z^{-8} - 0.0024z^{-9} + 0.0015z^{-10} \qquad (2)$$

(a) M = 2. Retaining terms in Eq. (2) up to $z^{-2}$, we may approximate the MA model with an AR model of order two as follows:

$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2}}$$

(b) M = 5. Retaining terms in Eq. (2) up to $z^{-5}$, we obtain the following approximation in the form of an AR model of order five:

$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.043z^{-4} + 0.0439z^{-5}}$$

(c) M = 10. Finally, retaining terms in Eq. (2) up to $z^{-10}$, we obtain the following approximation in the form of an AR model of order ten:

$$\frac{X(z)}{V(z)} \approx \frac{1}{D(z)}$$

where D(z) is the polynomial on the right-hand side of Eq. (2), truncated at $z^{-10}$.
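The long-division coefficients in Eq. (2) of Problem 1.10 can be reproduced with a short recursion. This sketch (my addition, not from the manual) inverts the MA polynomial as a power series in $z^{-1}$, using exact fractions so the entries such as 93/16384 drop out directly:

```python
from fractions import Fraction

# Invert B(z) = 1 + (3/4) z^{-1} + (1/4) z^{-2} as a power series C(z):
# B(z) C(z) = 1 gives c[0] = 1 and sum_j b[j] c[n-j] = 0 for n >= 1.
b = [Fraction(1), Fraction(3, 4), Fraction(1, 4)]
c = [Fraction(1)]
for n in range(1, 11):
    c.append(-sum(b[j] * c[n - j] for j in range(1, min(n, 2) + 1)))

for n, cn in enumerate(c):
    print(n, cn, float(cn))
# n=7 gives 93/16384, n=9 gives -627/262144, n=10 gives 1541/1048576
```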
1.11 (a) The filter output is

$$x(n) = w^H u(n)$$

where u(n) is the tap-input vector. The average power of the filter output is therefore

$$E[|x(n)|^2] = E[w^H u(n)u^H(n)w] = w^H E[u(n)u^H(n)]w = w^H R w$$

(b) If u(n) is extracted from a zero-mean white noise of variance $\sigma^2$, we have

$$R = \sigma^2 I$$

where I is the identity matrix. Hence,

$$E[|x(n)|^2] = \sigma^2 w^H w$$

1.12 (a) The process u(n) is a linear combination of Gaussian samples. Hence, u(n) is Gaussian.

(b) From inverse filtering, we recognize that v(n) may also be expressed as a linear combination of samples represented by u(n). Hence, if u(n) is Gaussian, then v(n) is also Gaussian.

1.13 (a) From the Gaussian moment factoring theorem:

$$E[(u_1^* u_2)^k] = E[\underbrace{u_1^* \cdots u_1^*}_{k}\,\underbrace{u_2 \cdots u_2}_{k}] = k!\,\underbrace{E[u_1^* u_2] \cdots E[u_1^* u_2]}_{k} = k!\,(E[u_1^* u_2])^k \qquad (1)$$

(b) Putting $u_2 = u_1 = u$, Eq. (1) reduces to

$$E[|u|^{2k}] = k!\,(E[|u|^2])^k$$
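The identity in Problem 1.13(b) is straightforward to spot-check by Monte Carlo. The sketch below (my addition, not part of the manual) assumes a zero-mean circularly symmetric complex Gaussian variable, which is the setting the moment factoring theorem presumes; the sample size is arbitrary:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000
# Zero-mean circular complex Gaussian with E[|u|^2] = 1
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

p = np.abs(u) ** 2
for k in (1, 2, 3):
    lhs = np.mean(p ** k)                    # E[|u|^{2k}]
    rhs = math.factorial(k) * np.mean(p) ** k
    print(k, lhs, rhs)                       # should agree to within ~1%
```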
1.14 It is not permissible to interchange the order of expectation and limiting operations in Eq. (1.113). The reason is that the expectation is a linear operation, whereas the limiting operation with respect to the number of samples N is nonlinear.

1.15 The filter output is

$$y(n) = \sum_i h(i)u(n-i)$$

Similarly, we may write

$$y(m) = \sum_k h(k)u(m-k)$$

Hence,

$$r_y(n,m) = E[y(n)y^*(m)] = E\left[\sum_i h(i)u(n-i)\sum_k h^*(k)u^*(m-k)\right] = \sum_i\sum_k h(i)h^*(k)E[u(n-i)u^*(m-k)] = \sum_i\sum_k h(i)h^*(k)\,r_u(n-i, m-k)$$

1.16 The mean-square value of the filter output in response to white noise input is

$$P_o = \frac{2\sigma^2\Delta\omega}{\pi}$$

The value $P_o$ is linearly proportional to the filter bandwidth $\Delta\omega$. This relation holds irrespective of how small $\Delta\omega$ is compared to the mid-band frequency of the filter.

1.17 (a) The variance of the filter output is

$$\sigma_y^2 = \frac{2\sigma^2\Delta\omega}{\pi}$$

We are given $\sigma^2 = 0.1\ \mathrm{volt}^2$ and $\Delta\omega = 2\pi \times 1$ radians/sec. Hence,

$$\sigma_y^2 = \frac{2 \times 0.1 \times 2\pi}{\pi} = 0.4\ \mathrm{volt}^2$$

(b) The pdf of the filter output y is

$$f(y) = \frac{1}{\sqrt{2\pi}\,\sigma_y}e^{-y^2/2\sigma_y^2} = \frac{1}{0.63\sqrt{2\pi}}e^{-y^2/0.8}$$

where $\sigma_y = \sqrt{0.4} \approx 0.63$.

1.18 (a) We are given

$$U_k = \sum_{n=0}^{N-1} u(n)\exp(-jn\omega_k), \qquad k = 0, 1, \ldots, N-1$$

where u(n) is real valued and

$$\omega_k = \frac{2\pi}{N}k$$

Hence,

$$E[U_k U_l^*] = \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}\exp(-jn\omega_k + jm\omega_l)E[u(n)u(m)] = \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}\exp(-jn\omega_k + jm\omega_l)\,r(n-m) \qquad (1)$$

By definition, we also have

$$S_k = \sum_{n=0}^{N-1} r(n)\exp(-jn\omega_k)$$

Moreover, since r(n) is periodic with period N, we may invoke the time-shifting property of the discrete Fourier transform to write

$$\sum_{n=0}^{N-1} r(n-m)\exp(-jn\omega_k) = \exp(-jm\omega_k)S_k$$

Thus, recognizing that $\omega_k = (2\pi/N)k$, Eq. (1) reduces to

$$E[U_k U_l^*] = S_k\sum_{m=0}^{N-1}\exp(jm(\omega_l - \omega_k)) = \begin{cases} S_k, & l = k \\ 0, & \text{otherwise} \end{cases}$$

(b) Part (a) shows that the complex spectral samples $U_k$ are uncorrelated. If they are Gaussian, then they will also be statistically independent. Hence,

$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(2\pi)^N\det(\Lambda)}\exp\left(-\frac{1}{2}U^H\Lambda^{-1}U\right)$$

where

$$U = [U_0, U_1, \ldots, U_{N-1}]^T$$

$$\Lambda = \frac{1}{2}E[UU^H] = \frac{1}{2}\mathrm{diag}(S_0, S_1, \ldots, S_{N-1})$$

$$\det(\Lambda) = \frac{1}{2^N}\prod_{k=0}^{N-1} S_k$$

Therefore,

$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(2\pi)^N 2^{-N}\prod_{k=0}^{N-1}S_k}\exp\left(-\frac{1}{2}\sum_{k=0}^{N-1}\frac{|U_k|^2}{\frac{1}{2}S_k}\right) = \pi^{-N}\exp\left(-\sum_{k=0}^{N-1}\left(\frac{|U_k|^2}{S_k} + \ln S_k\right)\right)$$

1.19 The mean-square value of the increment process $dz(\omega)$ is

$$E[|dz(\omega)|^2] = S(\omega)\,d\omega$$

Hence $E[|dz(\omega)|^2]$ is measured in watts.

1.20 The third-order cumulant of a process u(n) is

$$c_3(\tau_1, \tau_2) = E[u(n)u(n+\tau_1)u(n+\tau_2)]$$

which is a third-order moment. All odd-order moments of a Gaussian process are known to be zero; hence,

$$c_3(\tau_1, \tau_2) = 0$$

The fourth-order cumulant is

$$c_4(\tau_1, \tau_2, \tau_3) = E[u(n)u(n+\tau_1)u(n+\tau_2)u(n+\tau_3)] - E[u(n)u(n+\tau_1)]E[u(n+\tau_2)u(n+\tau_3)] - E[u(n)u(n+\tau_2)]E[u(n+\tau_1)u(n+\tau_3)] - E[u(n)u(n+\tau_3)]E[u(n+\tau_1)u(n+\tau_2)]$$

For the special case of $\tau = \tau_1 = \tau_2 = \tau_3$, the fourth-order moment of a zero-mean Gaussian process of variance $\sigma^2$ is $3\sigma^4$, and its second-order moment is $\sigma^2$. Hence, the fourth-order cumulant is zero. Indeed, all cumulants of order higher than two are zero for a Gaussian process.

1.21 The trispectrum is

$$C_4(\omega_1, \omega_2, \omega_3) = \sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty}\sum_{\tau_3=-\infty}^{\infty} c_4(\tau_1, \tau_2, \tau_3)e^{-j(\omega_1\tau_1 + \omega_2\tau_2 + \omega_3\tau_3)}$$

Let the process be passed through a three-dimensional band-pass filter centered on $\omega_1$, $\omega_2$, and $\omega_3$. We assume that the bandwidth (along each dimension) is small compared with the respective center frequency. The average power of the filter output is then proportional to the trispectrum $C_4(\omega_1, \omega_2, \omega_3)$.

1.22 (a) Starting with the formula

$$c_k(\tau_1, \tau_2, \ldots, \tau_{k-1}) = \gamma_k\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1}\cdots h_{i+\tau_{k-1}}$$

the third-order cumulant of the filter output is

$$c_3(\tau_1, \tau_2) = \gamma_3\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1}h_{i+\tau_2}$$

where $\gamma_3$ is the third-order cumulant of the filter input. The bispectrum is

$$C_3(\omega_1, \omega_2) = \sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty} c_3(\tau_1, \tau_2)e^{-j(\omega_1\tau_1 + \omega_2\tau_2)} = \gamma_3\sum_{i=-\infty}^{\infty}\sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty} h_i h_{i+\tau_1}h_{i+\tau_2}e^{-j(\omega_1\tau_1 + \omega_2\tau_2)}$$

Hence,

$$C_3(\omega_1, \omega_2) = \gamma_3 H(e^{j\omega_1})H(e^{j\omega_2})H^*(e^{j(\omega_1 + \omega_2)})$$

(b) From this formula, we immediately deduce that

$$\arg[C_3(\omega_1, \omega_2)] = \arg[H(e^{j\omega_1})] + \arg[H(e^{j\omega_2})] - \arg[H(e^{j(\omega_1 + \omega_2)})]$$

1.23 The output of a filter of impulse response $h_i$ due to an input u(i) is given by the convolution sum

$$y(n) = \sum_i h_i u(n-i)$$

The third-order cumulant of the filter output is

$$C_3(\tau_1, \tau_2) = E[y(n)y(n+\tau_1)y(n+\tau_2)] = E\left[\sum_i h_i u(n-i)\sum_k h_k u(n+\tau_1-k)\sum_l h_l u(n+\tau_2-l)\right] = \sum_i\sum_k\sum_l h_i h_{k+\tau_1}h_{l+\tau_2}E[u(n-i)u(n-k)u(n-l)]$$

For an input sequence of independent and identically distributed random variables, we note that

$$E[u(n-i)u(n-k)u(n-l)] = \begin{cases} \gamma_3, & i = k = l \\ 0, & \text{otherwise} \end{cases}$$

Hence,

$$C_3(\tau_1, \tau_2) = \gamma_3\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1}h_{i+\tau_2}$$

In general, we may thus write

$$C_k(\tau_1, \tau_2, \ldots, \tau_{k-1}) = \gamma_k\sum_{i=-\infty}^{\infty} h_i h_{i+\tau_1}\cdots h_{i+\tau_{k-1}}$$

1.24 By definition:

$$r^{(\alpha)}(k) = \frac{1}{N}\sum_{n=0}^{N-1}E[u(n)u^*(n-k)e^{-j2\pi\alpha n}]e^{j\pi\alpha k}$$

Hence,

$$r^{(\alpha)}(-k) = \frac{1}{N}\sum_{n=0}^{N-1}E[u(n)u^*(n+k)e^{-j2\pi\alpha n}]e^{-j\pi\alpha k}$$

and

$$r^{(\alpha)*}(k) = \frac{1}{N}\sum_{n=0}^{N-1}E[u^*(n)u(n-k)e^{j2\pi\alpha n}]e^{-j\pi\alpha k}$$

We are told that the process u(n) is cyclostationary, which means that

$$E[u(n)u^*(n+k)e^{-j2\pi\alpha n}] = E[u^*(n)u(n-k)e^{j2\pi\alpha n}]$$

It follows therefore that

$$r^{(\alpha)}(-k) = r^{(\alpha)*}(k)$$

1.25 For $\alpha = 0$, the input to the time-average cross-correlator reduces to the squared amplitude of a narrow-band filter with mid-band frequency $\omega$. Correspondingly, the time-average cross-correlator reduces to an average power meter. Thus, for $\alpha = 0$, the instrumentation of Fig. 1.16 reduces to that of Fig. 1.13.

CHAPTER 2

2.1 (a) Let

$$w_k = x + jy, \qquad p(-k) = a + jb$$

We may then write

$$f = w_k p^*(-k) = (x+jy)(a-jb) = (ax+by) + j(ay-bx)$$

Let $f = u + jv$ with

$$u = ax + by, \qquad v = ay - bx$$

Hence,

$$\frac{\partial u}{\partial x} = a, \quad \frac{\partial u}{\partial y} = b, \quad \frac{\partial v}{\partial y} = a, \quad \frac{\partial v}{\partial x} = -b$$

From these results we immediately see that

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$$

In other words, the product term $w_k p^*(-k)$ satisfies the Cauchy-Riemann equations, and so this term is analytic.

(b) Let

$$f = w_k^* p(-k) = (x-jy)(a+jb) = (ax+by) + j(bx-ay)$$

Let $f = u + jv$ with

$$u = ax + by, \qquad v = bx - ay$$

Hence,

$$\frac{\partial u}{\partial x} = a, \quad \frac{\partial u}{\partial y} = b, \quad \frac{\partial v}{\partial x} = b, \quad \frac{\partial v}{\partial y} = -a$$

From these results we immediately see that

$$\frac{\partial u}{\partial x} \ne \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} \ne -\frac{\partial u}{\partial y}$$

In other words, the product term $w_k^* p(-k)$ does not satisfy the Cauchy-Riemann equations, and so this term is not analytic.
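A quick numerical illustration of Problem 2.1 (my sketch, not from the manual; the values of p and w are arbitrary test points): for an analytic function the difference quotient $(f(w+h)-f(w))/h$ approaches the same limit from every direction of h in the complex plane, whereas for $f(w) = w^*p(-k)$ the quotient depends on the direction of approach.

```python
import numpy as np

p = 0.8 - 0.3j          # stands in for p(-k); arbitrary test value
w = 1.2 + 0.7j          # arbitrary evaluation point
h = 1e-6

for name, f in [("w * conj(p)", lambda w: w * np.conj(p)),
                ("conj(w) * p", lambda w: np.conj(w) * p)]:
    d_real = (f(w + h) - f(w)) / h               # approach along the real axis
    d_imag = (f(w + 1j * h) - f(w)) / (1j * h)   # approach along the imaginary axis
    print(name, d_real, d_imag)
# The two quotients agree for w * conj(p) (analytic) and differ in sign
# for conj(w) * p (not analytic), consistent with the Cauchy-Riemann test.
```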
2.2 (a) From the Wiener-Hopf equation, we have

$$w_o = R^{-1}p \qquad (1)$$

We are given

$$R = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad p = \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}$$

Hence, the inverse matrix $R^{-1}$ is

$$R^{-1} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}^{-1} = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}$$

Using Eq. (1), we therefore get

$$w_o = \frac{1}{0.75}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 1.5 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$$

(b) The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - p^H w_o = \sigma_d^2 - [0.5, 0.25]\begin{bmatrix} 0.5 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$$

(c) The eigenvalues of the matrix R are the roots of the characteristic equation

$$(1-\lambda)^2 - (0.5)^2 = 0$$

That is, the two roots are

$$\lambda_1 = 0.5 \quad \text{and} \quad \lambda_2 = 1.5$$

The associated eigenvectors are defined by

$$Rq = \lambda q$$

For $\lambda_1 = 0.5$, we have

$$\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix} = 0.5\begin{bmatrix} q_{11} \\ q_{12} \end{bmatrix}$$

Expanding:

$$q_{11} + 0.5q_{12} = 0.5q_{11}$$

$$0.5q_{11} + q_{12} = 0.5q_{12}$$

Therefore,

$$q_{11} = -q_{12}$$

Normalizing the eigenvector $q_1$ to unit length, we therefore have

$$q_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

Similarly, for the eigenvalue $\lambda_2 = 1.5$, we may show that

$$q_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$$w_o = \left(\sum_{i=1}^{2}\frac{1}{\lambda_i}q_i q_i^H\right)p = \left(\frac{1}{\lambda_1}q_1 q_1^H + \frac{1}{\lambda_2}q_2 q_2^H\right)p = \left(\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} + \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\right)\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}$$

2.3 (a) From the Wiener-Hopf equation we have

$$w_o = R^{-1}p \qquad (1)$$

We are given

$$R = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}$$

and

$$p = [0.5, 0.25, 0.125]^T$$

Hence, the use of these values in Eq. (1) yields

$$w_o = \begin{bmatrix} 1 & 0.5 & 0.25 \\ 0.5 & 1 & 0.5 \\ 0.25 & 0.5 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 1.33 & -0.67 & 0 \\ -0.67 & 1.67 & -0.67 \\ 0 & -0.67 & 1.33 \end{bmatrix}\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix} = \begin{bmatrix} 0.5 \\ 0 \\ 0 \end{bmatrix}$$

(b) The minimum mean-square error is

$$J_{\min} = \sigma_d^2 - p^H w_o = \sigma_d^2 - [0.5, 0.25, 0.125]\begin{bmatrix} 0.5 \\ 0 \\ 0 \end{bmatrix} = \sigma_d^2 - 0.25$$

(c) The eigenvalues of the matrix R are

$$\lambda = 0.4069, \quad 0.75, \quad 1.8431$$

The corresponding eigenvectors constitute the orthogonal matrix

$$Q = \begin{bmatrix} -0.4544 & -0.7071 & 0.5418 \\ 0.7662 & 0 & 0.6426 \\ -0.4544 & 0.7071 & 0.5418 \end{bmatrix}$$

Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:

$$w_o = \left(\sum_{i=1}^{3}\frac{1}{\lambda_i}q_i q_i^H\right)p$$

$$= \left[\frac{1}{0.4069}\begin{bmatrix} 0.2065 & -0.3482 & 0.2065 \\ -0.3482 & 0.5871 & -0.3482 \\ 0.2065 & -0.3482 & 0.2065 \end{bmatrix} + \frac{1}{0.75}\begin{bmatrix} 0.5 & 0 & -0.5 \\ 0 & 0 & 0 \\ -0.5 & 0 & 0.5 \end{bmatrix} + \frac{1}{1.8431}\begin{bmatrix} 0.2935 & 0.3482 & 0.2935 \\ 0.3482 & 0.4129 & 0.3482 \\ 0.2935 & 0.3482 & 0.2935 \end{bmatrix}\right]\begin{bmatrix} 0.5 \\ 0.25 \\ 0.125 \end{bmatrix}$$

2.4 By definition, the correlation matrix is

$$R = E[u(n)u^H(n)]$$

where u(n) is the tap-input vector. Invoking the ergodicity theorem, R may be computed as the time average

$$\hat{R}(N) = \frac{1}{N}\sum_{n=1}^{N} u(n)u^H(n)$$

Likewise, we may compute the cross-correlation vector as the time average

$$\hat{p}(N) = \frac{1}{N}\sum_{n=1}^{N} u(n)d^*(n)$$

where d(n) is the desired response. The tap-weight vector of the Wiener filter is thus defined by

$$\hat{w}_o = \hat{R}^{-1}(N)\,\hat{p}(N)$$
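The numbers in Problems 2.2 and 2.3 are easy to reproduce. The following sketch (my addition, not part of the manual) solves the Wiener-Hopf equation for the 3-by-3 case of Problem 2.3 directly and then again via the eigendecomposition, confirming that both routes give the same tap-weight vector:

```python
import numpy as np

# Problem 2.3: Wiener-Hopf solution and eigendecomposition check
R = np.array([[1.0, 0.5, 0.25],
              [0.5, 1.0, 0.5],
              [0.25, 0.5, 1.0]])
p = np.array([0.5, 0.25, 0.125])

wo = np.linalg.solve(R, p)
print(wo)                       # [0.5, 0.0, 0.0]

# Same result via R = Q diag(lam) Q^T (eigh returns ascending eigenvalues)
lam, Q = np.linalg.eigh(R)
print(lam)                      # approx. [0.4069, 0.75, 1.8431]
wo_eig = sum((1.0 / lam[i]) * np.outer(Q[:, i], Q[:, i]) for i in range(3)) @ p
print(wo_eig)                   # matches wo

# J_min = sigma_d^2 - p^T wo; the reduction below sigma_d^2 is:
print(p @ wo)                   # 0.25
```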
