Exam (elaborations) TEST BANK FOR Applied Linear Algebra By Peter J. Olver and Chehrzad Shakiban (Instructor's Solution Manual)

Document information

Pages: 358
Grade: A+
Uploaded on: 07-11-2021
Written in: 2021/2022
Type: Exam
Contains: Questions and answers

Applied Linear Algebra
Instructor's Solutions Manual
by Peter J. Olver and Chehrzad Shakiban

Table of Contents

1. Linear Algebraic Systems (p. 1)
2. Vector Spaces and Bases (p. 46)
3. Inner Products and Norms (p. 78)
4. Minimization and Least Squares Approximation (p. 114)
5. Orthogonality (p. 131)
6. Equilibrium (p. 174)
7. Linearity (p. 193)
8. Eigenvalues (p. 226)
9. Linear Dynamical Systems (p. 262)
10. Iteration of Linear Systems (p. 306)
11. Boundary Value Problems in One Dimension (p. 346)

Solutions — Chapter 1

(Matrices below are written row by row, with semicolons separating rows.)

1.1.1.
(a) Reduce the system to x - y = 7, 3y = -4; then use Back Substitution to solve for x = 17/3, y = -4/3.
(b) Reduce the system to 6u + v = 5, -(5/2)v = 5/2; then use Back Substitution to solve for u = 1, v = -1.
(c) Reduce the system to p + q - r = 0, -3q + 5r = 3, -r = 6; then solve for p = 5, q = -11, r = -6.
(d) Reduce the system to 2u - v + 2w = 2, -(3/2)v + 4w = 2, -w = 0; then solve for u = 1/3, v = -4/3, w = 0.
(e) Reduce the system to 5x1 + 3x2 - x3 = 9, (1/5)x2 - (2/5)x3 = 2/5, 2x3 = -2; then solve for x1 = 4, x2 = -4, x3 = -1.
(f) Reduce the system to x + z - 2w = -3, -y + 3w = 1, -4z - 16w = -4, 6w = 6; then solve for x = 2, y = 2, z = -3, w = 1.
(g) Reduce the system to 3x1 + x2 = 1, (8/3)x2 + x3 = 2/3, (21/8)x3 + x4 = 3/4, (55/21)x4 = 5/7; then solve for x1 = 3/11, x2 = 2/11, x3 = 2/11, x4 = 3/11.

1.1.2. Plugging in the given values of x, y and z gives a + 2b - c = 3, a - 2 - c = 1, 1 + 2b + c = 2. Solving this system yields a = 4, b = 0, and c = 1.

1.1.3.
(a) With Forward Substitution, we just start with the top equation and work down. Thus 2x = -6, so x = -3. Plugging this into the second equation gives 12 + 3y = 3, and so y = -3. Plugging the values of x and y into the third equation yields -3 + 4(-3) - z = 7, and so z = -22.
(b) We will get a diagonal system with the same solution.
(c) Start with the last equation and, assuming the coefficient of the last variable is nonzero, use the operation to eliminate the last variable in all the preceding equations. Then, again assuming the coefficient of the next-to-last variable is nonzero, eliminate it from all but the last two equations, and so on.
(d) For the systems in Exercise 1.1.1, the method works in all cases except (c) and (f). Solving the reduced system by Forward Substitution reproduces the same solution (as it must):
(a) The system reduces to (3/2)x = 17/2, x + 2y = 3.
(b) The reduced system is (15/2)u = 15/2, 3u - 2v = 5.
(c) The method doesn't work since r doesn't appear in the last equation.
(d) Reduce the system to (3/2)u = 1/2, (7/2)u - v = 5/2, 3u - 2w = -1.
(e) Reduce the system to (2/3)x1 = 8/3, 4x1 + 3x2 = 4, x1 + x2 + x3 = -1.
(f) Doesn't work since, after the first reduction, z doesn't occur in the next-to-last equation.
(g) Reduce the system to (55/21)x1 = 5/7, x1 + (21/8)x2 = 3/4, x2 + (8/3)x3 = 2/3, x3 + 3x4 = 1.
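All of the reductions in 1.1.1 finish with Back Substitution on an upper triangular system. As an added illustration (not part of the manual), here is a minimal NumPy sketch of that step, checked on the reduced system of 1.1.1(a); the function name and the use of NumPy are my own choices.

    import numpy as np

    def back_substitution(U, c):
        # Solve U x = c for an upper triangular U with nonzero diagonal entries.
        n = len(c)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    # Reduced system from 1.1.1(a):  x - y = 7,  3y = -4
    U = np.array([[1.0, -1.0],
                  [0.0,  3.0]])
    c = np.array([7.0, -4.0])
    print(back_substitution(U, c))   # [ 5.6667 -1.3333 ], i.e. x = 17/3, y = -4/3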
1.2.1. (a) 3 × 4, (b) 7, (c) 6, (d) the row vector (-2 0 1 2), (e) the column vector [0; 2; -6].

1.2.2. (a) [1 2 3; 4 5 6; 7 8 9], (b) [1 2 3; 1 4 5], (c) [1 2 3 4; 4 5 6 7; 7 8 9 3], (d) (1 2 3 4), (e) [1; 2; 3], (f) (1).

1.2.3. x = -1/3, y = 4/3, z = -1/3, w = 2/3.

1.2.4.
(a) A = [1 -1; 1 2], x = [x; y], b = [7; 3];
(b) A = [6 1; 3 -2], x = [u; v], b = [5; 5];
(c) A = [1 1 -1; 2 -1 3; -1 -1 0], x = [p; q; r], b = [0; 3; 6];
(d) A = [2 1 2; -1 3 3; 4 -3 0], x = [u; v; w], b = [3; -2; 7];
(e) A = [5 3 -1; 3 2 -1; 1 1 2], x = [x1; x2; x3], b = [9; 5; -1];
(f) A = [1 0 1 -2; 2 -1 2 -1; 0 -6 -4 2; 1 3 2 -1], x = [x; y; z; w], b = [-3; 3; 2; 1];
(g) A = [3 1 0 0; 1 3 1 0; 0 1 3 1; 0 0 1 3], x = [x1; x2; x3; x4], b = [1; 1; 1; 1].

1.2.5.
(a) x - y = -1, 2x + 3y = -3. The solution is x = -6/5, y = -1/5.
(b) u + w = -1, u + v = -1, v + w = 2. The solution is u = -2, v = 1, w = 1.
(c) 3x1 - x3 = 1, -2x1 - x2 = 0, x1 + x2 - 3x3 = 1. The solution is x1 = 1/5, x2 = -2/5, x3 = -2/5.
(d) x + y - z - w = 0, -x + z + 2w = 4, x - y + z = 1, 2y - z + w = 5. The solution is x = 2, y = 1, z = 0, w = 3.

1.2.6. (a) I is the 4 × 4 identity matrix and O is the 4 × 4 zero matrix. (b) I + O = I, I O = O I = O. No, it does not.

1.2.7. (a) undefined, (b) undefined, (c) [3 6 0; -1 4 2], (d) undefined, (e) undefined, (f) [1 11 9; 3 -12 -12; 7 8 8], (g) undefined, (h) [9 -2 14; -8 6 -17; 12 -3 28], (i) undefined.

1.2.8. Only the third pair commute.

1.2.9. 1, 6, 11, 16.

1.2.10. (a) [1 0 0; 0 0 0; 0 0 -1], (b) [2 0 0 0; 0 -2 0 0; 0 0 3 0; 0 0 0 -3].

1.2.11. (a) True, (b) true.

1.2.12. (a) Let A = [x y; z w]. Then A D = [ax by; az bw], while D A = [ax ay; bz bw], so if a ≠ b these are equal if and only if y = z = 0. (b) Every 2 × 2 matrix commutes with [a 0; 0 a] = a I. (c) Only 3 × 3 diagonal matrices. (d) Any matrix of the form A = [x 0 0; 0 y z; 0 u v]. (e) Let D = diag(d1, ..., dn). The (i, j) entry of A D is a_ij d_j, while the (i, j) entry of D A is d_i a_ij. If d_i ≠ d_j, this requires a_ij = 0, and hence, if all the d_i are different, then A is diagonal.

1.2.13. We need A of size m × n and B of size n × m for both products to be defined. Further, A B has size m × m while B A has size n × n, so the sizes agree if and only if m = n.

1.2.14. B = [x y; 0 x], where x, y are arbitrary.

1.2.15. (a) (A + B)^2 = (A + B)(A + B) = AA + AB + BA + BB = A^2 + 2AB + B^2, since AB = BA. (b) An example: A = [1 2; 0 1], B = [0 0; 1 0].

1.2.16. If AB is defined and A is an m × n matrix, then B is an n × p matrix and AB is an m × p matrix; on the other hand, if BA is defined we must have p = m, and BA is an n × n matrix. Now, since AB = BA, we must have p = m = n.

1.2.17. A O_{n×p} = O_{m×p}, O_{l×m} A = O_{l×n}.

1.2.18. The (i, j) entry of the matrix equation c A = O is c a_ij = 0. If any a_ij ≠ 0 then c = 0, so the only possible way that c ≠ 0 is if all a_ij = 0 and hence A = O.

1.2.19. False: for example, [1 0; 0 0] [0 0; 1 0] = [0 0; 0 0].

1.2.20. False, unless they commute: AB = BA.

1.2.21. Let v be the column vector with 1 in its jth position and all other entries 0. Then A v is the same as the jth column of A. Thus, the hypothesis implies all columns of A are 0 and hence A = O.
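Several of the preceding answers (1.2.8, 1.2.15, 1.2.19, 1.2.20) turn on the fact that matrix multiplication is not commutative. The following lines, added here purely as an illustration, check the zero-divisor example from 1.2.19 and the pair of matrices given in 1.2.15(b) numerically.

    import numpy as np

    # 1.2.19: a product of nonzero matrices can be the zero matrix
    print(np.array([[1, 0], [0, 0]]) @ np.array([[0, 0], [1, 0]]))   # [[0 0], [0 0]]

    # 1.2.15(b): A and B do not commute, so (A + B)^2 differs from A^2 + 2AB + B^2
    A = np.array([[1, 2],
                  [0, 1]])
    B = np.array([[0, 0],
                  [1, 0]])
    print(A @ B)                                    # [[2 0], [1 0]]
    print(B @ A)                                    # [[0 0], [1 2]]
    lhs = (A + B) @ (A + B)
    rhs = A @ A + 2 * (A @ B) + B @ B
    print(np.array_equal(lhs, rhs))                 # False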
1.2.22. (a) A must be a square matrix. (b) By associativity, A A^2 = A A A = A^2 A = A^3. (c) The naive answer is n - 1. A more sophisticated answer is to note that you can compute A^2 = A A, A^4 = A^2 A^2, A^8 = A^4 A^4, and, by induction, A^(2^r) with only r matrix multiplications. More generally, if the binary expansion of n has r + 1 digits, with s nonzero digits, then we need r + s - 1 multiplications. For example, A^13 = A^8 A^4 A since 13 is 1101 in binary, for a total of 5 multiplications: 3 to compute A^2, A^4 and A^8, and 2 more to multiply them together to obtain A^13.
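A small sketch of the repeated-squaring idea from 1.2.22(c), added here as an illustration (the function and the multiplication counter are mine, not the manual's); for n = 13 it performs exactly the 5 multiplications counted above.

    import numpy as np

    def matrix_power_binary(A, n):
        # Compute A**n by repeated squaring; also count the multiplications used.
        result = np.eye(A.shape[0], dtype=A.dtype)
        square = A.copy()
        mults = 0
        first = True
        while n > 0:
            if n & 1:
                if first:
                    result = square.copy()   # the first selected factor costs nothing
                    first = False
                else:
                    result = result @ square
                    mults += 1
            n >>= 1
            if n > 0:
                square = square @ square     # one multiplication per squaring
                mults += 1
        return result, mults

    A = np.array([[1, 1],
                  [0, 1]])
    P, m = matrix_power_binary(A, 13)        # 13 = 1101 in binary
    print(P)                                  # A^13 = [[1, 13], [0, 1]]
    print(m)                                  # 5 multiplications, as in 1.2.22(c)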
1.2.23. A = [0 1; 0 0].

1.2.24. (a) If the ith row of A has all zero entries, then the (i, j) entry of AB is a_i1 b_1j + ... + a_in b_nj = 0 b_1j + ... + 0 b_nj = 0, which holds for all j, so the ith row of AB will have all 0's. (b) If A = [1 1; 0 0] and B = [1 2; 3 4], then BA = [1 1; 3 3].

1.2.25. The same solution X = [-1 1; 3 -2] in both cases.

1.2.26. (a) [4 5; 1 2], (b) [5 -1; -2 1]. They are not the same.

1.2.27. (a) X = O. (b) Yes; for instance, A = [1 2; 0 1], B = [3 2; -2 -1], X = [1 0; 1 1].

1.2.28. A = (1/c) I when c ≠ 0. If c = 0 there is no solution.

1.2.29. (a) The ith entry of A z is 1 a_i1 + 1 a_i2 + ... + 1 a_in = a_i1 + ... + a_in, which is the ith row sum. (b) Each row of W has n - 1 entries equal to 1/n and one entry equal to (1 - n)/n, and so its row sums are (n - 1)(1/n) + (1 - n)/n = 0. Therefore, by part (a), W z = 0. Consequently, the row sums of B = A W are the entries of B z = A W z = A 0 = 0, and the result follows. (c) z = [1; 1; 1], and so A z = [1 2 -1; 2 1 3; -4 5 -1] [1; 1; 1] = [2; 6; 0], while B = A W = [1 2 -1; 2 1 3; -4 5 -1] [-2/3 1/3 1/3; 1/3 -2/3 1/3; 1/3 1/3 -2/3] = [-1/3 -4/3 5/3; 0 1 -1; 4 -5 1], and so B z = [0; 0; 0].

1.2.30. Assume A has size m × n, B has size n × p and C has size p × q. The (k, j) entry of BC is sum_{l=1}^{p} b_kl c_lj, so the (i, j) entry of A(BC) is sum_{k=1}^{n} a_ik ( sum_{l=1}^{p} b_kl c_lj ) = sum_{k=1}^{n} sum_{l=1}^{p} a_ik b_kl c_lj. On the other hand, the (i, l) entry of AB is sum_{k=1}^{n} a_ik b_kl, so the (i, j) entry of (AB)C is sum_{l=1}^{p} ( sum_{k=1}^{n} a_ik b_kl ) c_lj = sum_{k=1}^{n} sum_{l=1}^{p} a_ik b_kl c_lj. The two results agree, and so A(BC) = (AB)C. Remark: a more sophisticated, simpler proof can be found in Exercise 7.1.44.

1.2.31. (a) We need AB and BA to have the same size, and so this follows from Exercise 1.2.13. (b) AB - BA = O if and only if AB = BA. (c) (i) [-1 2; 6 1], (ii) [0 0; 0 0], (iii) [0 1 1; 1 0 1; -1 1 0]. (d) (i) [cA + dB, C] = (cA + dB)C - C(cA + dB) = c(AC - CA) + d(BC - CB) = c[A, C] + d[B, C], and [A, cB + dC] = A(cB + dC) - (cB + dC)A = c(AB - BA) + d(AC - CA) = c[A, B] + d[A, C]. (ii) [A, B] = AB - BA = -(BA - AB) = -[B, A]. (iii) [[A, B], C] = (AB - BA)C - C(AB - BA) = ABC - BAC - CAB + CBA, [[C, A], B] = (CA - AC)B - B(CA - AC) = CAB - ACB - BCA + BAC, [[B, C], A] = (BC - CB)A - A(BC - CB) = BCA - CBA - ABC + ACB. Summing the three expressions produces O.

1.2.32. (a) (i) 4, (ii) 0. (b) tr(A + B) = sum_{i=1}^{n} (a_ii + b_ii) = sum_{i=1}^{n} a_ii + sum_{i=1}^{n} b_ii = tr A + tr B. (c) The diagonal entries of AB are sum_{j=1}^{n} a_ij b_ji, so tr(AB) = sum_{i=1}^{n} sum_{j=1}^{n} a_ij b_ji; the diagonal entries of BA are sum_{i=1}^{n} b_ji a_ij, so tr(BA) = sum_{j=1}^{n} sum_{i=1}^{n} b_ji a_ij. These double summations are clearly equal. (d) tr C = tr(AB - BA) = tr AB - tr BA = 0, by parts (b) and (c). (e) Yes, by the same proof.

1.2.33. If b = A x, then b_i = a_i1 x_1 + a_i2 x_2 + ... + a_in x_n for each i. On the other hand, c_j = (a_1j, a_2j, ..., a_nj)^T, and so the ith entry of the right-hand side of (1.13) is x_1 a_i1 + x_2 a_i2 + ... + x_n a_in, which agrees with the expression for b_i.

1.2.34. (a) This follows by direct computation. (b) (i) [-2 1; 3 2] [1 -2; 1 0] = [-2; 3](1 -2) + [1; 2](1 0) = [-2 4; 3 -6] + [1 0; 2 0] = [-1 4; 5 -6]. (ii) [1 -2 0; -3 -1 2] [2 5; -3 0; 1 -1] = [1; -3](2 5) + [-2; -1](-3 0) + [0; 2](1 -1) = [2 5; -6 -15] + [6 0; 3 0] + [0 0; 2 -2] = [8 5; -1 -17]. (iii) [3 -1 1; -1 2 1; 1 1 -5] [2 3 0; 3 -1 4; 0 4 1] = [3; -1; 1](2 3 0) + [-1; 2; 1](3 -1 4) + [1; 1; -5](0 4 1) = [6 9 0; -2 -3 0; 2 3 0] + [-3 1 -4; 6 -2 8; 3 -1 4] + [0 4 1; 0 4 1; 0 -20 -5] = [3 14 -3; 4 -1 9; 5 -18 -1]. (c) If we set B = x, where x is an n × 1 matrix, then we obtain (1.14). (d) The (i, j) entry of AB is sum_{k=1}^{n} a_ik b_kj. On the other hand, the (i, j) entry of c_k r_k equals the product of the ith entry of c_k, namely a_ik, with the jth entry of r_k, namely b_kj. Summing these entries a_ik b_kj over k yields the usual matrix product formula.

1.2.35. (a) p(A) = A^3 - 3A + 2 I, q(A) = 2A^2 + I. (b) p(A) = [-2 -8; 4 6], q(A) = [-1 0; 0 -1]. (c) p(A) q(A) = (A^3 - 3A + 2 I)(2A^2 + I) = 2A^5 - 5A^3 + 4A^2 - 3A + 2 I, while p(x) q(x) = 2x^5 - 5x^3 + 4x^2 - 3x + 2. (d) True, since powers of A mutually commute. For the particular matrix from (b), p(A) q(A) = q(A) p(A) = [2 8; -4 -6].

1.2.36. (a) Check that S^2 = A by direct computation. Another example: S = [2 0; 0 2]; or, more generally, 2 times any of the matrices in part (c). (b) S^2 is only defined if S is square. (c) Any of the matrices [±1 0; 0 ±1], or [a b; c -a], where a is arbitrary and b c = 1 - a^2. (d) Yes: for example [0 -1; 1 0].

1.2.37. (a) M has size (i + j) × (k + l). (b) M = [1 1 -1; 3 0 1; 1 1 3; -2 2 0; 1 1 -1]. (c) Since matrix addition is done entry-wise, adding the entries of each block is the same as adding the blocks. (d) X has size k × m, Y has size k × n, Z has size l × m, and W has size l × n. Then A X + B Z will have size i × m. Its (p, q) entry is obtained by multiplying the pth row of M times the qth column of P, which is a_p1 x_1q + ... + a_pk x_kq + b_p1 z_1q + ... + b_pl z_lq, and equals the sum of the (p, q) entries of A X and B Z. A similar argument works for the remaining three blocks. (e) For example, if X = (1), Y = (2 0), Z = [0; 1], W = [0 -1; 1 0], then P = [1 2 0; 0 0 -1; 1 1 0], and so M P = [0 1 -1; 4 7 0; 4 5 -1; -2 -4 -2; 0 1 -1]. The individual block products are [0; 4] = [1; 3](1) + [1 -1; 0 1][0; 1], [4; -2; 0] = [1; -2; 1](1) + [1 3; 2 0; 1 -1][0; 1], [1 -1; 7 0] = [1; 3](2 0) + [1 -1; 0 1][0 -1; 1 0], and [5 -1; -4 -2; 1 -1] = [1; -2; 1](2 0) + [1 3; 2 0; 1 -1][0 -1; 1 0].
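As an added illustration of the column-times-row decomposition in 1.2.34, the following NumPy lines rebuild the product from 1.2.34(b)(i) as a sum of outer products and compare it with the ordinary matrix product.

    import numpy as np

    A = np.array([[-2, 1],
                  [ 3, 2]])
    B = np.array([[ 1, -2],
                  [ 1,  0]])

    # AB as the sum of outer products: (column k of A) times (row k of B)
    outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))

    print(outer_sum)                         # [[-1  4], [ 5 -6]], as in 1.2.34(b)(i)
    print(np.array_equal(outer_sum, A @ B))  # True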
1.3.1.
(a) [1 7 | 4; -2 -9 | 2] --(2 R1 + R2)--> [1 7 | 4; 0 5 | 10]. Back Substitution yields x2 = 2, x1 = -10.
(b) [3 -5 | -1; 2 1 | 8] --(-2/3 R1 + R2)--> [3 -5 | -1; 0 13/3 | 26/3]. Back Substitution yields w = 2, z = 3.
(c) [1 -2 1 | 0; 0 2 -8 | 8; -4 5 9 | -9] --(4 R1 + R3)--> [1 -2 1 | 0; 0 2 -8 | 8; 0 -3 13 | -9] --(3/2 R2 + R3)--> [1 -2 1 | 0; 0 2 -8 | 8; 0 0 1 | 3]. Back Substitution yields z = 3, y = 16, x = 29.
(d) [1 4 -2 | 1; -2 0 -3 | -7; 3 -2 2 | -1] --(2 R1 + R2)--> [1 4 -2 | 1; 0 8 -7 | -5; 3 -2 2 | -1] --(-3 R1 + R3)--> [1 4 -2 | 1; 0 8 -7 | -5; 0 -14 8 | -4] --(7/4 R2 + R3)--> [1 4 -2 | 1; 0 8 -7 | -5; 0 0 -17/4 | -51/4]. Back Substitution yields r = 3, q = 2, p = -1.
(e) [1 0 -2 0 | -1; 0 1 0 -1 | 2; 0 -3 2 0 | 0; -4 0 0 7 | -5] reduces to [1 0 -2 0 | -1; 0 1 0 -1 | 2; 0 0 2 -3 | 6; 0 0 0 -5 | 15]. Solution: x4 = -3, x3 = -3/2, x2 = -1, x1 = -4.
(f) [-1 3 -1 1 | -2; 1 -1 3 -1 | 0; 0 1 -1 4 | 7; 4 -1 1 0 | 5] reduces to [-1 3 -1 1 | -2; 0 2 2 0 | -2; 0 0 -2 4 | 8; 0 0 0 -24 | -48]. Solution: w = 2, z = 0, y = -1, x = 1.

1.3.2.
(a) 3x + 2y = 2, -4x - 3y = -1; solution: x = 4, y = -5.
(b) x + 2y = -3, -x + 2y + z = -6, -2x - 3z = 1; solution: x = 1, y = -2, z = -1.
(c) 3x - y + 2z = -3, -2y - 5z = -1, 6x - 2y + z = -3; solution: x = 2/3, y = 3, z = -1.
(d) 2x - y = 0, -x + 2y - z = 1, -y + 2z - w = 1, -z + 2w = 0; solution: x = 1, y = 2, z = 2, w = 1.
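The reductions recorded in 1.3.1 and 1.3.2 can be automated. Here is a minimal sketch (added here, not the manual's code) of Gaussian Elimination on the augmented matrix for a regular coefficient matrix, followed by Back Substitution, checked against 1.3.1(c).

    import numpy as np

    def gaussian_elimination(A, b):
        # Reduce [A | b] to upper triangular form using only the operation
        # "add a multiple of one row to another" (A is assumed regular,
        # so no row interchanges are needed), then back substitute.
        M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
        n = len(b)
        for j in range(n):
            for i in range(j + 1, n):
                M[i] -= (M[i, j] / M[j, j]) * M[j]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
        return x

    # System from 1.3.1(c)
    A = np.array([[ 1, -2,  1],
                  [ 0,  2, -8],
                  [-4,  5,  9]])
    b = np.array([0, 8, -9])
    print(gaussian_elimination(A, b))   # [29. 16.  3.]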
1.3.3. (a) x = 17/3, y = -4/3; (b) u = 1, v = -1; (c) u = 3/2, v = -1/3, w = 1/6; (d) x1 = 11/3, x2 = -10/3, x3 = -2/3; (e) p = -2/3, q = 19/6, r = 5/2; (f) a = 1/3, b = 0, c = 4/3, d = -2/3; (g) x = 1/3, y = 7/6, z = -8/3, w = 9/2.

1.3.4. Solving 6 = a + b + c, 4 = 4a + 2b + c, 0 = 9a + 3b + c yields a = -1, b = 1, c = 6, so y = -x^2 + x + 6.

1.3.5.
(a) Regular: [2 1; 1 4] -> [2 1; 0 7/2].
(b) Not regular.
(c) Regular: [3 -2 1; -1 4 -3; 3 -2 5] -> [3 -2 1; 0 10/3 -8/3; 0 0 4].
(d) Not regular: [1 -2 3; -2 4 -1; 3 -1 2] -> [1 -2 3; 0 0 5; 0 5 -7].
(e) Regular: [1 3 -3 0; -1 0 -1 2; 3 3 -6 1; 2 3 -3 5] -> [1 3 -3 0; 0 3 -4 2; 0 -6 3 1; 0 -3 3 5] -> [1 3 -3 0; 0 3 -4 2; 0 0 -5 5; 0 0 -1 7] -> [1 3 -3 0; 0 3 -4 2; 0 0 -5 5; 0 0 0 6].

1.3.6.
(a) [-i 1+i | -1; 1-i 1 | -3i] -> [-i 1+i | -1; 0 1-2i | 1-2i]; use Back Substitution to obtain the solution y = 1, x = 1 - 2i.
(b) [i 0 1-i | 2i; 0 2i 1+i | 2; -1 2i i | 1-2i] -> [i 0 1-i | 2i; 0 2i 1+i | 2; 0 0 -2-i | 1-2i]; solution: z = i, y = -1/2 - (3/2)i, x = 1 + i.
(c) [1-i 2 | i; -i 1+i | -1] -> [1-i 2 | i; 0 2i | -3/2 - (1/2)i]; solution: y = -1/4 + (3/4)i, x = 1/2.
(d) [1+i i 2+2i | 0; 1-i 2i 3-3i | 0; i 3 -11i | 6] -> [1+i i 2+2i | 0; 0 1 -2+3i | 0; 0 0 -6+6i | 6]; solution: z = -1/2 - (1/2)i, y = -5/2 + (1/2)i, x = 5/2 + 2i.

1.3.7. (a) 2x = 3, -y = 4, 3z = 1, u = 6, 8v = -24. (b) x = 3/2, y = -4, z = 1/3, u = 6, v = -3. (c) You only have to divide by each coefficient to find the solution.

1.3.8. 0 is the (unique) solution, since A 0 = 0.

1.3.9. Back Substitution:
start
  set x_n = c_n / u_nn
  for i = n - 1 to 1 with increment -1
    set x_i = (1 / u_ii) ( c_i - sum_{j = i+1}^{n} u_ij x_j )
  next i
end

1.3.10. Since [a11 a12; 0 a22][b11 b12; 0 b22] = [a11 b11, a11 b12 + a12 b22; 0, a22 b22] and [b11 b12; 0 b22][a11 a12; 0 a22] = [a11 b11, a22 b12 + a12 b11; 0, a22 b22], the matrices commute if and only if a11 b12 + a12 b22 = a22 b12 + a12 b11, or (a11 - a22) b12 = a12 (b11 - b22).

1.3.11. Clearly, any diagonal matrix is both lower and upper triangular. Conversely, A being lower triangular requires that a_ij = 0 for i < j; A upper triangular requires that a_ij = 0 for i > j. If A is both lower and upper triangular, a_ij = 0 for all i ≠ j, which implies A is a diagonal matrix.

1.3.12. (a) Set l_ij = a_ij for i > j and l_ij = 0 for i ≤ j; u_ij = a_ij for i < j and u_ij = 0 for i ≥ j; d_ij = a_ij for i = j and d_ij = 0 for i ≠ j. (b) L = [0 0 0; 1 0 0; -2 0 0], D = [3 0 0; 0 -4 0; 0 0 5], U = [0 1 -1; 0 0 2; 0 0 0].

1.3.13. (a) By direct computation, A^2 = [0 0 1; 0 0 0; 0 0 0], and so A^3 = O. (b) Let A have size n × n. By assumption, a_ij = 0 whenever i > j - 1. By induction, one proves that the (i, j) entries of A^k are all zero whenever i > j - k. Indeed, to compute the (i, j) entry of A^(k+1) = A A^k you multiply the ith row of A, whose first i entries are 0, by the jth column of A^k, whose only possibly nonzero entries lie among its first j - k, all the rest being zero, according to the induction hypothesis; therefore, if i > j - k - 1, every term in the sum producing this entry is 0, and the induction is complete. In particular, for k = n, every entry of A^n is zero, and so A^n = O. (c) The matrix A = [1 1; -1 -1] has A^2 = O.
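To see 1.3.13(b) numerically, the sketch below (an illustration added here; the random test matrix is an arbitrary choice) builds a strictly upper triangular 5 × 5 matrix and watches its powers lose nonzero entries until A^5 = O.

    import numpy as np

    n = 5
    rng = np.random.default_rng(0)
    # Strictly upper triangular: a_ij = 0 whenever i >= j
    A = np.triu(rng.integers(1, 10, (n, n)), k=1)

    # Each power pushes the nonzero band further above the diagonal,
    # exactly as in the induction argument of 1.3.13(b).
    for k in range(1, n + 1):
        print(k, np.count_nonzero(np.linalg.matrix_power(A, k)))
    print(np.array_equal(np.linalg.matrix_power(A, n), np.zeros((n, n))))   # True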
1.3.14. (a) Add -2 times the second row to the first row of a 2 × n matrix. (b) Add 7 times the first row to the second row of a 2 × n matrix. (c) Add -5 times the third row to the second row of a 3 × n matrix. (d) Add 1/2 times the first row to the third row of a 3 × n matrix. (e) Add -3 times the fourth row to the second row of a 4 × n matrix.

1.3.15. (a) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 1 1], (b) [1 0 0 0; 0 1 0 0; 0 0 1 -1; 0 0 0 1], (c) [1 0 0 3; 0 1 0 0; 0 0 1 0; 0 0 0 1], (d) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 -2 0 1].

1.3.16. L3 L2 L1 = [1 0 0; 2 1 0; 0 -1/2 1] ≠ L1 L2 L3.

1.3.17. E3 E2 E1 = [1 0 0; -2 1 0; -2 1/2 1], E1 E2 E3 = [1 0 0; -2 1 0; -1 1/2 1]. The second is easier to predict, since its entries are the same as the corresponding entries of the Ei.

1.3.18. (a) Suppose that E adds c ≠ 0 times row i to row j ≠ i, while E~ adds d ≠ 0 times row k to row l ≠ k. If r1, ..., rn are the rows, then the effect of E~ E is to replace (i) rj by rj + c ri + d rk when j = l; (ii) rj by rj + c ri and rl by rl + (c d) ri + d rj when j = k; (iii) rj by rj + c ri and rl by rl + d rk otherwise. On the other hand, the effect of E E~ is to replace (i) rj by rj + c ri + d rk when j = l; (ii) rj by rj + c ri + (c d) rk and rl by rl + d rk when i = l; (iii) rj by rj + c ri and rl by rl + d rk otherwise. Comparing the results, we see that E E~ = E~ E whenever i ≠ l and j ≠ k. (b) E1 E2 = E2 E1, E1 E3 ≠ E3 E1, and E3 E2 = E2 E3. (c) See the answer to part (a).

1.3.19. (a) Upper triangular; (b) both special upper and special lower triangular; (c) lower triangular; (d) special lower triangular; (e) none of the above.

1.3.20. (a) a_ij = 0 for all i ≠ j; (b) a_ij = 0 for all i > j; (c) a_ij = 0 for all i > j, and a_ii = 1 for all i; (d) a_ij = 0 for all i < j; (e) a_ij = 0 for all i < j, and a_ii = 1 for all i.

1.3.21. (a) Consider the product L M of two lower triangular n × n matrices. The last n - i entries in the ith row of L are zero, while the first j - 1 entries in the jth column of M are zero. So if i < j, each summand in the product of the ith row times the jth column is zero, and so all entries above the diagonal in L M are zero. (b) The ith diagonal entry of L M is the product of the ith diagonal entry of L times the ith diagonal entry of M. (c) Special matrices have all 1's on the diagonal, and so, by part (b), does their product.

1.3.22.
(a) L = [1 0; -1 1], U = [1 3; 0 3];
(b) L = [1 0; 3 1], U = [1 3; 0 -8];
(c) L = [1 0 0; -1 1 0; 1 0 1], U = [-1 1 -1; 0 2 0; 0 0 3];
(d) L = [1 0 0; 1/2 1 0; 0 1/3 1], U = [2 0 3; 0 3 -1/2; 0 0 7/6];
(e) L = [1 0 0; -2 1 0; -1 -1 1], U = [-1 0 0; 0 -3 0; 0 0 2];
(f) L = [1 0 0; 2 1 0; -3 1/3 1], U = [1 0 -1; 0 3 4; 0 0 -13/3];
(g) L = [1 0 0 0; 0 1 0 0; -1 3/2 1 0; 0 -1/2 3 1], U = [1 0 -1 0; 0 2 -1 -1; 0 0 1/2 7/2; 0 0 0 -10];
(h) L = [1 0 0 0; -1 1 0 0; -2 1 1 0; 3 -1 -2 1], U = [1 1 -2 3; 0 3 1 3; 0 0 -4 1; 0 0 0 1];
(i) L = [1 0 0 0; 1/2 1 0 0; 3/2 -3/7 1 0; 1/2 1/7 -5/22 1], U = [2 1 3 1; 0 7/2 -3/2 1/2; 0 0 -22/7 5/7; 0 0 0 35/22].
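The factorizations in 1.3.22 can be reproduced by elimination. Below is a minimal sketch (added here as an illustration; it assumes the matrix is regular, so no row interchanges are needed), applied to the matrix obtained by multiplying out the factors listed in 1.3.22(c).

    import numpy as np

    def lu_no_pivoting(A):
        # A = L U with L special lower triangular and U upper triangular,
        # computed by recording the multipliers of Gaussian Elimination.
        A = A.astype(float)
        n = A.shape[0]
        L = np.eye(n)
        U = A.copy()
        for j in range(n):
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]
                U[i] -= L[i, j] * U[j]
        return L, U

    # The matrix whose factors are listed in 1.3.22(c), reconstructed as L U
    A = np.array([[-1, 1, -1],
                  [ 1, 1,  1],
                  [-1, 1,  2]])
    L, U = lu_no_pivoting(A)
    print(L)   # [[ 1  0  0], [-1  1  0], [ 1  0  1]]
    print(U)   # [[-1  1 -1], [ 0  2  0], [ 0  0  3]]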
1.3.23. (a) Add 3 times the first row to the second row. (b) Add -2 times the first row to the third row. (c) Add 4 times the second row to the third row.

1.3.24. (a) [1 0 0 0; 2 1 0 0; 3 4 1 0; 5 6 7 1]. (b) (1) Add -2 times the first row to the second row. (2) Add -3 times the first row to the third row. (3) Add -5 times the first row to the fourth row. (4) Add -4 times the second row to the third row. (5) Add -6 times the second row to the fourth row. (6) Add -7 times the third row to the fourth row. (c) Use the order given in part (b).

1.3.25. See equation (4.51) for the general case.
[1 1; t1 t2] = [1 0; t1 1] [1 1; 0 t2 - t1],
[1 1 1; t1 t2 t3; t1^2 t2^2 t3^2] = [1 0 0; t1 1 0; t1^2 t1 + t2 1] [1 1 1; 0 t2 - t1 t3 - t1; 0 0 (t3 - t1)(t3 - t2)],
[1 1 1 1; t1 t2 t3 t4; t1^2 t2^2 t3^2 t4^2; t1^3 t2^3 t3^3 t4^3] = [1 0 0 0; t1 1 0 0; t1^2 t1 + t2 1 0; t1^3 t1^2 + t1 t2 + t2^2 t1 + t2 + t3 1] [1 1 1 1; 0 t2 - t1 t3 - t1 t4 - t1; 0 0 (t3 - t1)(t3 - t2) (t4 - t1)(t4 - t2); 0 0 0 (t4 - t1)(t4 - t2)(t4 - t3)].

1.3.26. False. For instance, [1 1; 1 0] is regular. Only if the zero appears in the (1, 1) position does it automatically preclude regularity of the matrix.

1.3.27. (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2.

1.3.28. We solve the equation [1 0; l 1][u1 u2; 0 u3] = [a b; c d] for u1, u2, u3, l, where a ≠ 0 since A = [a b; c d] is regular. This matrix equation has a unique solution: u1 = a, u2 = b, u3 = d - b c / a, l = c / a.

1.3.29. The matrix factorization A = L U is [0 1; 1 0] = [1 0; a 1][x y; 0 z] = [x y; ax ay + z]. This implies x = 0 and a x = 1, which is impossible.

1.3.30. (a) Let u11, ..., unn be the pivots of A, i.e., the diagonal entries of U. Let D be the diagonal matrix whose diagonal entries are d_ii = sign u_ii. Then B = A D is the matrix obtained by multiplying each column of A by the sign of its pivot. Moreover, B = L U D = L U~, where U~ = U D, is the LU factorization of B. Each column of U~ is obtained by multiplying the corresponding column of U by the sign of its pivot. In particular, the diagonal entries of U~, which are the pivots of B, are u_ii sign u_ii = | u_ii | > 0. (b) Using the same notation as in part (a), we note that C = D A is the matrix obtained by multiplying each row of A by the sign of its pivot. Moreover, C = D L U. However, D L is not special lower triangular, since its diagonal entries are the pivot signs. But L^ = D L D is special lower triangular, and so C = D L D D U = L^ U^, where U^ = D U, is the LU factorization of C. Each row of U^ is obtained by multiplying the corresponding row of U by the sign of its pivot. In particular, the diagonal entries of U^, which are the pivots of C, are u_ii sign u_ii = | u_ii | > 0. (c) [-2 2 1; 1 0 1; 4 2 3] = [1 0 0; -1/2 1 0; -2 6 1] [-2 2 1; 0 1 3/2; 0 0 -4], [2 2 -1; -1 0 -1; -4 2 -3] = [1 0 0; -1/2 1 0; -2 6 1] [2 2 -1; 0 1 -3/2; 0 0 4], [2 -2 -1; 1 0 1; -4 -2 -3] = [1 0 0; 1/2 1 0; -2 -6 1] [2 -2 -1; 0 1 3/2; 0 0 4].

1.3.31. (a) x = [-1; 2/3], (b) x = [1/4; 1/4], (c) x = [0; 1; 0], (d) x = [-4/7; 2/7; 5/7], (e) x = [-1; -1; 5/2], (f) x = [0; 1; -1], (g) x = [2; 1; 1; 0], (h) x = [-37/12; -17/12; 1/4; 2], (i) x = [3/35; 6/35; 1/7; 8/35].

1.3.32.
(a) L = [1 0; -3 1], U = [-1 3; 0 11]; x1 = [-5/11; 2/11], x2 = [1; 1], x3 = [9/11; 3/11].
(b) L = [1 0 0; -1 1 0; 1 0 1], U = [-1 1 -1; 0 2 0; 0 0 3]; x1 = [-1; 0; 0], x2 = [-1/6; -3/2; 5/3].
(c) L = [1 0 0; -2/3 1 0; 2/9 5/3 1], U = [9 -2 -1; 0 -1/3 1/3; 0 0 -1/3]; x1 = [1; 2; 3], x2 = [-2; -9; -1].
(d) L = [1 0 0; .15 1 0; .2 1.2394 1], U = [2.0 .3 .4; 0 .355 4.94; 0 0 -.2028]; x1 = [.6944; -1.3889; .0694], x2 = [1.1111; -82.2222; 6.1111], x3 = [-9.3056; 68.6111; -4.9306].
(e) L = [1 0 0 0; 0 1 0 0; -1 3/2 1 0; 0 -1/2 -1 1], U = [1 0 -1 0; 0 2 3 -1; 0 0 -7/2 7/2; 0 0 0 4]; x1 = [5/4; ...], x2 = [1/14; -5/14; 1/14; 1/2].
(f) L = [1 0 0 0; 4 1 0 0; -8 -17/9 1 0; -4 -1 0 1], U = [1 -2 0 2; 0 9 -1 -9; 0 0 1/9 0; 0 0 0 1]; x1 = [1; 0; 4; 0], x2 = [1; 1; 3; 2], x3 = [10; 8; 41; 4].
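Once A = L U is known, the systems in 1.3.31 and 1.3.32 are solved by a forward substitution followed by a back substitution. The sketch below (an added illustration) uses the factors from 1.3.32(c); since the manual's right-hand side is not reproduced in this preview, b is reconstructed as L U x1, so the solver should recover the reported solution x1 = (1, 2, 3).

    import numpy as np

    def forward_substitution(L, b):
        # Solve L c = b for a lower triangular L with unit diagonal.
        n = len(b)
        c = np.zeros(n)
        for i in range(n):
            c[i] = b[i] - L[i, :i] @ c[:i]
        return c

    def back_substitution(U, c):
        # Solve U x = c for an upper triangular U with nonzero diagonal.
        n = len(c)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    # Factors from 1.3.32(c)
    L = np.array([[1.0,  0.0, 0.0],
                  [-2/3, 1.0, 0.0],
                  [ 2/9, 5/3, 1.0]])
    U = np.array([[9.0, -2.0, -1.0],
                  [0.0, -1/3,  1/3],
                  [0.0,  0.0, -1/3]])

    x1 = np.array([1.0, 2.0, 3.0])        # solution reported in the manual
    b = L @ U @ x1                         # right-hand side consistent with it

    c = forward_substitution(L, b)         # solve L c = b
    x = back_substitution(U, c)            # then solve U x = c
    print(np.allclose(x, x1))              # True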
1.4.1. The nonsingular matrices are (a), (c), (d), (h).

1.4.2. (a) Regular and nonsingular, (b) singular, (c) nonsingular, (d) regular and nonsingular.

1.4.3. (a) x1 = -5/3, x2 = -10/3, x3 = 5; (b) x1 = 0, x2 = -1, x3 = 2; (c) x1 = -6, x2 = 2, x3 = -2; (d) x = -13/2, y = -9/2, z = -1, w = -3; (e) x1 = -11, x2 = -10/3, x3 = -5, x4 = -7.

1.4.4. Solve the equations -1 = 2b + c, 3 = -2a + 4b + c, -3 = 2a - b + c for a = -4, b = -2, c = 3, giving the plane z = -4x - 2y + 3.

1.4.5. (a) Suppose A is nonsingular. If a ≠ 0 and c ≠ 0, then we subtract c/a times the first row from the second, producing the (2, 2) pivot entry (a d - b c)/a ≠ 0. If c = 0, then the pivot entry is d, and so a d - b c = a d ≠ 0. If a = 0, then c ≠ 0, as otherwise the first column would not contain a pivot. Interchanging the two rows gives the pivots c and b, and so a d - b c = -b c ≠ 0. (b) Regularity requires a ≠ 0. Proceeding as in part (a), we conclude that a d - b c ≠ 0 also.

1.4.6. True. All regular matrices are nonsingular.

1.4.7. Since A is nonsingular, we can reduce it to upper triangular form with nonzero diagonal entries (by applying the operations # 1 and # 2). The rest of the argument is the same as in Exercise 1.3.8.

1.4.8. By applying the operations # 1 and # 2 to the system A x = b, we obtain an equivalent upper triangular system U x = c. Since A is nonsingular, u_ii ≠ 0 for all i, so by Back Substitution each solution component, namely x_n = c_n / u_nn and x_i = (1 / u_ii)( c_i - sum_{k = i+1}^{n} u_ik x_k ) for i = n - 1, n - 2, ..., 1, is uniquely defined.

1.4.9. (a) P1 = [1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0], (b) P2 = [0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0], (c) No, they do not commute. (d) P1 P2 arranges the rows in the order 4, 1, 3, 2, while P2 P1 arranges them in the order 2, 4, 3, 1.

1.4.10. (a) [0 1 0; 0 0 1; 1 0 0], (b) [0 0 0 1; 0 0 1 0; 1 0 0 0; 0 1 0 0], (c) [0 1 0 0; 1 0 0 0; 0 0 0 1; 0 0 1 0], (d) [0 0 0 1 0; 1 0 0 0 0; 0 0 1 0 0; 0 1 0 0 0; 0 0 0 0 1].

1.4.11. The (i, j) entry of the following multiplication table indicates the product Pi Pj, where
P1 = [1 0 0; 0 1 0; 0 0 1], P2 = [0 1 0; 0 0 1; 1 0 0], P3 = [0 0 1; 1 0 0; 0 1 0],
P4 = [0 1 0; 1 0 0; 0 0 1], P5 = [0 0 1; 0 1 0; 1 0 0], P6 = [1 0 0; 0 0 1; 0 1 0].
The commutative pairs are P1 Pi = Pi P1, i = 1, ..., 6, and P2 P3 = P3 P2.

        P1  P2  P3  P4  P5  P6
    P1  P1  P2  P3  P4  P5  P6
    P2  P2  P3  P1  P6  P4  P5
    P3  P3  P1  P2  P5  P6  P4
    P4  P4  P5  P6  P1  P2  P3
    P5  P5  P6  P4  P3  P1  P2
    P6  P6  P4  P5  P2  P3  P1

1.4.12.
(a) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], [0 1 0 0; 0 0 0 1; 0 0 1 0; 1 0 0 0], [0 0 0 1; 1 0 0 0; 0 0 1 0; 0 1 0 0], [0 1 0 0; 1 0 0 0; 0 0 1 0; 0 0 0 1], [0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0], [1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0];
(b) [0 1 0 0; 1 0 0 0; 0 0 0 1; 0 0 1 0], [1 0 0 0; 0 0 0 1; 0 1 0 0; 0 0 1 0], [0 0 0 1; 0 1 0 0; 1 0 0 0; 0 0 1 0], [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0], [0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0], [0 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 0];
(c) [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1], [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0].

1.4.13. (a) True, since interchanging the same pair of rows twice brings you back to where you started. (b) False; an example is the non-elementary permutation matrix [0 0 1; 1 0 0; 0 1 0]. (c) False; for example, P = [-1 0; 0 -1] is not a permutation matrix. For a complete list of such matrices, see Exercise 1.2.36.

1.4.14. (a) Only when all the entries of v are different; (b) only when all the rows of A are different.

1.4.15. (a) [1 0 0; 0 0 1; 0 1 0]. (b) True. (c) False: A P permutes the columns of A according to the inverse (or transpose) permutation matrix P^(-1) = P^T.

1.4.16. (a) If P has a 1 in position (π(j), j), then it moves row j of A to row π(j) of P A, which is enough to establish the correspondence. (b) (i) [0 1 0; 1 0 0; 0 0 1], (ii) [0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0], (iii) [1 0 0 0; 0 0 1 0; 0 0 0 1; 0 1 0 0], (iv) [0 0 0 0 1; 0 0 0 1 0; 0 0 1 0 0; 0 1 0 0 0; 1 0 0 0 0]. Cases (i) and (ii) are elementary matrices. (c) In two-row permutation notation: (i) (1 2 3 ; 2 3 1), (ii) (1 2 3 4 ; 3 4 1 2), (iii) (1 2 3 4 ; 4 1 2 3), (iv) (1 2 3 4 5 ; 2 5 3 1 4).

1.4.17. The first row of an n × n permutation matrix can have its 1 in any of the n positions, so there are n possibilities for the first row. Once the first row is set, the second row can have its 1 anywhere except in the column under the 1 in the first row, and so there are n - 1 possibilities. The 1 in the third row can be in any of the n - 2 positions not under either of the previous two 1's. And so on, leading to a total of n (n - 1)(n - 2) ... 2 · 1 = n! possible permutation matrices.

1.4.18. Let ri, rj denote the rows of the matrix in question. After the first elementary row operation, the rows are ri and rj + ri. After the second, they are ri - (rj + ri) = -rj and rj + ri. After the third operation, we are left with -rj and rj + ri + (-rj) = ri.
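A short sketch (added here, not from the manual) of the convention in 1.4.16(a), building a permutation matrix with a 1 in position (π(j), j), together with the count n! from 1.4.17.

    import numpy as np
    from itertools import permutations

    def permutation_matrix(pi):
        # P has a 1 in position (pi[j], j), so P A carries row j of A to row pi[j].
        n = len(pi)
        P = np.zeros((n, n), dtype=int)
        for j, i in enumerate(pi):
            P[i, j] = 1
        return P

    A = np.arange(9).reshape(3, 3)
    pi = [1, 2, 0]                 # 0-based: row 0 -> row 1, row 1 -> row 2, row 2 -> row 0
    P = permutation_matrix(pi)
    print(P @ A)                   # rows of A rearranged accordingly

    # 1.4.17: there are n! permutation matrices of size n; for n = 4 that is 24
    count = len({permutation_matrix(p).tobytes() for p in permutations(range(4))})
    print(count)                   # 24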
1.4.19.
(a) [0 1; 1 0] [0 1; 2 -1] = [1 0; 0 1] [2 -1; 0 1], x = [5/2; 3];
(b) [0 1 0; 0 0 1; 1 0 0] [0 0 -4; 1 2 3; 0 1 7] = [1 0 0; 0 1 0; 0 0 1] [1 2 3; 0 1 7; 0 0 -4], x = [5/4; 3/4; -1/4];
(c) [0 0 1; 1 0 0; 0 1 0] [0 1 -3; 0 2 3; 1 0 2] = [1 0 0; 0 1 0; 0 2 1] [1 0 2; 0 1 -3; 0 0 9], x = [-1; 1; 0];
(d) [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1] [1 2 -1 0; 3 6 2 -1; 1 1 -7 2; 1 -1 2 1] = [1 0 0 0; 1 1 0 0; 3 0 1 0; 1 3 21/5 1] [1 2 -1 0; 0 -1 -6 2; 0 0 5 -1; 0 0 0 -4/5], x = [22; -13; -5; -22];
(e) [0 0 1 0; 1 0 0 0; 0 1 0 0; 0 0 0 1] [0 1 0 0; 2 3 1 0; 1 4 -1 2; 7 -1 2 3] = [1 0 0 0; 0 1 0 0; 2 -5 1 0; 7 -29 3 1] [1 4 -1 2; 0 1 0 0; 0 0 3 -4; 0 0 0 1], x = [-1; -1; 1; 3];
(f) [0 0 1 0 0; 0 1 0 0 0; 0 0 0 1 0; 1 0 0 0 0; 0 0 0 0 1] [0 0 2 3 4; 0 1 -7 2 3; 1 4 1 1 1; 0 0 1 0 2; 0 0 1 7 3] = [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 2 1 0; 0 0 1 7/3 1] [1 4 1 1 1; 0 1 -7 2 3; 0 0 1 0 2; 0 0 0 3 0; 0 0 0 0 1], x = [1; 0; 0; -1; 0].

1.4.20.
(a) [1 0 0; 0 0 1; 0 1 0] [4 -4 2; -3 3 1; -3 1 -2] = [1 0 0; -3/4 1 0; -3/4 0 1] [4 -4 2; 0 -2 -1/2; 0 0 5/2]; solution: x1 = 5/4, x2 = 7/4, x3 = 3/2.
(b) [0 0 1 0; 0 1 0 0; 1 0 0 0; 0 0 0 1] [0 1 -1 1; 0 1 1 0; 1 -1 1 -3; 1 2 -1 1] = [1 0 0 0; 0 1 0 0; 0 1 1 0; 1 3 5/2 1] [1 -1 1 -3; 0 1 1 0; 0 0 -2 1; 0 0 0 3/2]; solution: x = 4, y = 0, z = 1, w = 1.
(c) [1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0] [1 -1 2 1; -1 1 -3 0; 1 -1 1 -3; 1 2 -1 1] = [1 0 0 0; 1 1 0 0; 1 0 1 0; -1 0 -1/2 1] [1 -1 2 1; 0 3 -3 0; 0 0 2 -4; 0 0 0 1]; solution: x = 19/3, y = -5/3, z = -3, w = -2.

1.4.21. (a) They are all of the form P A = L U, where P is a permutation matrix. In the first case, we interchange rows 1 and 2; in the second case, we interchange rows 1 and 3; in the third case, we interchange rows 1 and 3 first, and then interchange rows 2 and 3. (b) The same solution x = 1, y = 1, z = -2 in all cases. Each is obtained by a sequence of elementary row operations, which do not change the solution.

1.4.22. There are four in all:
[0 1 0; 1 0 0; 0 0 1] [0 1 2; 1 0 -1; 1 1 3] = [1 0 0; 0 1 0; 1 1 1] [1 0 -1; 0 1 2; 0 0 2],
[0 1 0; 0 0 1; 1 0 0] [0 1 2; 1 0 -1; 1 1 3] = [1 0 0; 1 1 0; 0 1 1] [1 0 -1; 0 1 4; 0 0 -2],
[0 0 1; 0 1 0; 1 0 0] [0 1 2; 1 0 -1; 1 1 3] = [1 0 0; 1 1 0; 0 - ...]
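Finally, permuted factorizations P A = L U like those of 1.4.19-1.4.22 can be obtained from a library routine. The sketch below (an added illustration) uses scipy.linalg.lu, which follows the convention A = P L U, so the permutation matrix in the manual's convention is the transpose of the one SciPy returns; applied to the matrix of 1.4.22 it should produce one of the four factorizations listed above (with standard partial pivoting, the first one).

    import numpy as np
    from scipy.linalg import lu

    # The matrix from 1.4.22; its (1, 1) entry is zero, so a row interchange is needed
    A = np.array([[0.0, 1.0,  2.0],
                  [1.0, 0.0, -1.0],
                  [1.0, 1.0,  3.0]])

    p, L, U = lu(A)           # SciPy's convention: A = p @ L @ U
    P = p.T                    # so that P A = L U, as written in the manual

    print(P)
    print(L)
    print(U)
    print(np.allclose(P @ A, L @ U))   # True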

Preguntas frecuentes