
Exam (elaborations) TEST BANK FOR An Introduction to Optimization 4th edition By Edwin K. P. Chong, Stanislaw H. Zak (solution manual)

Rating
-
Sold
-
Pages
221
Grade
A+
Uploaded on
15-11-2021
Written in
2021/2022

AN INTRODUCTION TO OPTIMIZATION — SOLUTIONS MANUAL, Fourth Edition. Edwin K. P. Chong and Stanislaw H. Żak. A JOHN WILEY & SONS, INC., PUBLICATION.

1. Methods of Proof and Some Notation

1.1
A B | not A | not B | A ⇒ B | (not B) ⇒ (not A)
F F |   T   |   T   |   T   |   T
F T |   T   |   F   |   T   |   T
T F |   F   |   T   |   F   |   F
T T |   F   |   F   |   T   |   T

1.2
A B | not A | not B | A ⇒ B | not (A and (not B))
F F |   T   |   T   |   T   |   T
F T |   T   |   F   |   T   |   T
T F |   F   |   T   |   F   |   F
T T |   F   |   F   |   T   |   T

1.3
A B | not (A and B) | not A | not B | (not A) or (not B)
F F |       T       |   T   |   T   |   T
F T |       T       |   T   |   F   |   T
T F |       T       |   F   |   T   |   T
T T |       F       |   F   |   F   |   F

1.4
A B | A and B | A and (not B) | (A and B) or (A and (not B))
F F |    F    |       F       |   F
F T |    F    |       F       |   F
T F |    F    |       T       |   T
T T |    T    |       F       |   T

1.5
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is not relevant because the rule does not say that if a card has an even number on one side, then it has a vowel on the other side. Turning over the A card directly verifies the rule, while turning over the 3 card verifies the contrapositive.

2. Vector Spaces and Matrices

2.1
We show this by contradiction. Suppose n < m. Then the number of columns of A is n. Since rank A is the maximum number of linearly independent columns of A, rank A cannot be greater than n < m, which contradicts the assumption that rank A = m.

2.2
⇒: Since there exists a solution, by Theorem 2.1 we have rank A = rank[A b]. It remains to prove that rank A = n. Suppose that rank A < n (it is impossible for rank A > n, since A has only n columns). Hence, there exists y ∈ ℝⁿ, y ≠ 0, such that Ay = 0 (the columns of A are linearly dependent, and Ay is a linear combination of the columns of A). Let x be a solution to Ax = b. Then clearly x + y ≠ x is also a solution, which contradicts the uniqueness of the solution. Hence, rank A = n.
⇐: By Theorem 2.1, a solution exists. It remains to prove that it is unique. Let x and y be solutions, i.e., Ax = b and Ay = b. Subtracting, we get A(x − y) = 0. Since rank A = n and A has n columns, x − y = 0 and hence x = y, which shows that the solution is unique.

2.3
Consider the vectors āi = [1, ai⊤]⊤ ∈ ℝⁿ⁺¹, i = 1, …, k. Since k ≥ n + 2, the vectors ā1, …, āk must be linearly dependent in ℝⁿ⁺¹. Hence, there exist α1, …, αk, not all zero, such that
\[ \sum_{i=1}^{k} \alpha_i \bar{a}_i = 0. \]
The first component of the above vector equation is α1 + · · · + αk = 0, while the last n components have the form α1a1 + · · · + αkak = 0, completing the proof.
2.4
a. We first postmultiply M by the matrix
\[ \begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix} \]
to obtain
\[ \begin{bmatrix} M_{m-k,k} & I_{m-k} \\ M_{k,k} & O \end{bmatrix}\begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix} = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}. \]
Note that the determinant of the postmultiplying matrix is 1. Next we postmultiply the resulting product by
\[ \begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} \]
to obtain
\[ \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} = \begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}. \]
Notice that
\[ \det M = \det\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}\,\det\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}, \qquad \text{where} \quad \det\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} = \pm 1. \]
The above easily follows from the fact that the determinant changes its sign if we interchange columns, as discussed in Section 2.2. Moreover,
\[ \det\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix} = \det(I_{m-k})\det(M_{k,k}) = \det(M_{k,k}). \]
Hence, det M = ± det M_{k,k}.
b. We can see this in the following examples. We assume, without loss of generality, that M_{m−k,k} = O and let M_{k,k} = 2. Thus k = 1. First consider the case when m = 2. Then we have
\[ M = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix}. \]
Thus, det M = −2 = det(−M_{k,k}). Next consider the case when m = 3. Then
\[ \det\begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix} = \det\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 0 & 0 \end{bmatrix} = 2 \neq \det(-M_{k,k}). \]
Therefore, in general, det M ≠ det(−M_{k,k}). However, when k = m/2, that is, when all sub-matrices are square and of the same dimension, it is true that det M = det(−M_{k,k}); see [121].

2.5
Let
\[ M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \]
and suppose that each block is k × k. John R. Silvester [121] showed that if at least one of the blocks is equal to O (the zero matrix), then the desired formula holds. Indeed, if a row or column block is zero, then the determinant is equal to zero, as follows from the determinant's properties discussed in Section 2.2. That is, if A = B = O, or A = C = O, and so on, then obviously det M = 0. This includes the case when any three or all four blocks are zero matrices. If B = O or C = O, then
\[ \det M = \det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD). \]
The only case left to analyze is when A = O or D = O. We will show that in either case det M = det(−BC). Without loss of generality suppose that D = O. Following the arguments of John R. Silvester [121], we premultiply M by the product of three matrices whose determinants are unity:
\[ \begin{bmatrix} I_k & -I_k \\ O & I_k \end{bmatrix}\begin{bmatrix} I_k & O \\ I_k & I_k \end{bmatrix}\begin{bmatrix} I_k & -I_k \\ O & I_k \end{bmatrix}\begin{bmatrix} A & B \\ C & O \end{bmatrix} = \begin{bmatrix} -C & O \\ A & B \end{bmatrix}. \]
Hence,
\[ \det\begin{bmatrix} A & B \\ C & O \end{bmatrix} = \det\begin{bmatrix} -C & O \\ A & B \end{bmatrix} = \det(-C)\det(B) = \det(-I_k)\det(C)\det(B). \]
Thus we have
\[ \det\begin{bmatrix} A & B \\ C & O \end{bmatrix} = \det(-BC) = \det(-CB). \]

2.6
We represent the given system of equations in the form Ax = b, where
\[ A = \begin{bmatrix} 1 & 1 & 2 & 1 \\ 1 & -2 & 0 & -1 \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}, \qquad b = \begin{bmatrix} 1 \\ -2 \end{bmatrix}. \]
Using elementary row operations yields
\[ A = \begin{bmatrix} 1 & 1 & 2 & 1 \\ 1 & -2 & 0 & -1 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & 2 & 1 \\ 0 & -3 & -2 & -2 \end{bmatrix} \]
and
\[ [A, b] = \begin{bmatrix} 1 & 1 & 2 & 1 & 1 \\ 1 & -2 & 0 & -1 & -2 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & 2 & 1 & 1 \\ 0 & -3 & -2 & -2 & -3 \end{bmatrix}, \]
from which rank A = 2 and rank[A, b] = 2. Therefore, by Theorem 2.1, the system has a solution. We next represent the system of equations as
\[ \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 - 2x_3 - x_4 \\ -2 + x_4 \end{bmatrix}. \]
Assigning arbitrary values to x3 and x4 (x3 = d3, x4 = d4), we get
\[ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}^{-1}\begin{bmatrix} 1 - 2d_3 - d_4 \\ -2 + d_4 \end{bmatrix} = -\frac{1}{3}\begin{bmatrix} -2 & -1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} 1 - 2d_3 - d_4 \\ -2 + d_4 \end{bmatrix} = \begin{bmatrix} -\tfrac{4}{3}d_3 - \tfrac{1}{3}d_4 \\ 1 - \tfrac{2}{3}d_3 - \tfrac{2}{3}d_4 \end{bmatrix}. \]
Therefore, a general solution is
\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -\tfrac{4}{3}d_3 - \tfrac{1}{3}d_4 \\ 1 - \tfrac{2}{3}d_3 - \tfrac{2}{3}d_4 \\ d_3 \\ d_4 \end{bmatrix} = \begin{bmatrix} -\tfrac{4}{3} \\ -\tfrac{2}{3} \\ 1 \\ 0 \end{bmatrix}d_3 + \begin{bmatrix} -\tfrac{1}{3} \\ -\tfrac{2}{3} \\ 0 \\ 1 \end{bmatrix}d_4 + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \]
where d3 and d4 are arbitrary values.
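The general solution obtained in 2.6 can be spot-checked numerically; the short script below is an illustrative check, not part of the manual, and simply substitutes a few arbitrary values of d3 and d4 back into Ax = b:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0, 1.0],
              [1.0, -2.0, 0.0, -1.0]])
b = np.array([1.0, -2.0])

def general_solution(d3, d4):
    # x = [-4/3, -2/3, 1, 0] d3 + [-1/3, -2/3, 0, 1] d4 + [0, 1, 0, 0]
    return (np.array([-4/3, -2/3, 1.0, 0.0]) * d3
            + np.array([-1/3, -2/3, 0.0, 1.0]) * d4
            + np.array([0.0, 1.0, 0.0, 0.0]))

for d3, d4 in [(0.0, 0.0), (1.0, -2.0), (3.5, 0.25)]:
    x = general_solution(d3, d4)
    assert np.allclose(A @ x, b)   # every choice of (d3, d4) satisfies the system
print("general solution verified for sample parameter values")
```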
2.7
1. Apply the definition of |−a|:
\[ |-a| = \begin{cases} -a & \text{if } -a > 0 \\ 0 & \text{if } -a = 0 \\ -(-a) & \text{if } -a < 0 \end{cases} \;=\; \begin{cases} -a & \text{if } a < 0 \\ 0 & \text{if } a = 0 \\ a & \text{if } a > 0 \end{cases} \;=\; |a|. \]
2. If a ≥ 0, then |a| = a. If a < 0, then |a| = −a > 0 > a. Hence |a| ≥ a. On the other hand, |−a| ≥ −a (by the above). Hence, a ≥ −|−a| = −|a| (by property 1).
3. We have four cases to consider. First, if a, b ≥ 0, then a + b ≥ 0. Hence, |a + b| = a + b = |a| + |b|. Second, if a, b ≤ 0, then a + b ≤ 0. Hence |a + b| = −(a + b) = −a − b = |a| + |b|. Third, if a ≥ 0 and b ≤ 0, then we have two further subcases: (i) if a + b ≥ 0, then |a + b| = a + b ≤ |a| + |b|; (ii) if a + b ≤ 0, then |a + b| = −a − b ≤ |a| + |b|. The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and b interchanged.
4. We first show |a − b| ≤ |a| + |b|. We have |a − b| = |a + (−b)| ≤ |a| + |−b| (by property 3) = |a| + |b| (by property 1). To show ||a| − |b|| ≤ |a − b|, we note that |a| = |a − b + b| ≤ |a − b| + |b|, which implies |a| − |b| ≤ |a − b|. On the other hand, from the above we have |b| − |a| ≤ |b − a| = |a − b| (by property 1). Therefore, ||a| − |b|| ≤ |a − b|.
5. We have four cases. First, if a, b ≥ 0, we have ab ≥ 0 and hence |ab| = ab = |a||b|. Second, if a, b ≤ 0, we have ab ≥ 0 and hence |ab| = ab = (−a)(−b) = |a||b|. Third, if a ≥ 0 and b ≤ 0, we have ab ≤ 0 and hence |ab| = −ab = a(−b) = |a||b|. The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and b interchanged.
6. We have |a + b| ≤ |a| + |b| (by property 3) ≤ c + d.
7. ⇒: By property 2, −a ≤ |a| and a ≤ |a|. Therefore, |a| < b implies −a ≤ |a| < b and a ≤ |a| < b.
⇐: If a ≥ 0, then |a| = a < b. If a < 0, then |a| = −a < b. For the case when "<" is replaced by "≤", we simply repeat the above proof with "<" replaced by "≤".
8. This is simply the negation of property 7 (apply De Morgan's Law).

2.8
Observe that we can represent ⟨x, y⟩₂ as
\[ \langle x, y\rangle_2 = x^\top\begin{bmatrix} 2 & 3 \\ 3 & 5 \end{bmatrix}y = (Qx)^\top(Qy) = x^\top Q^2 y, \qquad \text{where } Q = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}. \]
Note that the matrix Q = Q⊤ is nonsingular.
1. Now, ⟨x, x⟩₂ = (Qx)⊤(Qx) = ‖Qx‖² ≥ 0, and ⟨x, x⟩₂ = 0 ⇔ ‖Qx‖² = 0 ⇔ Qx = 0 ⇔ x = 0, since Q is nonsingular.
2. ⟨x, y⟩₂ = (Qx)⊤(Qy) = (Qy)⊤(Qx) = ⟨y, x⟩₂.
3. We have ⟨x + y, z⟩₂ = (x + y)⊤Q²z = x⊤Q²z + y⊤Q²z = ⟨x, z⟩₂ + ⟨y, z⟩₂.
4. ⟨rx, y⟩₂ = (rx)⊤Q²y = r x⊤Q²y = r⟨x, y⟩₂.

2.9
We have ‖x‖ = ‖(x − y) + y‖ ≤ ‖x − y‖ + ‖y‖ by the Triangle Inequality. Hence, ‖x‖ − ‖y‖ ≤ ‖x − y‖. On the other hand, from the above we have ‖y‖ − ‖x‖ ≤ ‖y − x‖ = ‖x − y‖. Combining the two inequalities, we obtain |‖x‖ − ‖y‖| ≤ ‖x − y‖.

2.10
Let ε > 0 be given. Set δ = ε. Hence, if ‖x − y‖ < δ, then by Exercise 2.9, |‖x‖ − ‖y‖| ≤ ‖x − y‖ < δ = ε.

3. Transformations

3.1
Let v be the vector such that x are the coordinates of v with respect to {e1, e2, …, en}, and x′ are the coordinates of v with respect to {e′1, e′2, …, e′n}. Then,
v = x1e1 + · · · + xnen = [e1, …, en]x and v = x′1e′1 + · · · + x′ne′n = [e′1, …, e′n]x′.
Hence, [e1, …, en]x = [e′1, …, e′n]x′, which implies x′ = [e′1, …, e′n]⁻¹[e1, …, en]x = Tx.

3.2
a. We have
\[ [e'_1, e'_2, e'_3] = [e_1, e_2, e_3]\begin{bmatrix} 1 & 2 & 4 \\ 3 & -1 & 5 \\ -4 & 5 & 3 \end{bmatrix}. \]
Therefore,
\[ T = [e'_1, e'_2, e'_3]^{-1}[e_1, e_2, e_3] = \begin{bmatrix} 1 & 2 & 4 \\ 3 & -1 & 5 \\ -4 & 5 & 3 \end{bmatrix}^{-1} = \frac{1}{42}\begin{bmatrix} 28 & -14 & -14 \\ 29 & -19 & -7 \\ -11 & 13 & 7 \end{bmatrix}. \]
b. We have
\[ [e_1, e_2, e_3] = [e'_1, e'_2, e'_3]\begin{bmatrix} 1 & 2 & 3 \\ 1 & -1 & 0 \\ 3 & 4 & 5 \end{bmatrix}. \]
Therefore,
\[ T = \begin{bmatrix} 1 & 2 & 3 \\ 1 & -1 & 0 \\ 3 & 4 & 5 \end{bmatrix}. \]

3.3
We have
\[ [e_1, e_2, e_3] = [e'_1, e'_2, e'_3]\begin{bmatrix} 2 & 2 & 3 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{bmatrix}. \]
Therefore, the transformation matrix from {e′1, e′2, e′3} to {e1, e2, e3} is
\[ T = \begin{bmatrix} 2 & 2 & 3 \\ 1 & -1 & 0 \\ -1 & 2 & 1 \end{bmatrix}. \]
Now, consider a linear transformation L : ℝ³ → ℝ³, and let A be its representation with respect to {e1, e2, e3}, and B its representation with respect to {e′1, e′2, e′3}. Let y = Ax and y′ = Bx′. Then, y′ = Ty = T(Ax) = TA(T⁻¹x′) = (TAT⁻¹)x′. Hence, the representation of the linear transformation with respect to {e′1, e′2, e′3} is
\[ B = TAT^{-1} = \begin{bmatrix} 3 & -10 & -8 \\ -1 & 8 & 4 \\ 2 & -13 & -7 \end{bmatrix}. \]

3.4
We have
\[ [e'_1, e'_2, e'_3, e'_4] = [e_1, e_2, e_3, e_4]\begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \]
Therefore, the transformation matrix from {e1, e2, e3, e4} to {e′1, e′2, e′3, e′4} is
\[ T = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \]
Now, consider a linear transformation L : ℝ⁴ → ℝ⁴, and let A be its representation with respect to {e1, e2, e3, e4}, and B its representation with respect to {e′1, e′2, e′3, e′4}. Let y = Ax and y′ = Bx′. Then, y′ = Ty = T(Ax) = TA(T⁻¹x′) = (TAT⁻¹)x′. Therefore,
\[ B = TAT^{-1} = \begin{bmatrix} 5 & 3 & 4 & 3 \\ -3 & -2 & -1 & -2 \\ -1 & 0 & -1 & -2 \\ 1 & 1 & 1 & 4 \end{bmatrix}. \]
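Solutions 3.3 and 3.4 both rely on the similarity rule B = TAT⁻¹ for the matrix of a linear map after a change of basis. A minimal NumPy sketch of the rule follows; the basis and the map below are made up purely for illustration and are not the ones in the exercises:

```python
import numpy as np

# Columns of E_prime express the new basis vectors e'_i in old-basis coordinates.
E_prime = np.array([[1.0, 1.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])
T = np.linalg.inv(E_prime)        # old coordinates -> new coordinates: x' = T x

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])   # representation of L in the old basis

B = T @ A @ np.linalg.inv(T)      # representation of L in the new basis

# Check: the new coordinates of L(x) equal B applied to the new coordinates of x.
x = np.array([1.0, -2.0, 0.5])
assert np.allclose(T @ (A @ x), B @ (T @ x))
print(B)
```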
3.5
Let {v1, v2, v3, v4} be a set of linearly independent eigenvectors of A corresponding to the eigenvalues λ1, λ2, λ3, and λ4. Let T = [v1, v2, v3, v4]. Then,
\[ AT = A[v_1, v_2, v_3, v_4] = [Av_1, Av_2, Av_3, Av_4] = [\lambda_1 v_1, \lambda_2 v_2, \lambda_3 v_3, \lambda_4 v_4] = [v_1, v_2, v_3, v_4]\begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \end{bmatrix}. \]
Hence,
\[ AT = T\,\mathrm{diag}(\lambda_1, \lambda_2, \lambda_3, \lambda_4), \qquad \text{or} \qquad T^{-1}AT = \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3, \lambda_4). \]
Therefore, the linear transformation has a diagonal matrix form with respect to the basis formed by a linearly independent set of eigenvectors. Because det(λI − A) = (λ − 2)(λ − 3)(λ − 1)(λ + 1), the eigenvalues are λ1 = 2, λ2 = 3, λ3 = 1, and λ4 = −1. From Avi = λivi, where vi ≠ 0 (i = 1, 2, 3, 4), the corresponding eigenvectors are
\[ v_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 2 \\ -9 \\ 1 \end{bmatrix}, \quad v_4 = \begin{bmatrix} 24 \\ -12 \\ 1 \\ 9 \end{bmatrix}. \]
Therefore, the basis we are interested in is {v1, v2, v3, v4}.

3.6
Suppose v1, …, vn are eigenvectors of A corresponding to λ1, …, λn, respectively. Then, for each i = 1, …, n, we have
(In − A)vi = vi − Avi = vi − λivi = (1 − λi)vi,
which shows that 1 − λ1, …, 1 − λn are the eigenvalues of In − A. Alternatively, we may write the characteristic polynomial of In − A as
\[ \pi_{I_n - A}(1 - \lambda) = \det((1 - \lambda)I_n - (I_n - A)) = \det(-[\lambda I_n - A]) = (-1)^n \pi_A(\lambda), \]
which shows the desired result.

3.7
Let x, y ∈ V⊥, and α, β ∈ ℝ. To show that V⊥ is a subspace, we need to show that αx + βy ∈ V⊥. For this, let v be any vector in V. Then, v⊤(αx + βy) = αv⊤x + βv⊤y = 0, since v⊤x = v⊤y = 0 by definition.

3.8
The null space of A is N(A) = {x ∈ ℝ³ : Ax = 0}. Using elementary row operations and back-substitution, we can solve the system of equations:
\[ \begin{bmatrix} 4 & -2 & 0 \\ 2 & 1 & -1 \\ 2 & -3 & 1 \end{bmatrix} \to \begin{bmatrix} 4 & -2 & 0 \\ 0 & 2 & -1 \\ 0 & -2 & 1 \end{bmatrix} \to \begin{bmatrix} 4 & -2 & 0 \\ 0 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \]
so 4x1 − 2x2 = 0 and 2x2 − x3 = 0, giving x2 = (1/2)x3 and x1 = (1/2)x2 = (1/4)x3, i.e.,
\[ x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1/4 \\ 1/2 \\ 1 \end{bmatrix}x_3. \]
Therefore,
\[ N(A) = \left\{ \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix}c : c \in \mathbb{R} \right\}. \]

3.9
Let x, y ∈ R(A), and α, β ∈ ℝ. Then, there exist v, u such that x = Av and y = Au. Thus, αx + βy = αAv + βAu = A(αv + βu). Hence, αx + βy ∈ R(A), which shows that R(A) is a subspace.
Let x, y ∈ N(A), and α, β ∈ ℝ. Then, Ax = 0 and Ay = 0. Thus, A(αx + βy) = αAx + βAy = 0. Hence, αx + βy ∈ N(A), which shows that N(A) is a subspace.

3.10
Let v ∈ R(B), i.e., v = Bx for some x. Consider the matrix [A v]. Then, N(A⊤) = N([A v]⊤), since if u ∈ N(A⊤), then u ∈ N(B⊤) by assumption, and hence u⊤v = u⊤Bx = x⊤B⊤u = 0. Now, dim R(A) + dim N(A⊤) = m and dim R([A v]) + dim N([A v]⊤) = m. Since dim N(A⊤) = dim N([A v]⊤), we have dim R(A) = dim R([A v]). Hence, v is a linear combination of the columns of A, i.e., v ∈ R(A), which completes the proof.

3.11
We first show V ⊂ (V⊥)⊥. Let v ∈ V, and u any element of V⊥. Then u⊤v = v⊤u = 0. Therefore, v ∈ (V⊥)⊥.
We now show (V⊥)⊥ ⊂ V. Let {a1, …, ak} be a basis for V, and {b1, …, bl} a basis for (V⊥)⊥. Define A = [a1 · · · ak] and B = [b1 · · · bl], so that V = R(A) and (V⊥)⊥ = R(B). Hence, it remains to show that R(B) ⊂ R(A). Using the result of Exercise 3.10, it suffices to show that N(A⊤) ⊂ N(B⊤). So let x ∈ N(A⊤), which implies that x ∈ R(A)⊥ = V⊥, since R(A)⊥ = N(A⊤). Hence, for all y, we have (By)⊤x = 0 = y⊤B⊤x, which implies that B⊤x = 0. Therefore, x ∈ N(B⊤), which completes the proof.

3.12
Let w ∈ W⊥, and let y be any element of V. Since V ⊂ W, then y ∈ W. Therefore, by the definition of w, we have w⊤y = 0. Therefore, w ∈ V⊥.
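Solutions 3.10–3.12 lean on the relation R(A)⊥ = N(A⊤) and the dimension identity dim R(A) + dim N(A⊤) = m. The following numerical illustration is only a sketch; the matrix is random and chosen here for demonstration:

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    # Orthonormal basis of N(M): right singular vectors for (numerically) zero singular values.
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # a 4 x 3 matrix of rank 2

N_At = null_space_basis(A.T)       # basis of N(A^T); columns live in R^m with m = 4
rank_A = np.linalg.matrix_rank(A)

assert rank_A + N_At.shape[1] == A.shape[0]   # dim R(A) + dim N(A^T) = m
assert np.allclose(A.T @ N_At, 0)             # every basis vector is orthogonal to R(A)
print("dim R(A) =", rank_A, " dim N(A^T) =", N_At.shape[1])
```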
3.13
Let r = dim V. Let v1, …, vr be a basis for V, and V the matrix whose ith column is vi. Then, clearly V = R(V). Let u1, …, un−r be a basis for V⊥, and U the matrix whose ith row is ui⊤. Then, V⊥ = R(U⊤), and V = (V⊥)⊥ = R(U⊤)⊥ = N(U) (by Exercise 3.11 and Theorem 3.4).

3.14
a. Let x ∈ V. Then, x = Px + (I − P)x. Note that Px ∈ V, and (I − P)x ∈ V⊥. Therefore, x = Px + (I − P)x is an orthogonal decomposition of x with respect to V. However, x = x + 0 is also an orthogonal decomposition of x with respect to V. Since the orthogonal decomposition is unique, we must have x = Px.
b. Suppose P is an orthogonal projector onto V. Clearly, R(P) ⊂ V by definition. However, from part a, x = Px for all x ∈ V, and hence V ⊂ R(P). Therefore, R(P) = V.

3.15
To answer the question, we have to represent the quadratic form with a symmetric matrix as
\[ x^\top\left(\frac{1}{2}\begin{bmatrix} 1 & -8 \\ 1 & 1 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} 1 & 1 \\ -8 & 1 \end{bmatrix}\right)x = x^\top\begin{bmatrix} 1 & -7/2 \\ -7/2 & 1 \end{bmatrix}x. \]
The leading principal minors are Δ1 = 1 and Δ2 = −45/4. Since Δ2 < 0, the symmetric matrix has one positive and one negative eigenvalue; therefore, the quadratic form is indefinite.
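The classification in 3.15 can be cross-checked by symmetrizing the matrix of the quadratic form and inspecting its eigenvalues (or leading principal minors); a short illustrative sketch:

```python
import numpy as np

Q = np.array([[1.0, -8.0],
              [1.0, 1.0]])        # x^T Q x is the quadratic form of Exercise 3.15

S = (Q + Q.T) / 2                 # symmetric matrix representing the same form
minors = [np.linalg.det(S[:k, :k]) for k in (1, 2)]
eigvals = np.linalg.eigvalsh(S)

print("leading principal minors:", minors)   # Delta_1 = 1, Delta_2 = -45/4
print("eigenvalues:", eigvals)               # one negative and one positive -> indefinite
assert eigvals.min() < 0 < eigvals.max()
```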













Document information

Uploaded on
November 15, 2021
Number of pages
221
Written in
2021/2022
Type
Exam
Contains
Questions and answers

Topics

Content preview

AN INTRODUCTION TO
OPTIMIZATION


SOLUTIONS MANUAL




Fourth Edition




Edwin K. P. Chong and Stanislaw H. Żak




A JOHN WILEY & SONS, INC., PUBLICATION

1. Methods of Proof and Some Notation
1.1

A B not A not B A⇒B (not B)⇒(not A)
F F T T T T
F T T F T T
T F F T F F
T T F F T T

1.2

A B not A not B A⇒B not (A and (not B))
F F T T T T
F T T F T T
T F F T F F
T T F F T T

1.3

A B not (A and B) not A not B (not A) or (not B)
F F T T T T
F T T T F T
T F T F T T
T T F F F F
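Row by row, table 1.3 is De Morgan's law: not (A and B) is equivalent to (not A) or (not B). As a quick cross-check (a small illustrative script, not part of the original manual), the equivalence can be verified by brute force over all truth assignments:

```python
from itertools import product

# De Morgan's law: not (A and B)  <=>  (not A) or (not B)
for A, B in product([False, True], repeat=2):
    lhs = not (A and B)
    rhs = (not A) or (not B)
    print(A, B, lhs, rhs)
    assert lhs == rhs
```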

1.4

A B A and B A and (not B) (A and B) or (A and (not B))
F F F F F
F T F F F
T F F T T
T T T F T
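Tables 1.1, 1.2, and 1.4 assert three propositional identities: A ⇒ B is equivalent to its contrapositive (not B) ⇒ (not A), A ⇒ B is equivalent to not (A and (not B)), and (A and B) or (A and (not B)) reduces to A. The short enumeration below is an illustrative sketch, not taken from the manual, that checks all three:

```python
from itertools import product

def implies(p, q):
    # material implication: p => q is false only when p is true and q is false
    return (not p) or q

for A, B in product([False, True], repeat=2):
    assert implies(A, B) == implies(not B, not A)          # 1.1: contrapositive
    assert implies(A, B) == (not (A and (not B)))          # 1.2
    assert ((A and B) or (A and (not B))) == A             # 1.4
print("all three equivalences hold for every truth assignment")
```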

1.5
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the
truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is not
relevant because the rule does not say that if a card has an even number on one side, then it has a vowel on
the other side.
Turning over the A card directly verifies the rule, while turning over the 3 card verifies the contrapositive.



2. Vector Spaces and Matrices
2.1
We show this by contradiction. Suppose n < m. Then, the number of columns of A is n. Since rank A is
the maximum number of linearly independent columns of A, then rank A cannot be greater than n < m,
which contradicts the assumption that rank A = m.
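The argument in 2.1 rests on the fact that the rank of a matrix can never exceed its number of columns (or rows). A tiny NumPy check makes the bound concrete; the matrix below is arbitrary and chosen only for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])      # 3 x 2, so m = 3 rows and n = 2 columns

m, n = A.shape
r = np.linalg.matrix_rank(A)
print(f"rank A = {r}, min(m, n) = {min(m, n)}")
assert r <= min(m, n)           # rank A <= n, so rank A = m is only possible when m <= n
```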
2.2
⇒: Since there exists a solution, then by Theorem 2.1, rank A = rank[A b]. So, it remains to prove that
rank A = n. For this, suppose that rank A < n (note that it is impossible for rank A > n since A has
only n columns). Hence, there exists y ∈ ℝⁿ, y ≠ 0, such that Ay = 0 (this is because the columns of
A are linearly dependent, and Ay is a linear combination of the columns of A). Let x be a solution to
Ax = b. Then clearly x + y ≠ x is also a solution. This contradicts the uniqueness of the solution. Hence,
rank A = n.
⇐: By Theorem 2.1, a solution exists. It remains to prove that it is unique. For this, let x and y be
solutions, i.e., Ax = b and Ay = b. Subtracting, we get A(x − y) = 0. Since rank A = n and A has n
columns, then x − y = 0 and hence x = y, which shows that the solution is unique.
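Solution 2.2 is the rank test of Theorem 2.1: Ax = b has a solution iff rank A = rank[A b], and the solution is unique iff, in addition, rank A = n. Below is a minimal NumPy sketch of the test; the first matrix is made up for illustration, while the second is the system from Exercise 2.6:

```python
import numpy as np

def solution_status(A, b):
    """Classify Ax = b using the rank test of Theorem 2.1."""
    n = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A != rank_Ab:
        return "no solution"
    return "unique solution" if rank_A == n else "infinitely many solutions"

A = np.array([[1.0, 1.0],
              [1.0, -2.0]])
b = np.array([1.0, -2.0])
print(solution_status(A, b))     # rank A = rank[A b] = n = 2  ->  unique solution

A2 = np.array([[1.0, 1.0, 2.0, 1.0],
               [1.0, -2.0, 0.0, -1.0]])
b2 = np.array([1.0, -2.0])
print(solution_status(A2, b2))   # rank A = rank[A b] = 2 < n = 4  ->  infinitely many solutions
```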
2.3
Consider the vectors āi = [1, ai⊤]⊤ ∈ ℝⁿ⁺¹, i = 1, …, k. Since k ≥ n + 2, the vectors ā1, …, āk must
be linearly dependent in ℝⁿ⁺¹. Hence, there exist α1, …, αk, not all zero, such that
\[ \sum_{i=1}^{k} \alpha_i \bar{a}_i = 0. \]
The first component of the above vector equation is α1 + · · · + αk = 0, while the last n components have
the form α1a1 + · · · + αkak = 0, completing the proof.
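The key step in 2.3 is that any k ≥ n + 2 vectors āi = [1, ai⊤]⊤ in ℝⁿ⁺¹ are necessarily linearly dependent. The following sketch (random data, purely illustrative) exhibits a nonzero α with Σ αi āi = 0, whose first row gives Σ αi = 0 and whose remaining rows give Σ αi ai = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 5                                   # k >= n + 2 points in R^n
a = rng.standard_normal((k, n))

# Stack the augmented vectors a_bar_i = [1, a_i^T]^T as columns of an (n+1) x k matrix.
A_bar = np.vstack([np.ones((1, k)), a.T])

assert np.linalg.matrix_rank(A_bar) < k       # at most n + 1 < k independent columns

# A right singular vector for a zero singular value is a nonzero null-space vector alpha.
_, _, Vt = np.linalg.svd(A_bar)
alpha = Vt[-1]
print(np.allclose(A_bar @ alpha, 0))          # sum_i alpha_i * a_bar_i = 0
print(np.isclose(alpha.sum(), 0))             # first row: sum_i alpha_i = 0
```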
2.4
a. We first postmultiply M by the matrix
\[ \begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix} \]
to obtain
\[ \begin{bmatrix} M_{m-k,k} & I_{m-k} \\ M_{k,k} & O \end{bmatrix}\begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix} = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}. \]
Note that the determinant of the postmultiplying matrix is 1. Next we postmultiply the resulting product by
\[ \begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} \]
to obtain
\[ \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} = \begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}. \]
Notice that
\[ \det M = \det\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}\,\det\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}, \qquad \text{where} \quad \det\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} = \pm 1. \]
The above easily follows from the fact that the determinant changes its sign if we interchange columns, as
discussed in Section 2.2. Moreover,
\[ \det\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix} = \det(I_{m-k})\det(M_{k,k}) = \det(M_{k,k}). \]
Hence,
det M = ± det M_{k,k}.

b. We can see this in the following examples. We assume, without loss of generality, that M_{m−k,k} = O and
let M_{k,k} = 2. Thus k = 1. First consider the case when m = 2. Then we have
\[ M = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix}. \]
Thus, det M = −2 = det(−M_{k,k}).
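Part (a) of 2.4 gives det M = ± det M_{k,k} for M = [[M_{m−k,k}, I_{m−k}], [M_{k,k}, O]]. A small random test of the identity follows; the sizes and entries below are arbitrary and chosen only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 5, 2
M_mk_k = rng.standard_normal((m - k, k))     # the (m-k) x k block
M_kk = rng.standard_normal((k, k))           # the k x k block

# Assemble M = [[M_{m-k,k}, I_{m-k}], [M_{k,k}, O]].
top = np.hstack([M_mk_k, np.eye(m - k)])
bottom = np.hstack([M_kk, np.zeros((k, m - k))])
M = np.vstack([top, bottom])

det_M = np.linalg.det(M)
det_Mkk = np.linalg.det(M_kk)
print(det_M, det_Mkk)
assert np.isclose(abs(det_M), abs(det_Mkk))  # det M = +/- det M_{k,k}
```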