1 Linear algebra

1.6 Let $v_1, v_2 \in \ker T$. Then $T(av_1 + bv_2) = aTv_1 + bTv_2 = 0$, so $\ker T$ is closed under linear combinations. Moreover, $\ker T$ contains the zero vector of $V$. All the other vector space properties are easily seen to follow, so $\ker T$ is a subspace of $V$. Similarly, let $w_1, w_2 \in \operatorname{im} T$ and consider $aw_1 + bw_2$. There exist $v_1, v_2 \in V$ such that $Tv_1 = w_1$ and $Tv_2 = w_2$, so $T(av_1 + bv_2) = aTv_1 + bTv_2 = aw_1 + bw_2$, which shows that $\operatorname{im} T$ is closed under linear combinations. Moreover, $\operatorname{im} T$ contains the zero vector, so $\operatorname{im} T$ is a subspace of $W$.

1.7 Assume the kernel of $T$ consists only of the zero vector. Then for any two vectors $v_1$ and $v_2$ we have
$Tv_1 = Tv_2 \;\Rightarrow\; T(v_1 - v_2) = 0 \;\Rightarrow\; v_1 - v_2 = 0 \;\Rightarrow\; v_1 = v_2,$
namely, $T$ is injective. The converse follows similarly.

1.8 Let $V$ and $W$ be two vector spaces of the same dimension, and choose a basis $\{e_i\}$ for $V$ and a basis $\{f_i\}$ for $W$. Let $T : V \to W$ be the map that sends $e_i$ to $f_i$, extended by linearity. Then the claim is that $T$ is an isomorphism. Let $v = \sum_i a_i e_i$ be a vector in $V$. If $v \in \ker T$, then $0 = Tv = \sum_i a_i Te_i = \sum_i a_i f_i$. By linear independence, all the $a_i$'s vanish, which means that the kernel of $T$ consists only of the zero vector, and hence by Exercise 1.7, $T$ is injective. Also, if $w = \sum_i a_i f_i$, then $w = \sum_i a_i Te_i = T\big(\sum_i a_i e_i\big)$, which shows that $T$ is also surjective.

1.9 a. Let $v \in V$ and define $w := \pi(v)$ and $u := (1 - \pi)(v)$. Then $\pi(u) = (\pi - \pi^2)(v) = 0$, so $v = w + u$ with $w \in \operatorname{im} \pi$ and $u \in \ker \pi$. Now suppose $x \in \ker \pi \cap \operatorname{im} \pi$. Then there is a $y \in V$ such that $x = \pi(y)$. But then $0 = \pi(x) = \pi^2(y) = \pi(y) = x$.
b. Let $\{f_i\}$ be a basis for $W$, and complete it to a basis of $V$ by adding a linearly independent set of vectors $\{g_j\}$. Let $U$ be the subspace of $V$ spanned by the $g_j$'s. With these choices, any vector $v \in V$ can be written uniquely as $v = w + u$, where $w \in W$ and $u \in U$. Define a linear map $\pi : V \to V$ by $\pi(v) = w$. Obviously $\pi(w) = w$, so $\pi^2 = \pi$.

1.10 Clearly, $T0 = 0$, so $T^{-1}0 = 0$. Let $Tv_1 = v_1'$ and $Tv_2 = v_2'$. Then
$aT^{-1}v_1' + bT^{-1}v_2' = av_1 + bv_2 = (T^{-1}T)(av_1 + bv_2) = T^{-1}(av_1' + bv_2'),$
which shows that $T^{-1}$ is linear.

1.11 The identity map $I : V \to V$ is clearly an automorphism. If $S \in \operatorname{Aut} V$ then $S^{-1}S = SS^{-1} = I$. Finally, if $S, T \in \operatorname{Aut} V$, then $ST$ is invertible, with inverse $(ST)^{-1} = T^{-1}S^{-1}$. (Check.) This implies that $ST \in \operatorname{Aut} V$. (Associativity is automatic.)

1.12 By exactness, the kernel of $\varphi_1$ is the image of $\varphi_0$. But the image of $\varphi_0$ consists only of the zero vector (as its domain consists only of the zero vector). Hence the kernel of $\varphi_1$ is trivial, so by Exercise 1.7, $\varphi_1$ must be injective. Again by exactness, the kernel of $\varphi_3$ is the image of $\varphi_2$. But $\varphi_3$ maps everything to zero, so $V_3 = \ker \varphi_3$, and hence $V_3 = \operatorname{im} \varphi_2$, which says that $\varphi_2$ is surjective. The converse follows by reversing the preceding steps. As for the last assertion, $\varphi$ is both injective and surjective, so it is an isomorphism.

1.13 If $T$ is injective then $\ker T = 0$, so by the rank/nullity theorem $\operatorname{rk} T = \dim V = \dim W$, which shows that $T$ is surjective as well.

1.14 The rank of a linear map is the dimension of its image. The image of $ST$ can be no larger than that of either $S$ or $T$ individually: $\operatorname{im} ST \subseteq \operatorname{im} S$, and $\operatorname{im} ST = S(\operatorname{im} T)$, whose dimension cannot exceed $\dim \operatorname{im} T$ because the dimension of the image of a map cannot exceed the dimension of its domain. Hence $\operatorname{rk}(ST) \le \min(\operatorname{rk} S, \operatorname{rk} T)$.
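Since the argument in 1.14 is purely dimensional, it is easy to replay numerically. A minimal NumPy sketch (the sizes, matrices, and seed are illustrative choices, not anything from the manual):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear maps T: R^5 -> R^4 and S: R^4 -> R^3, as matrices acting by
# left multiplication.
T = rng.standard_normal((4, 5))
S = rng.standard_normal((3, 4))

rk = np.linalg.matrix_rank
# rk(ST) <= min(rk S, rk T), as argued in Exercise 1.14.
assert rk(S @ T) <= min(rk(S), rk(T))

# A rank-1 T forces the composite down to rank 1 as well:
T1 = np.outer(rng.standard_normal(4), rng.standard_normal(5))  # rank 1
print(rk(S @ T1), rk(S), rk(T1))  # -> 1 3 1
```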
1.15 If $v' \in [v]$ then $v' = v + u$ for some $u \in U$. By linearity, $\varphi(v') = \varphi(v) + w$ for some $w \in W$, so $[\varphi(v')] = [\varphi(v) + w] = [\varphi(v)]$.

1.16 Pick a basis $\{e_i\}$ for $V$. Then
$\sum_i (ST)_{ij} e_i = (ST)e_j = S\big(\sum_k T_{kj} e_k\big) = \sum_k T_{kj} Se_k = \sum_{ik} T_{kj} S_{ik} e_i.$
Hence $(ST)_{ij} = \sum_k S_{ik} T_{kj}$, the $(i,j)$th entry of the matrix product, which shows that the composition $ST$ is represented by the product of the matrices representing $S$ and $T$.

1.17 The easiest way to see this is just to observe that the identity automorphism $I$ is represented by the identity matrix $I$ (in any basis). Suppose $T^{-1}$ is represented by $U$ in some basis. Then by the results of Exercise 1.16, $TT^{-1}$ is represented by $TU$. But $TT^{-1} = I$, so $TU = I$, which shows that $U = T^{-1}$.

1.18 Choose a basis $\{e_i\}$ for $V$. Then by definition, $Te_j = \sum_i T_{ij} e_i$. It follows that $Te_j$ is represented by the $j$th column of $T$, so the maximum number of linearly independent vectors in the image of $T$ is precisely the maximum number of linearly independent columns of $T$.

1.19 Suppose $\sum_i c_i \theta_i = 0$. By linearity of the dual pairing,
$0 = \big\langle e_j, \sum_i c_i \theta_i \big\rangle = \sum_i c_i \langle e_j, \theta_i \rangle = \sum_i c_i \delta_{ij} = c_j,$
so the $\theta_j$'s are linearly independent. Now let $f \in V^*$. Define $f(e_j) =: a_j$ and introduce a linear functional $g := \sum_i a_i \theta_i$. Then $g(e_j) = \langle g, e_j \rangle = \sum_i a_i \delta_{ij} = a_j$, so $f = g$ (two linear functionals that agree on a basis agree everywhere). Hence the $\theta_j$'s span.

1.20 Suppose $f(v) = 0$ for all $v$. Let $f = \sum_i f_i \theta_i$ and $v = e_j$. Then $f(v) = f(e_j) = f_j = 0$. This is true for all $j$, so $f = 0$. The other proof is similar.

1.21 Let $w \in W$ and $\theta_1, \theta_2 \in \operatorname{Ann} W$. Then $(a\theta_1 + b\theta_2)(w) = a\theta_1(w) + b\theta_2(w) = 0$, so $\operatorname{Ann} W$ is closed under linear combinations. Moreover, the zero functional (which sends every vector to zero) is clearly in $\operatorname{Ann} W$, so $\operatorname{Ann} W$ is a subspace of $V^*$. Conversely, let $U^* \subseteq V^*$ be a subspace of $V^*$, and define $W := \{v \in V : f(v) = 0 \text{ for all } f \in U^*\}$. If $f \in U^*$ then $f(v) = 0$ for all $v \in W$, so $f \in \operatorname{Ann} W$. It therefore suffices to prove that $\dim U^* = \dim \operatorname{Ann} W$. Let $\{f_i\}_{i=1}^k$ be a basis for $U^*$, complete it to a basis $\{f_i\}_{i=1}^n$ of $V^*$, and let $\{e_i\}$ be the dual basis, satisfying $f_i(e_j) = \delta_{ij}$. Obviously, $e_j \in W$ for $j > k$, whereas a vector with a nonzero component along some $e_j$ with $j \le k$ cannot lie in $W$. Thus $\dim W = \dim V - \dim U^*$. On the other hand, let $\{w_i\}$ be a basis for $W$ and complete it to a basis for $V$: $\{w_1, \dots, w_{\dim W}, e_{\dim W + 1}, \dots, e_{\dim V}\}$. Let $\{u_i\}$ be a basis for $\operatorname{Ann} W$. Each $u_i$ vanishes on the $w_j$'s, so it is determined by its values on the $e_j$'s, whence $\dim \operatorname{Ann} W \le \dim V - \dim W$; conversely, the functionals dual to $e_{\dim W + 1}, \dots, e_{\dim V}$ all lie in $\operatorname{Ann} W$, so $\dim \operatorname{Ann} W = \dim V - \dim W$. Combining the two counts gives $\dim \operatorname{Ann} W = \dim U^*$.

1.22 a. The map is well defined, because if $[v'] = [v]$ then $v' = v + w$ for some $w \in W$, so $\varphi(f)([v']) = f(v') = f(v + w) = f(v) + f(w) = f(v) = \varphi(f)([v])$. Moreover, if $\varphi(f) = \varphi(g)$ then for any $v \in V$, $0 = \varphi(f - g)([v]) = (f - g)(v)$, so $f = g$. But the proof of Exercise 1.21 shows that $\dim \operatorname{Ann} W = \dim(V/W) = \dim(V/W)^*$, so $\varphi$ is an isomorphism.
b. Suppose $[g] = [f]$ in $V^*/\operatorname{Ann} W$. Then $g = f + h$ for some $h \in \operatorname{Ann} W$. So $\pi^*([g])(v) = g(\pi(v)) = f(\pi(v)) + h(\pi(v)) = f(\pi(v)) = \pi^*([f])(v)$. Moreover, if $\pi^*([f]) = \pi^*([g])$ then $f(\pi(v)) = g(\pi(v))$, i.e. $(f - g)(\pi(v)) = 0$ for all $v$, so $f = g$ when restricted to $W$, and hence $[f] = [g]$. Dimension counting shows that $\pi^*$ is an isomorphism.
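Exercises 1.16–1.19 are concrete enough to check in coordinates: with a basis of $\mathbb{R}^n$ stored as the columns of a matrix $B$, the dual basis of Exercise 1.19 can be realized as the rows of $B^{-1}$. A small sketch under those assumptions (the particular $B$ and the values $a$ are arbitrary illustrative choices):

```python
import numpy as np

# Basis {e_i} of R^3 as the columns of B.
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# Dual basis {theta_i} as the rows of B^{-1}: theta_i(e_j) = delta_ij.
Theta = np.linalg.inv(B)
print(np.allclose(Theta @ B, np.eye(3)))  # True: <theta_i, e_j> = delta_ij

# A functional f is determined by its values a_j = f(e_j) (Exercise 1.19):
a = np.array([2., -1., 3.])   # prescribed values on the basis
f = a @ Theta                 # f = sum_i a_i theta_i, as a row covector
print(np.allclose(f @ B, a))  # True: f(e_j) = a_j, so f = g
```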
1.23 Let $g$ be the standard (Hermitian) inner product on $\mathbb{C}^n$ and let $u = (u_1, \dots, u_n)$, $v = (v_1, \dots, v_n)$ and $w = (w_1, \dots, w_n)$. Then
$g(u, av + bw) = \sum_i \bar{u}_i (a v_i + b w_i) = a \sum_i \bar{u}_i v_i + b \sum_i \bar{u}_i w_i = a\,g(u, v) + b\,g(u, w).$
Also, $g(v, u) = \sum_i \bar{v}_i u_i = \overline{\sum_i \bar{u}_i v_i} = \overline{g(u, v)}$. Assume $g(u, v) = 0$ for all $v$. Let $v$ run through all the vectors $v^{(i)} = (0, \dots, 1, \dots, 0)$, where the '1' is in the $i$th place. Plugging into the definition of $g$ gives $u_i = 0$ for all $i$, so $u = 0$. Thus $g$ is indeed an inner product. The same proof works equally well for the Euclidean and Lorentzian inner products. Again consider the standard inner product on $\mathbb{C}^n$. Then
$g(u, u) = \sum_i \bar{u}_i u_i = \sum_i |u_i|^2 \ge 0,$
because the modulus squared of a complex number is always nonnegative, so $g$ is nonnegative definite. Moreover, the only way we could have $g(u, u) = 0$ is if each $u_i$ were zero, in which case we would have $u = 0$. Thus $g$ is positive definite. The same proof applies in the Euclidean case, but fails in the Lorentzian case, because then
$g(u, u) = -u_0^2 + \sum_{i=1}^{n-1} u_i^2,$
and it could happen that $g(u, u) = 0$ but $u \ne 0$. (For example, let $u = (1, 1, 0, \dots, 0)$.)

1.24 We have
$(A^*(af + bg))(v) = (af + bg)(Av) = a f(Av) + b g(Av) = a (A^* f)(v) + b (A^* g)(v) = (a A^* f + b A^* g)(v),$
so $A^*$ is linear. (The other axioms are just as straightforward.)

1.25 We have
$\langle A^* e_j^*, e_i \rangle = \sum_k \langle (A^*)_{kj} e_k^*, e_i \rangle = \sum_k (A^*)_{kj} \delta_{ki} = (A^*)_{ij},$
while
$\langle e_j^*, A e_i \rangle = \sum_k \langle e_j^*, A_{ki} e_k \rangle = \sum_k A_{ki} \delta_{jk} = A_{ji},$
so the matrix representing $A^*$ is just the transpose of the matrix representing $A$.

1.26 We have
$\langle A^\dagger e_j, e_i \rangle = \sum_k \langle (A^\dagger)_{kj} e_k, e_i \rangle = \sum_k (A^\dagger)_{kj} \delta_{ki} = (A^\dagger)_{ij},$
while
$\langle e_j, A e_i \rangle = \sum_k \langle e_j, A_{ki} e_k \rangle = \sum_k A_{ki} \delta_{jk} = A_{ji},$
which gives $(A^\dagger)_{ij} = A_{ji}$.

1.27 Let $w = \sum_i a_i v_i$ (where not all the $a_i$'s vanish) and suppose $\sum_i c_i v_i + c w = 0$. The latter equation may be solved by choosing $c = 1$ and $c_i = -a_i$, so the set $\{v_1, \dots, v_n, w\}$ is linearly dependent. Conversely, suppose $\{v_1, \dots, v_n, w\}$ is linearly dependent. Then the equation $\sum_i c_i v_i + c w = 0$ has a nontrivial solution $(c, c_1, \dots, c_n)$. We must have $c \ne 0$, else the set $\{v_i\}$ would not be linearly independent. But then $w = -\sum_i (c_i / c) v_i$.

1.28 Obviously, the monomials span $V$, so we need only check linear independence. Assume
$c_0 + c_1 x + c_2 x^2 + c_3 x^3 = 0.$
The zero on the right side represents the zero vector, namely the polynomial that is zero for all values of $x$. In other words, this equation must hold for all values of $x$. In particular, it must hold for $x = 0$. Plugging in gives $c_0 = 0$. Next let $x = 1$ and $x = -1$, giving $c_1 + c_2 + c_3 = 0$ and $-c_1 + c_2 - c_3 = 0$. Adding and subtracting the latter two equations gives $c_2 = 0$ and $c_1 + c_3 = 0$. Finally, choose $x = 2$ to get $2c_1 + 8c_3 = 0$. Combining this with $c_1 + c_3 = 0$ gives $c_1 = c_3 = 0$.

1.29 We must show exactness at each space. Clearly the sequence is exact at $\ker T$, because the inclusion map $\iota : \ker T \to V$ is injective, so only zero gets sent to zero. By definition, the kernel of $T$ is $\ker T$, namely the image of $\iota$, so the sequence is exact at $V$. Let $\pi : W \to \operatorname{coker} T$ be the projection map onto the quotient $W / \operatorname{im} T$. Then by definition $\pi$ kills everything in $\operatorname{im} T$, so the sequence is exact at $W$. Finally, $\pi$ is surjective onto the quotient, so the sequence is exact at $\operatorname{coker} T$.

1.30 Write the exact sequence together with its maps
$0 \longrightarrow V_0 \xrightarrow{\ \varphi_0\ } V_1 \xrightarrow{\ \varphi_1\ } \cdots \xrightarrow{\ \varphi_{n-1}\ } V_n \longrightarrow 0$
and set $\varphi_{-1} = \varphi_n = 0$. By exactness, $\operatorname{im} \varphi_{i-1} = \ker \varphi_i$. But the rank/nullity theorem gives $\dim V_i = \dim \ker \varphi_i + \dim \operatorname{im} \varphi_i$. Hence,
$\sum_i (-1)^i \dim V_i = \sum_i (-1)^i (\dim \ker \varphi_i + \dim \operatorname{im} \varphi_i) = \sum_i (-1)^i (\dim \operatorname{im} \varphi_{i-1} + \dim \operatorname{im} \varphi_i) = 0,$
because the sum is telescoping.
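The definite/indefinite contrast at the end of 1.23 is easy to exhibit numerically. A sketch, assuming the standard Hermitian product on $\mathbb{C}^n$ and the Lorentzian signature $(-,+,+,+)$ used in the solution (the sample vectors are arbitrary):

```python
import numpy as np

# Hermitian inner product on C^n: g(u, v) = sum_i conj(u_i) v_i.
g = lambda u, v: np.vdot(u, v)    # np.vdot conjugates its first argument

u = np.array([1 + 2j, 3 - 1j])
print(g(u, u).real >= 0)          # True: g(u,u) = sum |u_i|^2 >= 0

# Lorentzian product on R^4: eta(u, v) = -u_0 v_0 + u_1 v_1 + u_2 v_2 + u_3 v_3.
eta = np.diag([-1., 1., 1., 1.])
lor = lambda u, v: u @ eta @ v

null = np.array([1., 1., 0., 0.])
print(lor(null, null))            # 0.0 for a nonzero vector: not positive definite
```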
1.31 An arbitrary term of the expansion of $\det A$ is of the form
$(-1)^\sigma A_{1\sigma(1)} A_{2\sigma(2)} \cdots A_{n\sigma(n)}.$ (1)
As each number from 1 to $n$ appears precisely once among the set $\sigma(1), \sigma(2), \dots, \sigma(n)$, the product may be rewritten (after some rearrangement) as
$(-1)^\sigma A_{\sigma^{-1}(1)1} A_{\sigma^{-1}(2)2} \cdots A_{\sigma^{-1}(n)n},$ (2)
where $\sigma^{-1}$ is the inverse permutation to $\sigma$. For example, suppose $\sigma(5) = 1$. Then there would be a term in (1) of the form $A_{5\sigma(5)} = A_{51}$. This term appears first in (2), as $\sigma^{-1}(1) = 5$. Since a permutation and its inverse both have the same sign (because $\sigma\sigma^{-1} = e$ implies $(-1)^\sigma (-1)^{\sigma^{-1}} = 1$), Equation (2) may be written
$(-1)^{\sigma^{-1}} A_{\sigma^{-1}(1)1} A_{\sigma^{-1}(2)2} \cdots A_{\sigma^{-1}(n)n}.$ (3)
Hence
$\det A = \sum_{\sigma \in S_n} (-1)^{\sigma^{-1}} A_{\sigma^{-1}(1)1} A_{\sigma^{-1}(2)2} \cdots A_{\sigma^{-1}(n)n}.$ (4)
As $\sigma$ runs over all the elements of $S_n$, so does $\sigma^{-1}$, so (4) may be written
$\det A = \sum_{\sigma^{-1} \in S_n} (-1)^{\sigma^{-1}} A_{\sigma^{-1}(1)1} A_{\sigma^{-1}(2)2} \cdots A_{\sigma^{-1}(n)n}.$ (5)
But this is just $\det A^T$.

1.32 By (1.46) the coefficient of $A_{11}$ in $\det A$ is
$\sum_{\sigma'} (-1)^{\sigma'} A_{2\sigma'(2)} \cdots A_{n\sigma'(n)},$ (1)
where $\sigma'$ means a general permutation in $S_n$ that fixes 1, i.e. $\sigma'(1) = 1$. But this means the sum in (1) extends over all permutations of the numbers $\{2, 3, \dots, n\}$, of which there are $(n-1)!$. A moment's reflection reveals that (1) is nothing more than the determinant of the matrix obtained from $A$ by removing the first row and first column, namely $A(1|1)$. Now consider a general element $A_{ij}$. What is its coefficient in $\det A$? Well, consider the matrix $A'$ obtained from $A$ by moving the $i$th row up to the first row. To get $A'$ we must execute $i - 1$ adjacent row flips, so $\det A' = (-1)^{i-1} \det A$. Now consider the matrix $A''$ obtained from $A'$ by moving the $j$th column left to the first column. Again we have $\det A'' = (-1)^{j-1} \det A'$. So $\det A'' = (-1)^{i+j} \det A$. The element $A_{ij}$ appears in the $(1,1)$ position in $A''$, so by the reasoning used above, its coefficient in $\det A''$ is just $\det A''(1|1) = \det A(i|j)$. Hence, the coefficient of $A_{ij}$ in $\det A$ is $(-1)^{i+j} \det A(i|j) = \tilde{A}_{ij}$, the $(i,j)$th cofactor. Next consider the expression
$A_{11}\tilde{A}_{11} + A_{12}\tilde{A}_{12} + \cdots + A_{1n}\tilde{A}_{1n},$ (2)
which is (1.57) with $i = 1$. Thinking of the $A_{ij}$ as independent variables, each term in (2) is distinct (because, for example, only the first term contains $A_{11}$, etc.). Moreover, each term appears in (2) precisely as it appears in $\det A$ (with the correct sign and correct products of elements of $A$). Finally, (2) contains $n(n-1)! = n!$ terms, which is the number that appear in $\det A$. So (2) must be $\det A$. As there was nothing special about the choice $i = 1$, (1.57) is proved. Equation (1.58) is proved similarly.
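Both results just proved, $\det A = \det A^T$ (1.31) and the cofactor expansion (1.57) along a row (1.32), can be sanity-checked on a random matrix. An illustrative NumPy sketch (the helper `minor`, the row index, and the seed are our choices, not the manual's):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# 1.31: det A = det A^T
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))  # True

def minor(A, i, j):
    """The submatrix A(i|j): delete row i and column j."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

# 1.32: Laplace expansion along row i with cofactors (-1)^(i+j) det A(i|j).
i = 2
expansion = sum(A[i, j] * (-1) ** (i + j) * np.linalg.det(minor(A, i, j))
                for j in range(A.shape[1]))
print(np.isclose(expansion, np.linalg.det(A)))  # True
```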
1.33 Suppose we begin with a matrix $A$ and substitute for its $i$th row a new row of elements labeled $B_{ij}$, where $j$ runs from 1 to $n$. Now, the cofactors of the $B_{ij}$ in the new matrix are obviously the same as those of the $A_{ij}$ in the old matrix, so we may write the determinant of the new matrix as, for instance,
$B_{i1}\tilde{A}_{i1} + B_{i2}\tilde{A}_{i2} + \cdots + B_{in}\tilde{A}_{in}.$ (1)
Of course, we could have substituted a new $j$th column instead, with similar results. If we were to let the $B_{ij}$ be the elements of any row of $A$ other than the $i$th, then the expression in Equation (1) would vanish, as the determinant of any matrix with two identical rows is zero. This gives us the following result:
$A_{k1}\tilde{A}_{i1} + A_{k2}\tilde{A}_{i2} + \cdots + A_{kn}\tilde{A}_{in} = 0, \qquad k \ne i.$ (2)
Again, a similar result holds for columns. (The cofactors appearing in (1) are called alien cofactors, because they are the cofactors properly corresponding to the elements $A_{ij}$, $j = 1, \dots, n$, of the $i$th row of $A$ rather than the $k$th row.) We may summarize (2) by saying that expansions in terms of alien cofactors vanish identically. Consider the $(i,k)$th element of $A \operatorname{adj} A$:
$[A \operatorname{adj} A]_{ik} = \sum_{j=1}^n A_{ij} (\operatorname{adj} A)_{jk} = \sum_{j=1}^n A_{ij} \tilde{A}_{kj}.$
If $i \ne k$ this is an expansion in terms of alien cofactors and vanishes. If $i = k$ then this is just the determinant of $A$. Hence $[A \operatorname{adj} A]_{ik} = (\det A)\delta_{ik}$. This proves the first half. To prove the second half, note that $(\operatorname{adj} A)^T = \operatorname{adj}(A^T)$. That is, the transpose of the adjugate is the adjugate of the transpose. (Just trace back the definitions.) Hence, using the result (whose easy proof is left to the reader) that $(AB)^T = B^T A^T$ for any matrices $A$ and $B$,
$[(\operatorname{adj} A)A]^T = A^T (\operatorname{adj} A)^T = A^T \operatorname{adj}(A^T) = (\det A^T) I = (\det A) I.$ (3)
Transposing once more gives $(\operatorname{adj} A)A = (\det A) I$.

1.34 By (1.59), $A \operatorname{adj} A = (\operatorname{adj} A) A = (\det A) I$, so if $A$ is nonsingular, then the inverse of $A$ is $\operatorname{adj} A / \det A$. Conversely, if $A$ is invertible, then multiplying both sides of this equation by $A^{-1}$ gives $\operatorname{adj} A = (\det A) A^{-1}$, which implies $A$ is nonsingular (because the adjugate of an invertible matrix cannot vanish identically). Next, suppose $Av = 0$. If $A$ were invertible, then multiplying both sides of this equation by $A^{-1}$ would give $v = 0$; conversely, if $A$ is not invertible then its rank is less than the dimension of the space, so its kernel is nontrivial. So a nontrivial solution $v$ exists if and only if $A$ is not invertible, which holds if and only if $\det A = 0$.

1.35 $A$ is nonsingular, so $x = A^{-1}b = \frac{1}{\det A}(\operatorname{adj} A)b$. But expanding by the $i$th column gives
$\det A^{(i)} = \sum_j b_j \tilde{A}_{ji} = \sum_j (\operatorname{adj} A)_{ij} b_j,$
and therefore $x_i = \det A^{(i)} / \det A$.

1.36 From (1.57),
$\frac{\partial}{\partial A_{12}} \det A = \frac{\partial}{\partial A_{12}} \big(A_{11}\tilde{A}_{11} + A_{12}\tilde{A}_{12} + \cdots\big) = \tilde{A}_{12},$
because $A_{12}$ only appears in the second term. A similar argument shows that, in general,
$\frac{\partial}{\partial A_{ij}} \det A = \tilde{A}_{ij}.$
But from (1.59), $\operatorname{adj} A = (\det A) A^{-1}$, so $\tilde{A}_{ij} = (\operatorname{adj} A)_{ji} = (\det A)(A^{-1})_{ji}$.

1.37 a. If $T$ is an automorphism then it is surjective. Hence its rank equals $\dim V$.
b. If $T$ is an automorphism then it is invertible. Suppose $T^{-1}$ is represented by the matrix $S$. Then $I = TT^{-1}$ is represented by the matrix $TS$. But in any basis, the identity automorphism $I$ is represented by the identity matrix $I$, so $TS = I$, which shows that the matrix $T$ is invertible, and hence nonsingular.

1.38 a. Suppose $\{v_i\}$ is an orthonormal basis. Then $g(Rv_i, Rv_j) = g(v_i, v_j) = \delta_{ij}$, whence we see that $\{Rv_i\}$ is again orthonormal. Conversely, if $\{Tv_i\}$ is orthonormal, then $g(Tv_i, Tv_j) = \delta_{ij} = g(v_i, v_j)$. If $v = \sum_i a_i v_i$ and $w = \sum_j b_j v_j$ then
$g(Tv, Tw) = \sum_{ij} a_i b_j\, g(Tv_i, Tv_j) = \sum_{ij} a_i b_j\, g(v_i, v_j) = g(v, w),$
so $T$ is orthogonal.
b. By orthogonality of $R$, for any $v, w \in V$, $g(v, w) = g(Rv, Rw) = g(R^\dagger R v, w)$. It follows that $R^\dagger R = I$, where $I$ is the identity map. (Just let $v$ and $w$ run through all the basis elements.) By the discussion following Exercise 1.26, $R^\dagger$ is represented by $R^T$, so $R^T R = I$. As a left inverse must also be a right inverse, $RR^T = I$. Tracing the steps backwards yields the converse.
c. We have $I = R^T R$, so by Exercise 1.31 and Equation (2.54), $1 = \det R^T \det R = (\det R)^2$.
d. Let $R$ be orthogonal, so that $R^T R = I$. In components, $\sum_k R_{ki} R_{kj} = \delta_{ij}$. A priori this looks like $n^2$ conditions (the number of entries in the identity matrix), but $\delta_{ij}$ is symmetric, so the independent conditions arise from those pairs $(i, j)$ for which $i \le j$. To count these we observe that there are $n$ pairs $(i, j)$ with $i = j$, and $\binom{n}{2} = n(n-1)/2$ pairs with $i < j$. Adding these together gives $n(n+1)/2$ constraints. Therefore the number of independent parameters is $n^2 - n(n+1)/2 = n(n-1)/2$.

1.39 From (2.54) we get $1 = \det I = \det(AA^{-1}) = (\det A)(\det A^{-1})$, so $\det A^{-1} = (\det A)^{-1}$.

1.40 In our shorthand notation we can write
$Ae_j = \sum_i e_i A_{ij} \;\Rightarrow\; Ae = eA,$ (1)
and similarly,
$Ae'_j = \sum_i e'_i A'_{ij} \;\Rightarrow\; Ae' = e'A'.$ (2)
Substituting $e' = eS$ into (2) we get $AeS = eSA' \Rightarrow Ae = eSA'S^{-1}$, so comparing with (1) (and using the fact that $e$ is a basis) gives $A = SA'S^{-1}$, or $A' = S^{-1}AS$.
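The identities of 1.33 and 1.35 invite the same numerical treatment: build the adjugate from cofactors and compare. A sketch under the definitions above (the helper `adjugate` is our illustrative construction, not a function from the manual or from NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

def adjugate(A):
    """adj(A) = transpose of the cofactor matrix: adj(A)_{jk} = cofactor_{kj}."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)  # A(i|j)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T

# 1.33: A adj(A) = (det A) I
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(4)))  # True

# 1.35 (Cramer's rule): x_i = det A^(i) / det A, A^(i) = A with column i -> b.
b = rng.standard_normal(4)
x = np.array([np.linalg.det(np.column_stack([A[:, :i], b, A[:, i + 1:]]))
              for i in range(4)]) / np.linalg.det(A)
print(np.allclose(A @ x, b))  # True
```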
1.41 Assume $A$ has $n$ linearly independent eigenvectors $\{v_1, v_2, \dots, v_n\}$ with corresponding eigenvalues $\{\lambda_1, \lambda_2, \dots, \lambda_n\}$, and let $S$ be a matrix whose columns are the vectors $v_i$, $i = 1, \dots, n$. Then $S$ is clearly nonsingular (because its rank is maximal), and multiplication reveals that $AS = S\Lambda$, where $\Lambda$ is the diagonal matrix $\operatorname{diag}(\lambda_1, \dots, \lambda_n)$ with the eigenvalues of $A$ along the diagonal. It follows that $S^{-1}AS = \Lambda$. Conversely, if there exists a nonsingular matrix $S$ such that $S^{-1}AS = \Lambda$, then as $AS = S\Lambda$, the columns of $S$ are the eigenvectors of $A$ (which are linearly independent because $S$ is nonsingular), and the diagonal elements of $\Lambda$ are the eigenvalues of $A$.

1.42 The equation $Av = \lambda v$ holds if and only if $(A - \lambda I)v = 0$, which has a nontrivial solution for $v$ if and only if $A - \lambda I$ is singular, and this holds if and only if $\det(A - \lambda I) = 0$. So the roots of the characteristic polynomial are the eigenvalues of $A$.

1.43 Let $p_A(\lambda) = \det(A - \lambda I)$ be the characteristic polynomial of $A$. Then
$p_{S^{-1}AS}(\lambda) = \det(S^{-1}AS - \lambda I) = \det\big(S^{-1}(A - \lambda I)S\big) = (\det S)^{-1} p_A(\lambda) \det S = p_A(\lambda).$
It follows that the eigenvalues of $A$ are similarity invariants.

1.44 Let $p_A(\lambda)$ be the characteristic polynomial of $A$. Then we can write
$p_A(\lambda) = (-1)^n (\lambda - \mu_1)(\lambda - \mu_2) \cdots (\lambda - \mu_n),$
where the roots (eigenvalues) $\mu_i$ are not necessarily distinct. By expanding out the product we see that the constant term in this polynomial is the product of the eigenvalues, but the constant term is also $p_A(0) = \det A$. Again by expanding, we see that the coefficient of the term of order $\lambda^{n-1}$ is the sum of the eigenvalues times $(-1)^{n-1}$. Now consider $\det(A - \lambda I)$. Of all the terms in the Laplace expansion, only one contains $n - 1$ powers of $\lambda$, namely the product of all the diagonal elements. (In order to contain $n - 1$ powers of $\lambda$ the term must contain at least $n - 1$ diagonal elements, which forces it to contain the last diagonal element as well.) But the product of all the diagonal elements is
$(A_{11} - \lambda)(A_{22} - \lambda) \cdots (A_{nn} - \lambda) = (-1)^n \lambda^n + (-1)^{n-1}(A_{11} + A_{22} + \cdots + A_{nn})\lambda^{n-1} + \cdots,$
so the coefficient of $\lambda^{n-1}$ in $\det(A - \lambda I)$ is $(-1)^{n-1}\operatorname{tr} A$. Comparing the two expansions shows that the trace of $A$ is the sum of the eigenvalues, just as $\det A$ is their product.
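Exercises 1.41 and 1.44 can likewise be checked with NumPy's eigendecomposition. The sketch below assumes a generic random matrix, which is diagonalizable with probability one; the size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
evals, S = np.linalg.eig(A)   # columns of S are eigenvectors of A

# 1.41: S^{-1} A S is diagonal, with the eigenvalues along the diagonal.
print(np.allclose(np.linalg.inv(S) @ A @ S, np.diag(evals)))  # True

# 1.44: det A is the product, and tr A the sum, of the eigenvalues.
print(np.isclose(np.linalg.det(A), np.prod(evals)))  # True
print(np.isclose(np.trace(A), np.sum(evals)))        # True
```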


Solution Manual
for
Manifolds, Tensors, and Forms

Paul Renteln
Department of Physics
California State University
San Bernardino, CA 92407
and
Department of Mathematics
California Institute of Technology
Pasadena, CA 91125
prenteln@csusb.edu

Contents

1 Linear algebra
2 Multilinear algebra
3 Differentiation on manifolds
4 Homotopy and de Rham cohomology
5 Elementary homology theory
6 Integration on manifolds
7 Vector bundles
8 Geometric manifolds
9 The degree of a smooth map
Appendix D Riemann normal coordinates
Appendix F Frobenius' theorem
Appendix G The topology of electrical circuits
Appendix H Intrinsic and extrinsic curvature

1 Linear algebra




1.1 We have
$0 = c_1(1, 1) + c_2(2, 1) = (c_1 + 2c_2,\; c_1 + c_2)$
$\;\Rightarrow\; c_2 = -c_1 \;\Rightarrow\; c_1 - 2c_1 = 0 \;\Rightarrow\; c_1 = 0 \;\Rightarrow\; c_2 = 0,$
so $(1, 1)$ and $(2, 1)$ are linearly independent. On the other hand,
$0 = c_1(1, 1) + c_2(2, 2) = (c_1 + 2c_2,\; c_1 + 2c_2)$
can be solved by choosing $c_1 = 2$ and $c_2 = -1$, so $(1, 1)$ and $(2, 2)$ are linearly dependent (because $c_1$ and $c_2$ need not both vanish).
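Both cases of Exercise 1.1 can be read off from the rank of the matrix whose columns are the candidate vectors; a one-line check each (NumPy here is just a convenient calculator):

```python
import numpy as np

# Columns (1,1) and (2,1): rank 2, so the pair is linearly independent.
print(np.linalg.matrix_rank(np.array([[1., 2.], [1., 1.]])))  # 2

# Columns (1,1) and (2,2): rank 1, so the pair is linearly dependent.
print(np.linalg.matrix_rank(np.array([[1., 2.], [1., 2.]])))  # 1
```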
1.2 Subtracting gives
$0 = \sum_i v_i e_i - \sum_i v_i' e_i = \sum_i (v_i - v_i') e_i.$
But the $e_i$'s are a basis for $V$, so they are linearly independent, which implies $v_i - v_i' = 0$.
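In coordinates, the uniqueness of components in Exercise 1.2 is the uniqueness of the solution of $Ba = v$ for invertible $B$. A small sketch reusing the independent pair from Exercise 1.1 (the target vector $v$ is an arbitrary choice):

```python
import numpy as np

# Basis vectors (1,1) and (2,1) as the columns of B.
B = np.array([[1., 2.],
              [1., 1.]])
v = np.array([3., 5.])

a = np.linalg.solve(B, v)         # the unique component vector
print(a, np.allclose(B @ a, v))   # components, and confirmation that B a = v
```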
1.3 Let $V = U \oplus W$, and let $E := \{e_i\}_{i=1}^{n}$ be a basis for $U$ and $F := \{f_j\}_{j=1}^{m}$ a basis for $W$. Define a collection of vectors $G := \{g_k\}_{k=1}^{n+m}$, where $g_i = e_i$ for $1 \le i \le n$ and $g_{n+i} = f_i$ for $1 \le i \le m$. Then the claim follows if we can show $G$ is a basis for $V$. To that end, assume
$0 = \sum_{i=1}^{n+m} c_i g_i = \sum_{i=1}^{n} c_i e_i + \sum_{i=1}^{m} c_{n+i} f_i.$
The first sum in the rightmost expression lives in $U$ and the second sum lives in $W$, so by the uniqueness property of direct sums, each sum must vanish by itself. But then, by the linear independence of $E$ and $F$, all the constants $c_i$ must vanish. Therefore $G$ is linearly independent. Moreover, every vector $v \in V$ is of the form $v = u + w$ for some $u \in U$ and $w \in W$, each of which is a linear combination of the elements of $E$ and $F$, respectively, so $G$ spans $V$.
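The dimension count behind Exercise 1.3 (a basis of $U$ together with a basis of $W$ is a basis of $U \oplus W$) can be illustrated with a concrete splitting of $\mathbb{R}^3$; the particular subspaces chosen here are arbitrary:

```python
import numpy as np

U = np.array([[1., 0., 0.],
              [0., 1., 0.]]).T   # basis E of U as columns (3 x 2)
W = np.array([[0., 0., 1.]]).T   # basis F of W as a column (3 x 1)

G = np.hstack([U, W])            # the combined collection G
print(np.linalg.matrix_rank(G))  # 3 = 2 + 1: G is a basis of R^3
```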
Veelgestelde vragen