PM cheatsheet
Linear Algebra cheat sheet

Vectors

dot product: u · v = ||u|| · ||v|| · cos(φ)   (in 2D: u_x·v_x + u_y·v_y)

cross product: u × v = (u_y·v_z − u_z·v_y,  u_z·v_x − u_x·v_z,  u_x·v_y − u_y·v_x)

norms:
  ||x||_p := (Σ_{i=1..n} |x_i|^p)^(1/p)
  ||x||_1 := Σ_{i=1..n} |x_i|
  ||x||_∞ := max_i |x_i|

enclosed angle: cos φ = (u · v) / (||u|| · ||v||)
  in 2D: ||u|| · ||v|| = √((u_x² + u_y²)·(v_x² + v_y²))
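A quick NumPy sketch to sanity-check the vector formulas above (the example vectors u and v are made up, not from the sheet):

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([3.0, 0.0, 4.0])

    dot = u @ v                          # u . v = sum of componentwise products
    cross = np.cross(u, v)               # (u_y v_z - u_z v_y, u_z v_x - u_x v_z, u_x v_y - u_y v_x)
    norm1 = np.linalg.norm(u, 1)         # ||u||_1 = sum of |u_i|
    norm2 = np.linalg.norm(u)            # Euclidean norm ||u||_2
    norminf = np.linalg.norm(u, np.inf)  # ||u||_inf = max |u_i|

    # enclosed angle: cos(phi) = (u . v) / (||u|| ||v||)
    cos_phi = dot / (np.linalg.norm(u) * np.linalg.norm(v))
    phi = np.arccos(cos_phi)
    print(dot, cross, norm1, norm2, norminf, np.degrees(phi))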
eigenvalues, eigenvectors, eigenspace

1. Calculate the eigenvalues by solving det(A − λI) = 0.
2. Any vector x ≠ 0 that satisfies (A − λ_i·I)·x = 0 is an eigenvector for λ_i.
3. Eig_A(λ_i) = {x ∈ C^n : (A − λ_i·I)·x = 0} is the eigenspace for λ_i.

definiteness

Defined on n×n square matrices; for every eigenvalue λ ∈ σ(A):
  λ > 0 ⇔ positive definite
  λ ≥ 0 ⇔ positive semidefinite
  λ < 0 ⇔ negative definite
  λ ≤ 0 ⇔ negative semidefinite
  if none of these holds (positive and negative λ exist): indefinite
equivalent characterization: e.g. x^T·A·x > 0 for all x ≠ 0 ⇔ positive definite
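A small NumPy sketch of the eigenvalue recipe and the eigenvalue test for definiteness (the symmetric matrix A is a made-up example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # steps 1+2 numerically: eigenvalues and eigenvectors (columns of V) at once
    lam, V = np.linalg.eig(A)            # eigenvalues 3 and 1 (order may vary)

    # check (A - lambda_i I) x = 0 for the first eigenpair
    i = 0
    print(np.allclose((A - lam[i] * np.eye(2)) @ V[:, i], 0))   # True

    # definiteness via the sign of the eigenvalues (eigvalsh is for symmetric/Hermitian A)
    ev = np.linalg.eigvalsh(A)
    print(np.all(ev > 0))                # True: both eigenvalues positive -> positive definite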
Matrices

Let A be a matrix and f(x) = Ax.

basic operations

transpose: [A^T]_ij = [A]_ji   ("mirror over the main diagonal")

conjugate transpose / adjoint: A* = conj(A)^T = conj(A^T)
  ("transpose and complex-conjugate all entries"; same as the transpose for real matrices)

multiply: A_{N×M} · B_{M×K} = C_{N×K}   (inner dimensions must match)

invert (2×2):
  [a b; c d]^{-1} = 1/(a·d − b·c) · [d −b; −c a] = 1/det(A) · [d −b; −c a]

norm: ||A||_p = max_{x≠0} ||A·x||_p / ||x||_p, induced by the vector p-norm
  ||A||_2 = √(λ_max(A^T·A))
  ||A||_1 = max_j Σ_{i=1..n} |a_ij|   (maximum column sum)
  ||A||_∞ = max_i Σ_{j=1..n} |a_ij|   (maximum row sum)

condition: cond(A) = ||A|| · ||A^{-1}||
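A NumPy sketch checking the 2×2 inversion formula, the induced norms, and the condition number (A is an arbitrary invertible example matrix):

    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])

    # 2x2 inverse by the cheat-sheet formula: 1/(ad - bc) * [[d, -b], [-c, a]]
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    A_inv_formula = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
    print(np.allclose(A_inv_formula, np.linalg.inv(A)))   # True

    # induced matrix norms
    print(np.linalg.norm(A, 1))        # maximum column sum
    print(np.linalg.norm(A, np.inf))   # maximum row sum
    print(np.linalg.norm(A, 2))        # sqrt(lambda_max(A^T A)) = largest singular value

    # condition number cond(A) = ||A|| * ||A^{-1}||, here in the 2-norm
    print(np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))
    print(np.linalg.cond(A, 2))        # same value via NumPy's helper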
rank

rank(A) = rank(f) = dim(im(f))
  = number of linearly independent column vectors of A
  = number of non-zero rows of A after applying Gaussian elimination

kernel

kern(A) = {x ∈ R^n : A·x = 0}   (the set of vectors that map to 0)
For nonsingular A the kernel contains only the zero vector, so dim(kern(A)) = 0.

trace

Defined on n×n square matrices: tr(A) = a_11 + a_22 + ... + a_nn
(the sum of the elements on the main diagonal).

span

Let v_1, ..., v_r be the column vectors of A. Then
span(A) = {λ_1·v_1 + ... + λ_r·v_r | λ_1, ..., λ_r ∈ R}.

spectrum

σ(A) = {λ ∈ C : λ is an eigenvalue of A}
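A NumPy sketch for rank, kernel, and trace; the null space is read off from the SVD, and the rank-deficient matrix A is a made-up example (its third column is the sum of the first two):

    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])

    print(np.linalg.matrix_rank(A))        # 2, not 3

    # kernel (null space): right singular vectors belonging to (near-)zero singular values
    U, s, Vt = np.linalg.svd(A)
    tol = 1e-10
    kernel_basis = Vt[s <= tol].T          # columns span kern(A)
    print(kernel_basis.shape[1])           # dim(kern(A)) = 1 here
    print(np.allclose(A @ kernel_basis, 0))  # every kernel vector maps to 0

    # trace: sum of the diagonal entries
    print(np.trace(A))                     # 1 + 1 + 2 = 4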
determinants

det(A) = Σ_{σ∈S_n} sgn(σ) · Π_{i=1..n} A_{i,σ(i)}   (Leibniz formula)

For 3×3 matrices the Sarrus rule gives
  det(A) = a_11·a_22·a_33 + a_12·a_23·a_31 + a_13·a_21·a_32
         − a_13·a_22·a_31 − a_11·a_23·a_32 − a_12·a_21·a_33

arithmetic rules: det(A·B) = det(A)·det(B),   det(A^{-1}) = det(A)^{-1}

block matrices: let B, C be submatrices and A, D square submatrices. Then
  det [A 0; C D] = det [A B; 0 D] = det(A)·det(D)
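A NumPy sketch verifying the Sarrus rule, the product rule, and the block-triangular determinant formula on made-up matrices:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 4.0, 5.0],
                  [1.0, 0.0, 6.0]])

    # Sarrus rule for 3x3, spelled out term by term
    sarrus = (A[0,0]*A[1,1]*A[2,2] + A[0,1]*A[1,2]*A[2,0] + A[0,2]*A[1,0]*A[2,1]
              - A[0,2]*A[1,1]*A[2,0] - A[0,0]*A[1,2]*A[2,1] - A[0,1]*A[1,0]*A[2,2])
    print(np.isclose(sarrus, np.linalg.det(A)))                                   # True

    # product rule det(A B) = det(A) det(B)
    B = np.array([[2.0, 0.0, 1.0],
                  [1.0, 3.0, 0.0],
                  [0.0, 1.0, 1.0]])
    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

    # block triangular: det [[A, 0], [C, D]] = det(A) det(D)
    C = np.ones((3, 3))
    D = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 4.0],
                  [2.0, 0.0, 1.0]])
    M = np.block([[A, np.zeros((3, 3))], [C, D]])
    print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D)))      # True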
properties

square: n×n
symmetric: A = A^T
diagonal: zero everywhere except the entries a_kk  ⇒ triangular (eigenvalues on the main diagonal)
orthogonal: A^T = A^{-1}  ⇒ normal and diagonalizable
unitary: A* = A^{-1}   (complex counterpart of orthogonal)
idempotent: a square matrix A for which A·A = A
Hermitian: a square matrix A with A* = A (equal to its conjugate transpose / adjoint)
  a real matrix is Hermitian iff it is symmetric
  ⇒ ℑ(det(A)) = 0   (the determinant is real)
diagonally dominant: ∀i: |a_ii| > Σ_{j≠i} |a_ij| (strictly)  ⇒ nonsingular
triangular: right (upper) triangular means (wlog n = 3)
  [a_11 a_12 a_13]
  [  0  a_22 a_23]
  [  0    0  a_33]
  ⇒ eigenvalues on the main diagonal

nonsingular

A_{n×n} is nonsingular = invertible = regular iff (equivalently):
  there is a matrix B := A^{-1} such that AB = I = BA
  det(A) ≠ 0
  Ax = b has exactly one solution for each right-hand side b
  the column vectors of A are linearly independent
  rank(A) = n
  f(x) = Ax is bijective
consequences:
  det(A^{-1}) = det(A)^{-1}
  (A^{-1})^{-1} = A
  (A^T)^{-1} = (A^{-1})^T

diagonalizable

A_{n×n} can be diagonalized iff it has n linearly independent eigenvectors
(a sufficient condition: all n eigenvalues are distinct).
Then there is an invertible T such that
  D := T^{-1}·A·T = diag(λ_1, ..., λ_n)
  A = T·D·T^{-1} and A·T = T·D
λ_1, ..., λ_n are the eigenvalues of A, and T can be built from the eigenvectors of A (it is nonsingular).
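A NumPy sketch of the nonsingularity checks and of the diagonalization A = T·D·T^{-1} (the matrix A is a made-up example with distinct eigenvalues 5 and 2):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # nonsingular checks: det != 0 and full rank
    print(not np.isclose(np.linalg.det(A), 0))        # True
    print(np.linalg.matrix_rank(A) == A.shape[0])     # True

    # diagonalization: columns of T are eigenvectors, D holds the eigenvalues
    lam, T = np.linalg.eig(A)
    D = np.diag(lam)
    print(np.allclose(A, T @ D @ np.linalg.inv(T)))   # A = T D T^{-1}
    print(np.allclose(A @ T, T @ D))                  # A T = T D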