Midterm - Linear Algebra
Marrit
March 29, 2025
Chapter 1: Matrices and Systems of Equations
1.1: Systems of Linear Equations
Linear system: a system of m equations in n unknowns. Its augmented matrix has m rows and n+1 columns: the first n columns hold the coefficients of the variables, and the last column holds the constants.
(In)consistent: a system is inconsistent if it has no solutions, and consistent if it has at least one solution (a solution set). A consistent system has either exactly one solution or infinitely many.
A system is consistent if:
• Ax = b is consistent iff b can be written as a linear combination of the column vectors of A
• It can be reduced to strict triangular form
It is a homogeneous system if Ax = 0.
Equivalent Systems: two systems involving the same variables are equivalent if they have the same solution set.
Possible row operations:
• Interchange the order of two equations (I)
• Multiply an equation by a non-zero real number (II)
• Add a multiple of one equation to another (III)
Strict triangular form: in the kth equation, the coefficients of the first k−1 variables are all zero and the coefficient of x_k is non-zero (i.e. first row: first variable non-zero; fourth row: fourth variable non-zero). Such systems are easy to solve by back substitution. If the system has no unique solution, it cannot be reduced to this form.
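To make back substitution concrete, here is a minimal Python sketch (the triangular matrix and right-hand side are made-up examples): solve the last equation first, then substitute upward.

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U with non-zero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the contributions of the already-solved variables,
        # then divide by the pivot coefficient.
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
b = np.array([7.0, 5.0, 8.0])
print(back_substitute(U, b))  # [2. 1. 2.], same as np.linalg.solve(U, b)
```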
Square means that #rows = #columns
Coefficient matrix: the matrix of coefficients without the right-hand side b (including b gives the augmented matrix).
1.2: Row Echelon Form
Requirements to be in row echelon form:
• The first non-zero entry in each non-zero row is 1
• Each non-zero row after the first has more leading zeros than the row above it
• All zero rows are collected at the bottom
NOTE: Reduced Row Echelon Form: the leading 1 in each non-zero row is the only non-zero entry in its column. Reducing to this form is called Gauss-Jordan reduction.
Gaussian elimination: process of using row operations to transform a linear system into an augmented matrix in row
echelon form.
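As an illustration, sympy's rref() performs Gauss-Jordan reduction on an augmented matrix; the system below is a made-up example.

```python
from sympy import Matrix

# Augmented matrix (A | b) of a small example system
aug = Matrix([[1, 2, 1, 3],
              [2, 5, 1, 4],
              [1, 1, 2, 5]])
R, pivots = aug.rref()   # reduced row echelon form + pivot column indices
print(R)
print(pivots)
```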
Overdetermined: more equations than unknowns (note: usually inconsistent, but not always). Underdetermined: fewer equations than unknowns; either inconsistent or infinitely many solutions.
A row of the form (0 0 0 0 | 1) means the system is inconsistent.
Homogeneous systems: a system of equations is homogeneous if the constants on the RHS are all zero. These are always consistent, since the zero vector (all variables zero) is a solution. The trivial solution can be the only solution only if there are at least as many equations as unknowns; with fewer equations than unknowns there are free variables, hence nontrivial solutions.
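A quick sketch of this with sympy (the 2×3 matrix is an assumed example): with fewer equations than unknowns, nullspace() returns non-zero basis vectors, i.e. nontrivial solutions of Ax = 0.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # 2 equations, 3 unknowns
print(A.nullspace())      # non-empty basis: nontrivial solutions exist
```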
1.3: Matrix Arithmetic
Scalars: just numbers. a_ij is the entry of the matrix in row i and column j; an m×n matrix has a_mn as its bottom-right entry.
Row and column vector: a row vector is 1×n, a column vector is n×1.
Equal: two matrices are equal if they are of the same size and each entry equals the corresponding entry (a_ij = b_ij).
Scalar multiplication: the scalar multiplies each entry; the order (cA or Ac) does not matter.
Matrix addition: add each entry to the corresponding entry; subtraction works the same way. Adding a zero matrix has no effect, and the order of addition/subtraction does not matter.
Multiplying a sum distributes over the terms: A(B + C) = AB + AC; the same holds for scalar multiplication of a sum, c(A + B) = cA + cB.
Matrix multiplication: each entry of the product is the sum of the products of a row of the first matrix with the corresponding column of the second, $(AB)_{ij} = \sum_k a_{ik}b_{kj}$. Order matters: AB ≠ BA in general. The number of columns of the first matrix must equal the number of rows of the second; multiplying an m×n by an n×r matrix gives an m×r matrix. Although the order of the factors matters, matrix multiplication is associative, so a product of several matrices need not be evaluated strictly left to right: (AB)C = A(BC).
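A short numpy sketch of the shape rule and associativity (all matrices here are made-up):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # 2 x 3
B = np.arange(12).reshape(3, 4)   # 3 x 4
C = np.arange(8).reshape(4, 2)    # 4 x 2

print((A @ B).shape)                             # (2, 4): m x n times n x r is m x r
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True: associative
```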
Linear combination: the sum of vectors multiplied with some scalar (c1 a1 + c2 a2 + ... + cn an ).
Transpose: the transpose B = A^T is defined by b_ji = a_ij; in other words, each row becomes a column.
Symmetric: an n×n matrix is symmetric if its transpose equals the matrix itself, A^T = A.
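A numeric check of symmetry, with an assumed example matrix:

```python
import numpy as np

S = np.array([[1, 2, 3],
              [2, 5, 4],
              [3, 4, 6]])
print(np.array_equal(S, S.T))  # True: S equals its transpose, so it is symmetric
```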
1.4: Matrix Algebra
Identity matrix: the matrix with 1's on the diagonal and 0's elsewhere; multiplying by it does not change a matrix (AI = IA = A). Can be written as: I = (e1, e2, ..., en).
Matrix inversion: a matrix A is nonsingular or invertible if there exists a B with AB = BA = I; B is then the multiplicative inverse of A, written A^{-1}. Only square matrices can have inverses. The inverse of a product is the reverse-order product of the inverses: (AB)^{-1} = B^{-1}A^{-1}.
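A spot check of (AB)^{-1} = B^{-1}A^{-1} in numpy; the two nonsingular matrices are assumed examples.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order
print(np.allclose(lhs, rhs))                # True
```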
Properties of transposes:
• (A^T)^T = A (double transpose = no transpose)
• (cA)^T = cA^T (order of scalar multiplication and transposing does not matter)
• (A + B)^T = A^T + B^T (transpose of a sum = sum of the transposes)
• (AB)^T = B^T A^T (transpose of a product = reverse-order product of the transposes)
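And the analogous spot check for the last property, (AB)^T = B^T A^T, again with made-up matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])
print(np.allclose((A @ B).T, B.T @ A.T))  # True
```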
1.5: Elementary Matrices
One can manipulate a matrix equation by multiplying both sides on the left by the same nonsingular matrix (the newly applied factor always ends up leftmost, since it is the last product performed). Multiplying by the inverse undoes it again. This is how one solves a system: Ax = b ⇒ x = A^{-1}b.
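In code one would typically use a solver rather than forming A^{-1} explicitly; both give the same x here (assumed example system):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)                    # solves Ax = b directly
print(np.allclose(x, np.linalg.inv(A) @ b))  # True: same as x = A^-1 b
```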
A sequence of elementary matrices can likewise be applied to both sides of the equation to transform a system.
Each type of elementary operation corresponds to an elementary matrix, obtained by applying that operation to I. Type I is interchanging rows, where in the elementary matrix those rows are also exchanged:
$$
\text{Type I for } R_1 \leftrightarrow R_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad
\text{Type II for } 3R_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix} \quad
\text{Type III for } R_1 + 3R_3 = \begin{pmatrix} 1 & 0 & 3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{1}
$$
The inverse of an elementary matrix is the same type of elementary matrix.
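A sketch of building the three elementary matrices from equation (1) by applying the row operation to I, and using left multiplication to perform the operation:

```python
import numpy as np

I = np.eye(3)
E1 = I[[1, 0, 2]]               # Type I: interchange R1 and R2
E2 = np.diag([1.0, 1.0, 3.0])   # Type II: multiply R3 by 3
E3 = I.copy()
E3[0, 2] = 3.0                  # Type III: R1 + 3 * R3

A = np.arange(9.0).reshape(3, 3)
print(E1 @ A)              # A with rows 1 and 2 swapped
print(np.linalg.inv(E2))   # same type: multiplies R3 by 1/3
```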
Row Equivalent: A is row equivalent to B when there exists a finite sequence of elementary matrices E_1, ..., E_k such that E_k · · · E_1 A = B. Corollaries: if A is row equivalent to B, then B is row equivalent to A; and if A and B are both row equivalent to some C, then A is row equivalent to B.
Nonsingularity: A is nonsingular iff Ax = 0 has only the trivial solution, the zero vector (otherwise row reduction produces a zero row), iff A is row equivalent to I, iff the determinant is non-zero.
Unique solution: Ax = b has a unique solution iff A is nonsingular, in which case A^{-1} exists and x = A^{-1}b. You can compute the inverse via (A|I) → (I|A^{-1}).
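A sketch of (A|I) → (I|A^{-1}) using sympy; the 3×3 matrix is an assumed nonsingular example.

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [1, 2, 1],
            [0, 1, 2]])
R, _ = Matrix.hstack(A, eye(3)).rref()  # row reduce the augmented block (A | I)
A_inv = R[:, 3:]                        # right half is A^-1
print(A_inv == A.inv())                 # True
```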
Upper and lower triangular: a matrix is upper triangular if all entries below the diagonal are zero, and lower triangular if all entries above the diagonal are zero.
Diagonal: all entries off the diagonal are zero.
Triangular Factorization: if A can be reduced to strictly triangular form U using only Type III row operations (adding multiples of one row to another), then A = LU, where L, the product of the inverses of the elementary matrices used, is triangular in the opposite direction with a diagonal of 1's (unit upper/lower triangular).
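scipy's lu computes this factorization; it also allows row swaps, recorded in a permutation P, so it returns A = PLU. The matrix below is a made-up example.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))  # True
print(np.diag(L))                 # [1. 1. 1.]: L is unit lower triangular
print(U)                          # upper triangular
```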
1.6: Partitioned Matrices
Inner or scalar product: corresponding components multiplied and the products summed: x^T y = Σ x_i y_i (a scalar).
Outer product expansion: xy^T, where each component of the column vector multiplies each component of the row vector, each product becoming an entry of the matrix; an n×1 times 1×n product results in an n×n matrix.
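Both products in numpy, with assumed vectors:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print(np.inner(x, y))   # 32 = 1*4 + 2*5 + 3*6 (a scalar)
print(np.outer(x, y))   # 3 x 3 matrix with entries x_i * y_j
```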
Chapter 2: Determinants
2.1: Determinant of a matrix
If a determinant is zero, then there is no inverse.
Different cases:
Determinant of a 1×1: just the entry itself.
Determinant of a 2×2: a_11 a_22 − a_12 a_21.
Determinant of a 3×3 or larger: cofactor expansion along row i, $\det(A) = \sum_{j=1}^{n} a_{ij}(-1)^{i+j} \det(M_{ij})$ = entry × cofactor = entry × (±1) × det(minor).
Another method: $A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$, from which it follows that det(A) cannot be zero for nonsingular matrices.
Implications of determinants:
• Any row or column can be used for the cofactor expansion
• Transposing a matrix has no influence on the determinant, det(A^T) = det(A) (ties into the previous point)
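A recursive sketch of the cofactor expansion along the first row (fine for small matrices; the expansion is O(n!), so it is illustrative rather than practical). The test matrix is made up; np.linalg.det serves as a cross-check.

```python
import numpy as np

def det_cofactor(A):
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                 # 1x1 case: just the entry
    total = 0.0
    for j in range(n):
        # Minor M_1j: delete the first row and column j (0-indexed)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # (-1)**j matches (-1)^(1+j) once both indices are 1-indexed
        total += A[0, j] * (-1) ** j * det_cofactor(minor)
    return total

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(det_cofactor(A))    # -3.0
print(np.linalg.det(A))   # -3.0 (up to rounding)
```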