Linear Algebra and Optimization for Machine Learning by Charu C. Aggarwal
Chapters 1-11
Contents
1 Linear Algebra and Optimization: An Introduction 1
2 Linear Transformations and Linear Systems 17
3 Diagonalizable Matrices and Eigenvectors 35
4 Optimization Basics: A Machine Learning View 47
5 Optimization Challenges and Advanced Solutions 57
6 Lagrangian Relaxation and Duality 63
7 Singular Value Decomposition 71
8 Matrix Factorization 81
9 The Linear Algebra of Similarity 89
10 The Linear Algebra of Graphs 95
11 Optimization in Computational Graphs 101
Chapter 1
Linear Algebra and Optimization: An Introduction
1. For any two vectors x and y, which are each of length a, show that (i) x - y is
orthogonal to x + y, and (ii) the dot product of x - 3y and x + 3y is negative.
(i) The dot product (x - y) · (x + y) is simply x · x - y · y by the distributive property
of the dot product. The dot product of a vector with itself is its squared length. Since
both vectors are of the same length, the result is a^2 - a^2 = 0. (ii) In the second case,
one can use a similar argument to show that the result is a^2 - 9a^2 = -8a^2, which is
negative.
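As an added numerical sanity check (not part of the original solution; it assumes Python
with numpy and the arbitrary choice a = 3), both claims can be verified directly:

    import numpy as np

    rng = np.random.default_rng(0)
    a = 3.0                                    # the common length of both vectors
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    x = a * x / np.linalg.norm(x)              # rescale x to length a
    y = a * y / np.linalg.norm(y)              # rescale y to length a

    print(np.isclose(np.dot(x - y, x + y), 0.0))   # True: x - y is orthogonal to x + y
    print(np.dot(x - 3 * y, x + 3 * y))            # a^2 - 9a^2 = -72, which is negative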
2. Consider a situation in which you have three matrices A, B, and C, of sizes 10 × 2,
2 × 10, and 10 × 10, respectively.
(a) Suppose you had to compute the matrix product ABC. From an efficiency perspective,
would it computationally make more sense to compute (AB)C or would it make more
sense to compute A(BC)?
(b) If you had to compute the matrix product CAB, would it make more sense to
compute (CA)B or C(AB)?
The main point is to keep the size of the intermediate matrix as small as possible
in order to reduce both computational and space requirements. In the case of ABC,
it makes sense to compute BC first. In the case of CAB it makes sense to compute
CA first. This type of associativity property is used frequently in machine learning in
order to reduce computational requirements.
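The operation counts can be tallied explicitly with a small helper (an added illustrative
sketch in Python; it assumes the standard cost model of p*q*r scalar multiplications for
a (p x q) times (q x r) product):

    def mult_cost(p, q, r):
        # Scalar multiplications needed to multiply a (p x q) matrix by a (q x r) matrix
        return p * q * r

    # A is 10 x 2, B is 2 x 10, C is 10 x 10
    print(mult_cost(10, 2, 10) + mult_cost(10, 10, 10))   # (AB)C: 200 + 1000 = 1200
    print(mult_cost(2, 10, 10) + mult_cost(10, 2, 10))    # A(BC): 200 + 200  = 400
    print(mult_cost(10, 10, 2) + mult_cost(10, 2, 10))    # (CA)B: 200 + 200  = 400
    print(mult_cost(10, 2, 10) + mult_cost(10, 10, 10))   # C(AB): 200 + 1000 = 1200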
3. Show that if a matrix A satisfies A = -A^T, then all the diagonal elements of the
matrix are 0.
Note that A + A^T = 0. However, this matrix also contains twice the diagonal elements
of A on its diagonal. Therefore, the diagonal elements of A must be 0.
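As a brief added illustration (any square matrix M works), a matrix of the required form
can be generated as M - M^T, and its diagonal is indeed zero:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M - M.T                                # satisfies A = -A^T by construction
    print(np.allclose(A, -A.T))                # True
    print(np.diag(A))                          # every diagonal entry is 0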
4. Show that if we have a matrix satisfying A = -A^T, then for any column vector x, we
have x^T Ax = 0.
Note that the transpose of the scalar x^T Ax remains unchanged. Therefore, we have
x^T Ax = (x^T Ax)^T = x^T A^T x = -x^T Ax. Therefore, we have 2 x^T Ax = 0.
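A similar added check (using the same M - M^T construction as above and an arbitrary x)
confirms that the quadratic form vanishes:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M - M.T                                # any such matrix satisfies A = -A^T
    x = rng.standard_normal(4)
    print(np.isclose(x @ A @ x, 0.0))          # True: the quadratic form is zero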
5. Show that if we have a matrix A, which can be written as A = DD^T for some matrix
D, then we have x^T Ax ≥ 0 for any column vector x.
The scalar x^T Ax can be written as x^T DD^T x = (D^T x)^T (D^T x) = ||D^T x||^2, which
is always nonnegative.
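A minimal numeric sketch (added for illustration; the matrix D and the vector x are
arbitrary, and numpy is assumed) confirming the identity and the nonnegativity:

    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.standard_normal((5, 3))
    A = D @ D.T                                # a matrix of the form A = D D^T
    x = rng.standard_normal(5)

    quadratic_form = x @ A @ x
    print(np.isclose(quadratic_form, np.linalg.norm(D.T @ x) ** 2))   # True
    print(quadratic_form >= 0)                                        # True: nonnegative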
6. Show that the matrix product AB remains unchanged if we scale the ith column of A
and the ith row of B by respective factors that are inverses of each other.
The idea is to express the matrix multiplication as the sum of outer products of the
columns of A and the rows of B:

AB = Σ_k A_k B_k

Here, A_k is the kth column of A and B_k is the kth row of B. Note that the expression on
the right does not change if we multiply A_i by α and divide B_i by α. Each component
of the sum remains unchanged, including the ith component, where the scaling factors
cancel each other out.
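The cancellation can be verified numerically as follows (an added sketch; the scaling
factor alpha = 2.5 and the index i = 1 are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))
    B = rng.standard_normal((3, 5))

    alpha, i = 2.5, 1                          # scale the ith column of A by alpha ...
    A2 = A.copy()
    B2 = B.copy()
    A2[:, i] *= alpha
    B2[i, :] /= alpha                          # ... and the ith row of B by 1/alpha
    print(np.allclose(A @ B, A2 @ B2))         # True: the product is unchanged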
7. Show that any matrix product AB can be expressed in the form A'ΔB', where A' is
a matrix in which the sum of the squares of the entries in each column is 1, B' is a
matrix in which the sum of the squares of the entries in each row is 1, and Δ is an
appropriately chosen diagonal matrix with nonnegative entries on the diagonal.
After expressing the matrix product as the sum of outer products, we can scale each
vector in the outer product to unit norm, while pulling out a scalar multiple for that
outer-product component. The matrices A' and B' contain these normalized vectors,
whereas Δ contains these scalar multiples. In other words, consider the case where
we have the product in the following form using the kth column A_k of A and the kth
row B_k of B:

AB = Σ_k A_k B_k

One can express this matrix product in the following form:

AB = Σ_k δ_kk (A_k / ||A_k||) (B_k / ||B_k||),   where δ_kk = ||A_k|| ||B_k||

We create a diagonal matrix Δ in which the kth diagonal entry is δ_kk, and then create
A' and B' as the column-normalized and row-normalized versions of A and B, respectively.
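The construction can be mirrored numerically (an added sketch with arbitrary A and B;
names such as A_prime and Delta are ad hoc):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))
    B = rng.standard_normal((3, 5))

    col_norms = np.linalg.norm(A, axis=0)      # ||A_k|| for each column k of A
    row_norms = np.linalg.norm(B, axis=1)      # ||B_k|| for each row k of B
    A_prime = A / col_norms                    # columns of A scaled to unit norm
    B_prime = B / row_norms[:, None]           # rows of B scaled to unit norm
    Delta = np.diag(col_norms * row_norms)     # delta_kk = ||A_k|| * ||B_k||

    print(np.allclose(A @ B, A_prime @ Delta @ B_prime))   # True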
8. Discuss how a permutation matrix A can be converted to the identity matrix using at
most d elementary row operations of a single type. Use this fact to express A as the
product of at most d elementary matrix operators.
Only row interchange operations are required to convert it to the identity matrix.
In particular, in the ith iteration, we interchange the ith row of A with whatever
row contains the ith row of the identity matrix. A permutation matrix will always
contain such a row. This matrix can be represented as the product of at most d
elementary row interchange operators by treating each interchange operation as a
matrix multiplication.
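A sketch of this procedure is shown below (an added illustration; the helper name and the
sample permutation are arbitrary). It records the interchange operators and verifies that
their product recovers the permutation matrix:

    import numpy as np
    from functools import reduce

    def interchange_operators(P):
        """Row-interchange elementary matrices E_1, ..., E_m (m <= d) with E_m ... E_1 P = I."""
        d = P.shape[0]
        work = P.copy()
        ops = []
        for i in range(d):
            j = int(np.argmax(work[:, i]))     # row of `work` currently holding the ith identity row
            if j != i:
                E = np.eye(d)
                E[[i, j]] = E[[j, i]]          # elementary operator that swaps rows i and j
                work = E @ work
                ops.append(E)
        return ops

    P = np.eye(5)[[2, 0, 4, 1, 3]]             # a sample 5 x 5 permutation matrix
    ops = interchange_operators(P)
    print(len(ops) <= 5)                       # True: at most d interchanges are needed
    print(np.allclose(reduce(np.matmul, reversed(ops), np.eye(5)) @ P, np.eye(5)))  # E_m ... E_1 P = I
    print(np.allclose(reduce(np.matmul, ops, np.eye(5)), P))  # P is the product E_1 E_2 ... E_m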
9. Suppose that you reorder all the columns of an invertible matrix A using some random
permutation, and you know A^{-1} for the original matrix. Show how you can (simply)
compute the inverse of the reordered matrix from A^{-1} without having to invert the
new matrix from scratch. Provide an argument in terms of elementary matrices.
All the rows of A^{-1} are interchanged using exactly the same permutation as the
columns of A are permuted. This is because if P is the permutation matrix that
creates AP, then P^T A^{-1} is the inverse of AP. However, left-multiplication by P^T
performs exactly the same reordering on the rows of A^{-1} as right-multiplication by P
performs on the columns of A.
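The argument can be checked numerically as follows (an added sketch; the permutation
[2, 0, 3, 1] is an arbitrary example):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    A_inv = np.linalg.inv(A)

    perm = [2, 0, 3, 1]                        # a sample column reordering
    P = np.eye(4)[:, perm]                     # permutation matrix for which A @ P = A[:, perm]
    AP = A @ P                                 # the matrix A with its columns reordered

    print(np.allclose(np.linalg.inv(AP), P.T @ A_inv))   # True: no fresh inversion is needed
    print(np.allclose(P.T @ A_inv, A_inv[perm, :]))      # True: rows of A_inv reordered identically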
10. Suppose that you have approximately factorized an n × d matrix D as D ≈ UV^T, where
U is an n × k matrix and V is a d × k matrix. Show how you can derive an infinite
number of alternative factorizations U'V'^T of D, which satisfy UV^T = U'V'^T.
Let P be any invertible matrix of size k × k. Then, we set U' = UP, and V' = V(P^{-1})^T.
It can be easily shown that UV^T = U'V'^T.
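A minimal numeric illustration (added; the sizes n, d, k are arbitrary, and a random P is
invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 6, 5, 2
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((d, k))
    D_approx = U @ V.T                         # the factorized approximation of D

    P = rng.standard_normal((k, k))            # a random k x k matrix, invertible with probability 1
    U_new = U @ P                              # U' = U P
    V_new = V @ np.linalg.inv(P).T             # V' = V (P^{-1})^T
    print(np.allclose(D_approx, U_new @ V_new.T))   # True: same matrix, different factorization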
11. Either prove each of the following statements or provide a counterexample:
(a) The order in which you apply two elementary row operations to a matrix does
not affect the final result.
(b) The order in which you apply an elementary row operation and an elementary
column operation does not affect the final result.
It is best to think of these problems in terms of elementary matrix operations.
(a) If you start with the matrix A, then the two successive row operations corresponding
to matrices E1 and E2 create the matrix E2E1A. Note that matrix multiplication is not
commutative and this is not the same as E1E2A. For example, rotation matrices do not
commute with scaling matrices. Scaling the first row by 2 followed by interchanging the
first and second rows creates a different result than the one obtained by reversing these
operations.
(b) In this case, if the row and column operators are Er and Ec, the final result is
ErAEc. Because of the associativity of matrix multiplication, (ErA)Ec and Er(AEc)
are the same. It follows that the order does not matter.
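Both parts can be illustrated concretely (an added sketch using a scaling operator and an
interchange operator of the kind mentioned above):

    import numpy as np

    A = np.arange(9, dtype=float).reshape(3, 3)

    E1 = np.eye(3)
    E1[0, 0] = 2.0                             # elementary operator: scale the first row by 2
    E2 = np.eye(3)[[1, 0, 2]]                  # elementary operator: interchange the first two rows
    print(np.allclose(E2 @ (E1 @ A), E1 @ (E2 @ A)))   # False: two row operations need not commute

    Er = E1                                    # a row operator (applied on the left)
    Ec = np.diag([1.0, 3.0, 1.0])              # a column operator (applied on the right)
    print(np.allclose((Er @ A) @ Ec, Er @ (A @ Ec)))   # True: row and column operations commute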
12. Discuss why some power of a permutation matrix is always the identity matrix.
There are only a finite number of permutations of a sequence. Therefore, after some
number of repeated applications of P, an earlier arrangement must repeat, which means
that P^a = P^b for some a < b. Since P is invertible, we can multiply both sides by
(P^a)^{-1} to obtain P^{b-a} = I.
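As an added illustration (the particular permutation is arbitrary), repeatedly multiplying
by P returns to the identity after a number of steps equal to the least common multiple of
its cycle lengths:

    import numpy as np

    P = np.eye(5)[[1, 2, 0, 4, 3]]             # a sample permutation matrix (cycle lengths 3 and 2)
    power = np.eye(5)
    for k in range(1, 100):
        power = power @ P
        if np.array_equal(power, np.eye(5)):
            print("P^{} = I".format(k))        # prints k = 6, the lcm of the cycle lengths
            break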
13. Consider the matrix polynomial Σ_{i=0}^{t} a_i A^i. A straightforward evaluation of this
polynomial will require O(t^2) matrix multiplications. Discuss how you can reduce the
number of multiplications to O(t) by rearranging the polynomial.
The matrix polynomial can be written as a_0 I + A(Σ_{i=1}^{t} a_i A^{i-1}). This can be
further expanded as follows:

a_0 I + A(Σ_{i=1}^{t} a_i A^{i-1}) = a_0 I + A(a_1 I + A(Σ_{i=2}^{t} a_i A^{i-2}))
                                   = a_0 I + A(a_1 I + A(a_2 I + A(Σ_{i=3}^{t} a_i A^{i-3})))

Using this type of expansion recursively, one can obtain the desired result.
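This rearrangement is the matrix form of Horner's rule; a minimal sketch (added here, with
arbitrary coefficients) that evaluates the polynomial with only t matrix multiplications:

    import numpy as np

    def matrix_poly_horner(coeffs, A):
        """Evaluate a_0 I + a_1 A + ... + a_t A^t with t matrix multiplications (Horner's rule)."""
        d = A.shape[0]
        result = coeffs[-1] * np.eye(d)
        for a in reversed(coeffs[:-1]):        # work inward: a_i I + A * (inner expression)
            result = a * np.eye(d) + A @ result
        return result

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    coeffs = [2.0, -1.0, 0.5, 3.0]             # a_0, a_1, a_2, a_3
    direct = sum(a * np.linalg.matrix_power(A, i) for i, a in enumerate(coeffs))
    print(np.allclose(matrix_poly_horner(coeffs, A), direct))   # True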