Clustering: a technique that partitions the observations of a data set into different groups so that
the observations within each group are quite similar (homogeneous) to each other, while
observations in different clusters are quite different from each other.
Clustering is an unsupervised problem that attempts to discover structure in a data set,
without labels to use for training.
Hard clustering without a statistical model is the primary approach discussed here.
Clustering has two best-known approaches:
1. K-means clustering: seeks to partition the observations into a pre-specified number
of clusters
a. Example: where customers spend time
2. Hierarchical clustering: we do not know in advance how many clusters we want, and it
builds a tree-like visual representation: the dendrogram
a. Example: product categorization
The choice depends on whether you think the clusters are disjoint or have a hierarchical arrangement.
12.4.1: K-Means Clustering
K-means clustering: partitions the observations into K distinct, non-overlapping clusters.
The clusters C1, ..., CK must satisfy two conditions:
1. Each observation belongs to at least one of the K clusters
2. The clusters are non-overlapping: no observation belongs to more than one cluster
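The two conditions above say that the clusters form a partition of the observations. A minimal sketch of checking them, with made-up function and variable names (clusters represented as sets of observation indices):

```python
# Hypothetical helper: verify that cluster assignments form a valid partition.
def is_valid_partition(clusters, n_obs):
    """clusters: list of sets of observation indices; n_obs: total observations."""
    all_members = [i for c in clusters for i in c]
    covers_all = set(all_members) == set(range(n_obs))      # condition 1: every obs in some cluster
    no_overlap = len(all_members) == len(set(all_members))  # condition 2: no obs in two clusters
    return covers_all and no_overlap

print(is_valid_partition([{0, 1}, {2, 3}], 4))  # True: a valid partition
print(is_valid_partition([{0, 1}, {1, 2}], 4))  # False: obs 1 in two clusters, obs 3 unassigned
```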
A good K-means clustering is one where the within-cluster variation W(Ck) for each cluster Ck is as
small as possible.
- Goal: minimizing the sum of the within-cluster variations W(C1) + ... + W(CK), which is
equivalent to maximizing the inter-cluster variation
- The measure of within-cluster variation: the sum of squared Euclidean distances between every
pair of observations in the cluster
o divided by the total number of observations in the kth cluster
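A minimal sketch of computing this measure for one cluster, as described above (pairwise squared Euclidean distances divided by cluster size); the data values are made up:

```python
# Hypothetical helper: within-cluster variation W(C_k) for one cluster,
# using the sum of squared Euclidean distances over all ordered pairs.
def within_cluster_variation(points):
    """points: list of p-dimensional observations belonging to one cluster."""
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(n):  # ordered pairs, so each pair is counted twice
            total += sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
    return total / n  # divide by the cluster size

cluster = [(1.0, 2.0), (1.5, 2.5), (0.5, 1.5)]
print(within_cluster_variation(cluster))  # → 2.0
```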
Input: the data matrix of observations.
Process: to find a local optimum out of the K^n possible ways of assigning n observations to K clusters:
1. Specify the desired number of clusters K
2. Randomly assign a number from 1 to K to each of the observations as an initial cluster
assignment
3. Iterate until the cluster assignments stop changing
a. For each of the K clusters, compute the cluster centroid as the mean of the
observations assigned to that cluster.
i. kth cluster centroid: the vector of the p feature means for the observations
in the kth cluster
b. Assign each observation to the cluster whose centroid is closest, in terms of
Euclidean distance.
4. Run the algorithm multiple times from different random initial cluster assignments and select
the best solution as the local optimum
a. The performance of the result obtained will depend on the initial random cluster
assignment.
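The steps above can be sketched as follows; this is a minimal illustration with made-up data and function names (the empty-cluster reseeding, iteration cap, and scoring by distance-to-centroid are implementation assumptions, not part of the algorithm as stated):

```python
import random

def kmeans(X, K, n_init=10, seed=0):
    """X: list of p-dimensional tuples; K: desired number of clusters (step 1)."""
    rng = random.Random(seed)
    best_labels, best_score = None, float("inf")
    for _ in range(n_init):  # step 4: multiple random restarts
        # Step 2: randomly assign each observation an initial cluster 0..K-1
        labels = [rng.randrange(K) for _ in X]
        for _ in range(100):  # step 3: iterate (capped for safety)
            # Step 3a: centroid = vector of feature means of each cluster's members
            centroids = []
            for k in range(K):
                members = [x for x, l in zip(X, labels) if l == k]
                if not members:  # assumption: reseed an empty cluster randomly
                    members = [rng.choice(X)]
                centroids.append([sum(col) / len(members) for col in zip(*members)])
            # Step 3b: reassign each observation to the closest centroid
            new_labels = [
                min(range(K),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(x, centroids[k])))
                for x in X
            ]
            if new_labels == labels:  # assignments stopped changing
                break
            labels = new_labels
        # Step 4: keep the run with the smallest total within-cluster variation
        score = sum(sum((a - b) ** 2 for a, b in zip(x, centroids[l]))
                    for x, l in zip(X, labels))
        if score < best_score:
            best_labels, best_score = labels, score
    return best_labels, best_score

# Two well-separated groups; K-means should recover them.
X = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels, score = kmeans(X, K=2)
print(labels)
```

Note that scoring each run by the sum of squared distances to the centroids is equivalent (up to a constant factor) to the pairwise within-cluster variation defined earlier.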