Bagging with qualitative responses
Majority vote: take the class most commonly occurring among the B predictions
Although the collection of bagged trees is much more difficult to interpret than a single tree, one can
obtain an overall summary of the importance of each predictor using the RSS (for bagging regression
trees) or the Gini index (for bagging classification trees).
Random Forests
As in bagging, we build trees on bootstrapped training samples, but random forests improve over bagged trees by decorrelating the trees. Each time a split is considered, a random sample of m predictors is chosen as split candidates from the full set of p predictors.
(typically m ≈ sqrt(p))
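A minimal sketch in Python with scikit-learn (the toy data and parameter values are placeholders, not from the notes): RandomForestClassifier grows each tree on a bootstrap sample and, with max_features="sqrt", considers m = sqrt(p) candidate predictors per split; predict takes the majority vote and feature_importances_ gives the Gini-based importance summary mentioned under bagging.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # toy stand-in for a real training set (p = 10 predictors)
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # max_features="sqrt" -> a random sample of m = sqrt(p) predictors per split
    rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
    rf.fit(X, y)

    print(rf.predict(X[:5]))        # majority vote over the 500 trees
    print(rf.feature_importances_)  # Gini-based variable importance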
Boosting (check video)
Boosting works sequentially: each tree is grown using information from previously grown trees.
Boosting does not use bootstrap sampling; instead, each tree is fit on a modified version of the original data set.
The fitted model f̂ is slowly improved in areas where it does not perform well.
3 tuning parameters (see the sketch after this list):
- The number of trees B; boosting can overfit if B is too large.
- The shrinkage parameter lambda, which controls the rate at which boosting learns.
- The number d of splits in each tree, which controls the complexity of the boosted ensemble.
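A hedged sketch in Python (scikit-learn's GradientBoostingRegressor is one implementation of this idea; the data and parameter values below are illustrative): n_estimators plays the role of B, learning_rate of lambda, and max_depth of d.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    # toy stand-in for a real regression data set
    X, y = make_regression(n_samples=500, n_features=10, noise=1.0, random_state=0)

    # B = n_estimators, lambda = learning_rate, d = max_depth
    boost = GradientBoostingRegressor(n_estimators=1000, learning_rate=0.01,
                                      max_depth=2, random_state=0)
    boost.fit(X, y)
    print(boost.predict(X[:5]))  # each prediction is a shrunken sum of tree outputs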
Chapter 9
Support Vector Machines
A hyperplane in a p-dimensional space is a flat affine subspace of dimension p - 1.
We can use the sign of f(x) to decide which class an observation belongs to, and the magnitude of f(x) to see how far it lies from the hyperplane.
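In ISLR's notation, the hyperplane is the set of points where

    f(x) = \beta_0 + \beta_1 x_1 + \dots + \beta_p x_p = 0,

and a test observation x^* is assigned to one class if f(x^*) > 0 and to the other if f(x^*) < 0; a large |f(x^*)| means x^* lies far from the hyperplane.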
Maximal margin classifier
(its hyperplane is also called the optimal separating hyperplane)
The smallest distance from a training observation to the hyperplane is called the margin. The maximal margin classifier takes the separating hyperplane that maximizes this margin.
Can lead to overfitting when p is large
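Formally, the maximal margin hyperplane is the solution to

    \max_{\beta_0, \beta_1, \dots, \beta_p, M} M
    \text{subject to } \sum_{j=1}^{p} \beta_j^2 = 1,
    y_i (\beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip}) \ge M \quad \text{for all } i = 1, \dots, n,

where the normalization \sum_j \beta_j^2 = 1 makes the left-hand side of the last constraint equal to the distance from observation i to the hyperplane.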
Support vectors
The picture shows 3 support vectors.
If these moved slightly, the maximal margin hyperplane would change as well.
The dotted lines mark the margin.
Non-separable case
Using so-called soft margins, we can develop a hyperplane that almost separates the classes; this generalization is called the support vector classifier.
Support vector classifier
• Greater robustness to individual observations
• Better classification of most of the training observations.
We allow some observations to be on the wrong side of the margin or even of the hyperplane; slack variables allow individual observations to violate the margin.
The tuning parameter C is a budget for the total amount by which the margin can be violated.
C is generally chosen via cross-validation.
Only observations that lie directly on the margin or on the wrong side of the margin are support vectors. Only these observations affect the support vector classifier.
If C is small: fewer support vectors, low bias, high variance.
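The support vector classifier replaces the hard constraints above with slack variables \epsilon_i and the budget C:

    \max_{\beta_0, \dots, \beta_p, \epsilon_1, \dots, \epsilon_n, M} M
    \text{subject to } \sum_{j=1}^{p} \beta_j^2 = 1,
    y_i (\beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip}) \ge M (1 - \epsilon_i),
    \epsilon_i \ge 0, \quad \sum_{i=1}^{n} \epsilon_i \le C.

Here \epsilon_i > 0 means observation i is on the wrong side of the margin, and \epsilon_i > 1 means it is on the wrong side of the hyperplane.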
Support Vector Machines
If the decision boundary is non-linear, we can enlarge the feature space using quadratic, cubic, or higher-order polynomial functions of the predictors.
The support vector machine is an extension of the support vector classifier that results from enlarging the feature space in a specific way, using kernels.
A kernel is a function that quantifies the similarity of two observations.
- Linear kernel
- Polynomial kernel
- Radial kernel (very local behaviour: only nearby training observations affect the predicted class of a test point)
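The standard ISLR forms of these kernels, for observations x_i and x_{i'}:

    \text{linear: } K(x_i, x_{i'}) = \sum_{j=1}^{p} x_{ij} x_{i'j}
    \text{polynomial (degree } d\text{): } K(x_i, x_{i'}) = \left(1 + \sum_{j=1}^{p} x_{ij} x_{i'j}\right)^d
    \text{radial: } K(x_i, x_{i'}) = \exp\left(-\gamma \sum_{j=1}^{p} (x_{ij} - x_{i'j})^2\right)

The radial kernel decays quickly with distance, which is why its behaviour is very local.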
SVMs with more than 2 classes
One-versus-one classification: construct all K(K - 1)/2 classifiers, each comparing a pair of classes, and assign a test observation to the class it is most often assigned to.
One-versus-all classification: fit K classifiers, each comparing one of the K classes to the remaining K - 1 classes.
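A minimal sketch in Python (the iris data is just a convenient 3-class example): scikit-learn's SVC fits the K(K - 1)/2 pairwise one-versus-one classifiers internally and predicts by voting. Note that its C parameter penalizes margin violations, so it plays the inverse role of the budget C above.

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)  # K = 3 classes

    # radial kernel; one-versus-one voting is applied internally for K > 2
    svm = SVC(kernel="rbf", gamma=1.0, C=1.0)
    svm.fit(X, y)
    print(svm.predict(X[:5]))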