Module 1: Introductory Concepts
Missing values:
- Sometimes, we have instances that have missing values for some features.
- It is of paramount importance to deal with this situation before building any machine
learning or data mining model.
- Missing values might result from fields that are not always applicable, incomplete
measurements, or lost values.
Imputation strategies for missing values
- The simplest strategy would be to remove the feature containing missing values. This
strategy is recommended when the majority of the instances have missing values for
that feature.
o However: There are situations in which we have only a few features, or the
feature we want to remove is deemed relevant.
- If we have scattered missing values and few features, we might want to remove the
instances having missing values.
o However: There are situations in which we have a limited number of
instances.
- The third strategy is the most popular. It consists of replacing the missing values for a
given feature with a representative value such as the mean, the median or the mode
of that feature.
o However: We need to be aware that we are introducing noise.
- Fancier strategies include estimating the missing values with a machine learning
model trained on the non-missing information.
o Remark: More about missing values will be covered in the Statistics course.
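The replacement strategy above can be sketched in plain Python (the `impute` helper and the height data are invented for illustration; `None` marks a missing value):

```python
from statistics import mean, median

def impute(values, strategy="mean"):
    # Collect the observed (non-missing) values of the feature
    observed = [v for v in values if v is not None]
    # Representative value: mean or median of the observed values
    fill = mean(observed) if strategy == "mean" else median(observed)
    # Replace every missing entry with that representative value
    return [fill if v is None else v for v in values]

heights = [170, None, 180, 175, None]
print(impute(heights))  # missing entries become 175, the mean of 170, 180, 175
```

Note that every imputed instance receives the same value, which is exactly the noise the remark above warns about.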
Normalization
- Rescales the values of a feature to a fixed range, typically between 0 and 1.
Standardization
- Rescales the values of a feature to zero mean and unit standard deviation; unlike
normalization, the result has no fixed boundaries.
Normalization vs Standardization
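Both rescalings can be sketched with the standard library (function names are mine; in practice one would typically use scikit-learn's MinMaxScaler / StandardScaler):

```python
from statistics import mean, pstdev

def min_max_normalize(values):
    # Normalization: map values linearly onto the [0, 1] range
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def standardize(values):
    # Standardization: zero mean, unit standard deviation (z-scores)
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

data = [2, 4, 6, 8]
print(min_max_normalize(data))  # bounded: min -> 0.0, max -> 1.0
print(standardize(data))        # unbounded: zero mean, no fixed range
```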
Correlation (question part of exam)
- Correlation measures the (linear) association between two numerical features.
χ² association measure
- The χ² statistic measures the association between two categorical features by
comparing the observed counts with the counts expected under independence.
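A minimal hand-rolled sketch of the χ² computation (the contingency counts are invented; in practice scipy.stats.chi2_contingency does this and also returns a p-value):

```python
def chi_square(table):
    # table: contingency table as a list of rows of observed counts
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Count expected if the two features were independent
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Invented counts: eye color (rows) vs. some categorical outcome (columns)
print(chi_square([[10, 20], [20, 10]]))  # the larger the value, the stronger the association
```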
Symbolic feature = categorical feature, e.g. eye color
Encoding categorical features
Some machine learning and data mining algorithms or platforms cannot operate with
categorical features → therefore we need to encode these features as numerical quantities.
1 Label encoding
- Assigning integer numbers to each category. It only makes sense if there is an ordinal
relationship among the categories.
o For example: Weekdays, months, ratings, etc.
2 One-hot encoding
- Creating one binary feature per category: the feature corresponding to the instance's
category is set to 1 and all the others to 0. Unlike label encoding, it does not
assume an ordinal relationship among the categories.
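Both encodings can be sketched in plain Python (helper names and the eye-color data are mine; in practice one would use e.g. scikit-learn's OrdinalEncoder / OneHotEncoder):

```python
def label_encode(values):
    # One integer per category; here categories are sorted alphabetically for
    # determinism, but true label encoding should follow the ordinal order
    # of the categories.
    mapping = {c: i for i, c in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values]

def one_hot_encode(values):
    # One binary column per category; exactly one 1 per instance
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

colors = ["blue", "brown", "blue", "green"]
print(label_encode(colors))    # [0, 1, 0, 2]
print(one_hot_encode(colors))  # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```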
Class imbalance
- When one class is much more frequent than the others, a classifier that always
predicts the majority class (the one shown in blue in the lecture plot) achieves high
accuracy simply because that class occurs more often, so accuracy alone is misleading.
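The accuracy pitfall can be sketched as follows (the 90/10 class split is invented for illustration):

```python
def accuracy(y_true, y_pred):
    # Fraction of instances whose predicted label matches the true label
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Invented imbalanced labels: 90 instances of class "A", 10 of class "B"
y_true = ["A"] * 90 + ["B"] * 10

# A trivial "classifier" that ignores the input and always predicts
# the majority class
y_pred = ["A"] * 100

print(accuracy(y_true, y_pred))  # 0.9: looks accurate, yet it never detects "B"
```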