1) The data analytics process
Focus on analytics from a business perspective, using:
1. Data
2. An algorithm
3. A purpose
Which are:
- Valid
- Useful
- Unexpected
- Understandable
Non-tabular data: featurization, deep learning
Supervised algorithms: need a target; classification (categorical target) /
regression (continuous target)
ML is all about generalizable correlations
Unsupervised algorithms: extract patterns from data as is:
- Clustering
- Association/sequence/ .. rule mining
- Anomaly detection
- Dimensionality reduction
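The supervised/unsupervised split above can be made concrete with a minimal NumPy sketch; the data, the fitted line, and the anomaly threshold are all made-up illustrations, not from the course:

```python
import numpy as np

# Supervised: a target y is available; learn a generalizable correlation X -> y.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * X + 1.0                        # target known during training
slope, intercept = np.polyfit(X, y, 1)   # regression (continuous target)
pred = slope * 6.0 + intercept           # apply to an unseen instance

# Unsupervised: no target; extract a pattern from the data as-is.
data = np.array([10.0, 11.0, 9.0, 10.5, 50.0])
z = (data - data.mean()) / data.std()    # z-scores
anomalies = data[np.abs(z) > 1.5]        # simple anomaly detection
```

The same data could instead feed clustering or dimensionality reduction; the point is only that no target variable is involved on the unsupervised side.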
Purpose:
- Exploratory
- Descriptive
- Explanatory
- Predictive
- Prescriptive
MLOps: a set of techniques and practices used to design, build, and deploy machine
learning models in an efficient, optimized, and organized manner.
Key phases of data analytics process
2) Data preprocessing
Goal: obtaining a tabular dataset
Data selection:
- Flattening
- Target variable definition
- Hold-out set
o Avoid data leakage: do not include variables that are (almost) perfectly
correlated with the target (e.g. |correlation| > 0.8)
Feature leakage
Instance leakage
- ~ overlaps with data exploration
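The hold-out set and the leakage check above can be sketched as follows; the generated data, the feature that leaks the target, and the use of the 0.8 correlation threshold from the notes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 3))
y = 0.3 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * rng.normal(size=n)  # target variable
X = np.column_stack([X, y + 0.01 * rng.normal(size=n)])  # feature 3 leaks the target

# Naive leakage check: flag features (almost) perfectly correlated with the target.
corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
leaky = np.where(corrs > 0.8)[0]          # 0.8 threshold from the notes

# Hold-out set: split once, and never touch the test part during training.
idx = rng.permutation(n)
train_idx, test_idx = idx[:80], idx[80:]
```

Here only the artificially leaked feature trips the threshold; the genuinely predictive features stay well below it.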
Data cleaning:
- Basic consistency: detect errors/duplications, data transformation, remove
“future variables”
- Dealing with missing values (delete, replace or keep)
- Outliers: valid vs. invalid
o Detection vs. treatment
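A minimal sketch of the cleaning steps above, on made-up values: the delete/replace options for missing values, and outlier detection (here a simple IQR rule, one common choice) kept separate from the treatment decision:

```python
import numpy as np

values = np.array([4.0, 5.0, np.nan, 6.0, 5.5, 40.0])   # one missing, one outlier

# Missing values: delete, replace (impute), or keep as an explicit category.
dropped = values[~np.isnan(values)]                      # delete
imputed = np.where(np.isnan(values), np.nanmedian(values), values)  # replace

# Outlier detection (IQR rule); treatment (cap, drop, or keep if valid)
# is a separate decision.
q1, q3 = np.percentile(dropped, [25, 75])
iqr = q3 - q1
is_outlier = (dropped < q1 - 1.5 * iqr) | (dropped > q3 + 1.5 * iqr)
```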
Data transformation
- Standardization, normalization (feature scaling), categorization (binning,
grouping)
- Dummy variables and encoding (nominal → continuous)
o WoE_cat = ln(p_class1,cat / p_class2,cat)
Monotonic relationship with the target variable; well suited for
logistic regression
Choose a binning that maximizes IV (Information Value); apply Laplace
smoothing to avoid empty categories
- Feature engineering: deltas, trends, windows (continuous stream of
measurements) (FRM)
- Feature reduction:
o PCA (dimensionality reduction): transforms correlated variables into a
set of linearly uncorrelated components
Maximizes variance and preserves large pairwise distances
Interpretability may become more difficult
o t-SNE (dimensionality reduction): preserves local similarities
Non-linear reduction based on manifold learning
Best for high-dimensional data
Not the same as clustering
- Feature selection: filtering, wrapping
o Filtering: independent of the model; throw out weak features
(variance-threshold based, chi-squared based (goodness-of-fit),
information gain)
o Wrapping: evaluate subsets of features, using a model learned for
each subset
Exhaustive search
Greedy strategies: forward selection, backward elimination,
step-wise
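The WoE/IV computation above can be sketched in plain Python; the category counts and the smoothing constant are illustrative assumptions:

```python
import math

# Counts of class-1 and class-2 instances per category (hypothetical data).
counts = {"A": (40, 10), "B": (30, 30), "C": (5, 45)}
eps = 0.5  # Laplace smoothing: avoids log(0) / division by zero for empty cells

n1 = sum(c1 for c1, _ in counts.values())
n2 = sum(c2 for _, c2 in counts.values())
k = len(counts)

woe, iv = {}, 0.0
for cat, (c1, c2) in counts.items():
    p1 = (c1 + eps) / (n1 + k * eps)   # smoothed share of class 1 in this category
    p2 = (c2 + eps) / (n2 + k * eps)
    woe[cat] = math.log(p1 / p2)       # WoE_cat = ln(p_class1,cat / p_class2,cat)
    iv += (p1 - p2) * woe[cat]         # Information Value sums over categories
```

Categories dominated by class 1 get positive WoE, balanced ones sit near zero, and class-2-heavy ones go negative, which is what gives the encoding its monotonic relationship with the target.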
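PCA as described above (correlated variables → linearly uncorrelated components that maximize variance) can be sketched with a plain SVD; the generated two-feature dataset is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two strongly correlated features; PCA rotates them into uncorrelated components.
x = rng.normal(size=200)
X = np.column_stack([x, 0.9 * x + 0.1 * rng.normal(size=200)])

Xc = X - X.mean(axis=0)                  # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ Vt.T                   # data projected on the principal axes
explained = S**2 / np.sum(S**2)          # share of variance per component

cov = np.cov(components.T)               # off-diagonal ~ 0: linearly uncorrelated
```

Because the two inputs are nearly collinear, the first component captures almost all the variance, which is exactly why PCA can shrink the feature set, and why the rotated components are harder to interpret than the originals.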
3) Exploratory data analysis
Data profiling: what? Why?
Human perception:
- Pre-attentive vision: limited set of
properties that are detected:
o Very rapidly
o Accurately
o With little effort
o Before focused attention
- Gestalt principles
o Past experience (isomorphism)
Human limitations:
- Visual accuracy and perceptual effectiveness
o Exploit advanced perceptual abilities; don't make people think too much
- Color blindness
- Short-term memory (humans have limited capacity)
- Attention span
What makes good visualization?
- Trustworthy
- Actionable (keep graphs simple)
- Elegant (data-ink ratio = data-ink / total ink used to print the graphic
= 1 − proportion of ink that can be erased)