Summary Machine Learning
Summary of all the lectures of Machine Learning. It contains all the relevant material needed for the final exam.


Lecture 1 – Introduction to Machine Learning

What is machine learning (ML) about?
 ML is about automation of problem solving.
 It is the study of computer algorithms that improve automatically through experience.
 Involves becoming better at a task T, based on some experience E, with respect to some performance measure P.
 Examples: spam detection, movie recommendation, speech recognition, credit risk analysis, autonomous driving and medical diagnosis.

What does it involve?
 ML may involve a notion of generalization: is it safe to assume that current observations can be generalized to future observations?
 ML should be generalizable: it should perform well on unseen data that is representative of the real-world domain.
 Critical components include labeled data, an objective, an optimization algorithm (the model), features/representations (columns), and assumptions.

Different types of learning
 Supervised learning: annotated/labelled dataset / ground truth
o Classification: discrete response variable - e.g. spam detection
o Regression: continuous response variable - e.g. predicting the price of a house
 Unsupervised learning: unlabeled dataset
o Clustering, association mining - customer segmentation, recommendation
 Semi-supervised learning: only a portion of the data is labeled - text classification
 Reinforcement learning: based on rewarding desired behaviors and/or punishing undesired ones (involves a feedback loop) - self-driving car

Example - SPAM versus non-Spam
 Binary classification problem




Learning process
 Find examples of SPAM and non-SPAM
 Come up with a learning algorithm
 A learning algorithm infers rules from examples: If (A or B or C) and not D, then SPAM
 These rules can then be applied to new data (emails)
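
To make this concrete, here is a minimal Python sketch of applying an inferred rule of the form "If (A or B or C) and not D, then SPAM" to new emails. The features A-D and the keyword checks are hypothetical, invented only for illustration.

```python
# Hypothetical rule-based spam filter, illustrating "If (A or B or C) and not D, then SPAM".
# The boolean features A-D and the keyword checks below are made up for this sketch.

def extract_features(email: str) -> dict:
    """Turn a raw email into the boolean features used by the inferred rule."""
    text = email.lower()
    return {
        "A": "free money" in text,    # suspicious phrase
        "B": "winner" in text,        # suspicious phrase
        "C": text.count("!") > 3,     # excessive punctuation
        "D": "unsubscribe" in text,   # typical of legitimate newsletters
    }

def is_spam(email: str) -> bool:
    f = extract_features(email)
    return (f["A"] or f["B"] or f["C"]) and not f["D"]

# Apply the inferred rule to new, unseen emails.
print(is_spam("You are a WINNER! Claim your free money now!!!!"))    # True
print(is_spam("Monthly newsletter - click unsubscribe to opt out"))  # False
```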

Learning algorithms

 See several different learning algorithms
 Implement 2-3 simple ones from scratch in Python
 Learn about Python libraries for ML (scikit-learn)
 Learn how to apply them to real-world problems (a minimal example follows below)
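
As a minimal sketch of the scikit-learn workflow (the toy feature matrix and labels below are invented purely for illustration):

```python
# Fit a classifier on labelled examples, then predict on held-out data.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy data: each row is [number of suspicious words, number of exclamation marks].
X = [[5, 4], [3, 6], [0, 0], [1, 1], [4, 5], [0, 2], [6, 7], [1, 0]]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = spam, 0 = not spam

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)            # improve from experience E (the labelled data)
y_pred = model.predict(X_test)         # apply the learned model to unseen emails

print(accuracy_score(y_test, y_pred))  # performance measure P
```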

Machine Learning – Examples
 Recognize handwritten numbers and letters
 Recognize faces in photos
 Determine whether text expresses positive, negative or no opinion
 Guess a person’s age based on a sample of writing
 Flag suspicious credit-card transactions
 Recommend books and movies to users based on their own and others’ purchase history
 Recognize and label mentions of person or organization names in text

Types of learning problems: Regression
 Response: a (real) number
 Predict person’s age, predict price of a stock, predict student’s score on exam
 In regression, the response variable is predicted using a set of predictors that are believed to
have an influence on the response variable.

Types of learning problems: Binary classification
 Response: Yes/No answer
 Detect SPAM
 Predict polarity of product review: positive vs negative

Types of learning problems: Multiclass classification
 Response: one of a finite set of options
 Classify newspaper article as
o politics, sports, science, technology, health, finance
 Detect species based on photo
o Passer domesticus, Calidris alba, Streptopelia decaocto, Corvus corax, …

Types of learning problems: Multilabel classification
 Response: a finite set of Yes/No answers
 Assign songs to one or more genres
o rock, pop, metal
o hip-hop, rap
o jazz, blues
o rock, punk

Types of learning problems: Autonomous behavior
 Input: measurements from sensors – camera, microphone, radar, accelerometer, ...
 Response: instructions for actuators – steering, accelerator, brake, ...

How well is the algorithm learning?
 Evaluation: Choose a baseline, choose a metric, compare!

 Different tasks, different metrics

Predicting age – Regression
 Mean absolute error (MAE) – the average absolute difference between the true value y_n (ground truth) and the predicted value ŷ_n: MAE = (1/N) Σ_n |y_n − ŷ_n|. It fails to punish large errors in prediction, as all errors are treated equally.

 Mean squared error (MSE) – the average squared difference between the true value and the predicted value: MSE = (1/N) Σ_n (y_n − ŷ_n)². It is more sensitive to outliers, as the square amplifies the impact of large deviations (see the sketch below).
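
A small NumPy sketch of both metrics, using made-up true and predicted ages:

```python
import numpy as np

# Made-up ground-truth ages y and predictions y_hat, for illustration only.
y = np.array([23, 35, 47, 19, 62])
y_hat = np.array([25, 30, 45, 20, 80])

mae = np.mean(np.abs(y - y_hat))   # treats all errors equally
mse = np.mean((y - y_hat) ** 2)    # squaring amplifies the large error (62 vs 80)

print(f"MAE = {mae:.2f}")  # MAE = 5.60
print(f"MSE = {mse:.2f}")  # MSE = 71.60
```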



Predicting spam - Classification
 Accuracy – the fraction of correctly classified examples: (TP + TN) / (TP + TN + FP + FN).
 Drawback: accuracy does not work well on imbalanced data. If the data is imbalanced, accuracy will naturally be high even for a trivial classifier (see the sketch below).
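
A tiny made-up example of why accuracy is misleading on imbalanced data: 95 of 100 emails are not spam, and a trivial "classifier" that always predicts not-spam still scores 95% accuracy while missing every spam email.

```python
# 100 emails, only 5 of them spam (label 1). A classifier that always predicts 0
# never catches any spam, yet its accuracy looks good.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 0.95 - but recall on the spam class is 0
```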

Classification
Wrong classification
 False positive (FP) – Flagged as SPAM, but is not SPAM
 False negative (FN) – Not flagged, but is SPAM
 False positives are a bigger issue for this problem! In the medical field it is the other way around (false negatives are the bigger issue). Which of the two to minimize thus depends on the problem at hand.
Correct classification
 True positive (TP): Spam classified as spam
 True negative (TN): Not-spam classified as not-spam
Summarized in a confusion matrix (see the sketch below)
 The confusion matrix can be extended with more decision classes.
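
A short sketch that builds the confusion matrix with scikit-learn; the label vectors are invented for illustration and are reused in the precision/recall sketch further below.

```python
from sklearn.metrics import confusion_matrix

# 1 = spam, 0 = not spam; both vectors are made up for this sketch.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 1, 1, 0]

# Rows are true classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
# [[2 2]
#  [1 3]]  -> TN=2, FP=2, FN=1, TP=3
```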

Precision and Recall
 Metrics which focus on one kind of mistake. Together with the F1-score, these are better suited for imbalanced data.
 Precision (positive predictive value, PPV) – what fraction of flagged emails were real SPAM? Precision = TP / (TP + FP).
 Recall (sensitivity, hit rate, or true positive rate, TPR) – what fraction of real SPAM was flagged? Recall = TP / (TP + FN).
 Specificity (selectivity, or true negative rate, TNR) = TN / (TN + FP) (usage not common).

Fβ-score
 F1-score (F-measure): the harmonic mean of precision and recall, a kind of average: F1 = 2 · precision · recall / (precision + recall).
 The parameter β quantifies how much more we care about recall than precision: Fβ = (1 + β²) · precision · recall / (β² · precision + recall). When β > 1, recall is weighted more; when β < 1, precision is weighted more (see the sketch below).
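
A minimal scikit-learn sketch, reusing the made-up labels from the confusion-matrix example above:

```python
from sklearn.metrics import precision_score, recall_score, fbeta_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # same invented labels as above
y_pred = [1, 1, 0, 1, 0, 1, 1, 0]

precision = precision_score(y_true, y_pred)   # TP / (TP + FP) = 3/5 = 0.60
recall = recall_score(y_true, y_pred)         # TP / (TP + FN) = 3/4 = 0.75
f1 = fbeta_score(y_true, y_pred, beta=1)      # harmonic mean of the two, about 0.67
f2 = fbeta_score(y_true, y_pred, beta=2)      # weights recall more heavily, about 0.71

print(precision, recall, f1, f2)
```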

Macro-average
 Precision and recall are usually calculated per decision class. Micro and macro averaging are ways to aggregate the measures.
 Precision: true positives over predicted positives; recall: true positives over actual positives.
 Macro-averaging: compute precision and recall per class, then average them.
 Rare classes have the same impact as frequent classes (not ideal).
 Macro F1-Score is the harmonic mean of Macro-Precision and Macro-Recall.

Micro-average
 Micro averaging treats the entire data set as one aggregate result and calculates a single metric, rather than k per-class metrics that get averaged together.
 In micro averaging, we calculate one aggregate result for the entire data set for precision and recall (e.g. micro precision = Σ TP_c / Σ (TP_c + FP_c) over all classes c), and use these micro-averaged precision and recall for the micro-averaged F1.
 When each instance has exactly one label (single-label multiclass classification), micro-averaged precision, recall and F1 are all equal to each other and to accuracy (see the sketch below).
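
A small sketch contrasting the two averaging schemes on a made-up 3-class problem in which the rare class 2 is always misclassified: it drags the macro score down but barely affects the micro score, and the micro F1 equals accuracy.

```python
from sklearn.metrics import f1_score, accuracy_score

# Invented 3-class example; class 2 is rare and never predicted correctly.
# (scikit-learn may warn that precision for class 2 is ill-defined and set it to 0.)
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]

print(f1_score(y_true, y_pred, average="macro"))  # per-class F1, then averaged: ~0.59
print(f1_score(y_true, y_pred, average="micro"))  # aggregate TP/FP/FN counts: 0.80
print(accuracy_score(y_true, y_pred))             # 0.80, same as the micro score
```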

How to find f̂(x): A solution workflow
 Best outcome we can hope for: f̂(x) = f(x) for all x. Ideally, we would like f̂(x) such that a loss between f(x) and f̂(x) is minimized, i.e., L(f̂(x), f(x)) is small.
 The cost (average loss, plus possibly a regularization term) in the case of regression can be MSE or MAE, computed over all values of x.
 Problem 1: We do not have all values of x and f(x) (our sample might not represent the whole population).
 Problem 2: We do not know what f(x) looks like (its distribution).
 Compute the loss on the data we have (empirical risk minimization); for MAE: (1/N) Σ_n |f(x_n) − f̂(x_n)|.

How to find f̂(x)
 If f̂(x) = θx + c, we assume a linear relationship.
 For a more complex relationship, a polynomial function can also be used.
 Choose a power p: f̂(x) = c + θ_1 x + θ_2 x² + ... + θ_p x^p. Higher p implies a higher degree of freedom/flexibility (and a closer fit to the training data -> risk of overfitting; see the sketch below).
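
A rough NumPy sketch of the effect of p (the data-generating function sin(x) and all numbers are made up): as the degree grows, the training error keeps shrinking, while the error on held-out points typically starts rising once the polynomial begins fitting the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up ground truth f(x) = sin(x), observed with noise on a small training sample.
x_train = np.linspace(0, 5, 15)
y_train = np.sin(x_train) + rng.normal(0, 0.2, size=x_train.shape)
x_test = np.linspace(0.1, 4.9, 50)
y_test = np.sin(x_test)

for p in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, deg=p)  # fit c, theta_1, ..., theta_p
    f_hat = np.poly1d(coeffs)
    train_mse = np.mean((y_train - f_hat(x_train)) ** 2)
    test_mse = np.mean((y_test - f_hat(x_test)) ** 2)
    print(f"p={p}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
# Low p underfits (both errors high); large p drives the training error toward zero
# but tends to increase the test error (overfitting).
```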



Changing p: overfitting / underfitting