Summary Machine Learning

Lecture 1 – Introduction to Machine Learning

What is machine learning (ML) about?
• ML is about automating problem solving.
• It is the study of computer algorithms that improve automatically through experience.
• It involves becoming better at a task T, based on some experience E, with respect to some performance measure P.
• Examples: spam detection, movie recommendation, speech recognition, credit risk analysis, autonomous driving and medical diagnosis.

What does it involve?
• ML involves a notion of generalization: is it safe to assume that current observations can be generalized to future observations?
• An ML model should generalize – it should perform well on unseen data that is representative of the real-world domain.
• Labeled data, an objective, an optimization algorithm (model), features/representations (columns), and assumptions are some of the critical components.

Different types of learning
• Supervised learning: annotated/labelled dataset (ground truth)
o Classification: discrete target variable – e.g. spam detection
o Regression: continuous target variable – e.g. predicting the price of a house
• Unsupervised learning: unlabeled dataset
o Clustering, association mining – e.g. customer segmentation, recommendation
• Semi-supervised learning: only a portion of the data is labeled – e.g. text classification
• Reinforcement learning: based on rewarding desired behaviors and/or punishing undesired ones (involves a feedback loop) – e.g. a self-driving car

Example – SPAM versus non-SPAM
• Binary classification problem




Learning process
• Find examples of SPAM and non-SPAM
• Come up with a learning algorithm
• A learning algorithm infers rules from examples, e.g.: if (A or B or C) and not D, then SPAM
• These rules can then be applied to new data (emails), as in the sketch below
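
A minimal sketch of how such an inferred rule could be applied to new emails in Python; the boolean features A-D and the keywords behind them are hypothetical choices, used only for illustration:

# Toy sketch: applying an inferred rule "if (A or B or C) and not D, then SPAM".
# The features A-D below are hypothetical keyword checks.

def extract_features(email: str) -> dict:
    text = email.lower()
    return {
        "A": "free" in text,        # A: mentions "free"
        "B": "winner" in text,      # B: mentions "winner"
        "C": "click here" in text,  # C: contains "click here"
        "D": "meeting" in text,     # D: looks work-related
    }

def is_spam(email: str) -> bool:
    f = extract_features(email)
    return (f["A"] or f["B"] or f["C"]) and not f["D"]

print(is_spam("You are a WINNER! Click here for your free prize"))  # True
print(is_spam("Agenda for tomorrow's meeting"))                     # False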

Learning algorithms

• See several different learning algorithms
• Implement 2-3 simple ones from scratch in Python
• Learn about Python libraries for ML (scikit-learn)
• Learn how to apply them to real-world problems

Machine Learning – Examples
• Recognize handwritten numbers and letters
• Recognize faces in photos
• Determine whether a text expresses a positive, negative or no opinion
• Guess a person's age based on a sample of writing
• Flag suspicious credit-card transactions
• Recommend books and movies to users based on their own and others' purchase history
• Recognize and label mentions of people's or organizations' names in text

Types of learning problems: Regression
• Response: a (real) number
• Predict a person's age, the price of a stock, or a student's score on an exam
• In regression, the response variable is predicted using a set of predictors that are believed to have an influence on the response variable.

Types of learning problems: Binary classification
• Response: a Yes/No answer
• Detect SPAM
• Predict the polarity of a product review: positive vs. negative

Types of learning problems: Multiclass classification
• Response: one of a finite set of options
• Classify a newspaper article as
o politics, sports, science, technology, health, finance
• Detect a species based on a photo
o Passer domesticus, Calidris alba, Streptopelia decaocto, Corvus corax, …

Types of learning problems: Multilabel classification
• Response: a finite set of Yes/No answers
• Assign songs to one or more genres
o rock, pop, metal
o hip-hop, rap
o jazz, blues
o rock, punk

Types of learning problems: Autonomous behavior
• Input: measurements from sensors – camera, microphone, radar, accelerometer, …
• Response: instructions for actuators – steering, accelerator, brake, …

How well is the algorithm learning?
• Evaluation: choose a baseline, choose a metric, compare!
• Different tasks call for different metrics.

Predicting age – Regression
• Mean absolute error (MAE) – the average absolute difference between the true value y_n (ground truth) and the predicted value ŷ_n: MAE = (1/N) Σ_n |y_n − ŷ_n|. It fails to punish large errors in prediction, as all errors are treated equally.
• Mean squared error (MSE) – the average squared difference between the true value and the predicted value: MSE = (1/N) Σ_n (y_n − ŷ_n)². It is more sensitive to outliers, as the square amplifies the impact of large deviations.
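
A minimal sketch of both metrics in Python, on made-up age predictions, using scikit-learn (mentioned earlier in the notes) to cross-check the hand-computed values:

# Toy sketch: MAE and MSE for a regression task (e.g. predicting age).
# The true and predicted values below are made up for illustration.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([23, 45, 31, 60])  # ground-truth values y_n
y_pred = np.array([25, 40, 30, 70])  # predicted values yhat_n

mae = np.mean(np.abs(y_true - y_pred))  # (1/N) * sum of |y_n - yhat_n|
mse = np.mean((y_true - y_pred) ** 2)   # (1/N) * sum of (y_n - yhat_n)^2

# scikit-learn gives the same results
assert mae == mean_absolute_error(y_true, y_pred)
assert mse == mean_squared_error(y_true, y_pred)
print(mae, mse)  # the single large error (60 vs 70) dominates the MSE, not the MAE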



Predicting spam – Classification
• Accuracy – the fraction of correctly classified examples: accuracy = (TP + TN) / (TP + TN + FP + FN).
• Drawback: accuracy does not work well on imbalanced data. If the data is imbalanced, the accuracy will naturally be high, even for a trivial classifier that always predicts the majority class.

Classification
Wrong classification
• False positive (FP) – flagged as SPAM, but is not SPAM
• False negative (FN) – not flagged, but is SPAM
• False positives are the bigger issue for this problem! In the medical field it is the other way around (false negatives are the bigger issue). Which of the two to minimize thus depends on the problem at hand.
Correct classification
• True positive (TP): SPAM classified as SPAM
• True negative (TN): not-SPAM classified as not-SPAM
These outcomes are summarized in a confusion matrix, which can be extended with more decision classes.
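
A minimal sketch of accuracy and the confusion matrix counts with scikit-learn, on made-up labels (1 = SPAM, 0 = not SPAM):

# Toy sketch: accuracy and confusion matrix for binary SPAM classification.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 0, 1]

print(accuracy_score(y_true, y_pred))  # fraction of correctly classified emails

# For binary labels the matrix is [[TN, FP], [FN, TP]] (rows = true, columns = predicted)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, tn, fp, fn)  # TP=3, TN=3, FP=1, FN=1 here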

Precision and Recall
• Metrics which focus on one kind of mistake. These are better for imbalanced data (together with the F1-score).
• Precision (positive predictive value, PPV) – what fraction of flagged emails were real SPAM? Precision = TP / (TP + FP).
• Recall (sensitivity, hit rate, or true positive rate, TPR) – what fraction of real SPAM was flagged? Recall = TP / (TP + FN).
• Specificity (selectivity, or true negative rate, TNR) = TN / (TN + FP) (usage not common).

Fβ-score
• F1-score (F-measure): the harmonic mean of precision and recall, a kind of average: F1 = 2 · (precision · recall) / (precision + recall).
• More generally, Fβ = (1 + β²) · (precision · recall) / (β² · precision + recall). The parameter β quantifies how much more we care about recall than precision: when β is greater than 1, recall is weighted more; when it is smaller than 1, precision is weighted more.
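
A minimal sketch of precision, recall and the Fβ-score with scikit-learn, reusing the toy SPAM labels from the accuracy example above:

# Toy sketch: precision, recall and F-scores on the same made-up SPAM labels.
from sklearn.metrics import precision_score, recall_score, f1_score, fbeta_score

y_true = [1, 0, 1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 0, 1]

p = precision_score(y_true, y_pred)  # TP / (TP + FP) = 3 / 4
r = recall_score(y_true, y_pred)     # TP / (TP + FN) = 3 / 4
print(p, r)

print(f1_score(y_true, y_pred))               # harmonic mean of precision and recall
print(fbeta_score(y_true, y_pred, beta=2))    # beta > 1: recall weighted more
print(fbeta_score(y_true, y_pred, beta=0.5))  # beta < 1: precision weighted more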

Macro-average
• Precision and recall are usually calculated per decision class. Micro- and macro-averaging are ways to aggregate these measures.
• Per class, precision is true positives over predicted positives, and recall is true positives over actual positives.
• Macro-averaging: compute precision and recall per class, then take the unweighted average over the classes.
• Rare classes have the same impact as frequent classes (not always ideal).
• The macro F1-score is the harmonic mean of the macro-precision and the macro-recall.

Micro-average
• Micro-averaging treats the entire data set as one aggregate result and calculates 1 metric, rather than k metrics that get averaged together.
• In micro-averaging, we calculate one aggregate result for the entire data set for precision and recall, and use these micro-averaged precision and recall for the micro-averaged F1.
• When each example has exactly one label, the micro-averaged precision, recall and F1-score are all equal to each other and to the accuracy.
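
A minimal sketch contrasting macro- and micro-averaging on a made-up three-class problem, using scikit-learn's average parameter:

# Toy sketch: macro vs. micro averaging on a three-class problem (made-up labels).
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

y_true = ["cat", "cat", "dog", "dog", "bird"]
y_pred = ["cat", "dog", "dog", "bird", "bird"]

# Macro: compute the metric per class, then take the unweighted mean over classes
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))

# Micro: pool all decisions into one aggregate count of TP / FP / FN.
# With exactly one label per example, micro precision = recall = F1 = accuracy.
print(f1_score(y_true, y_pred, average="micro"))
print(accuracy_score(y_true, y_pred))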

How to find f̂(x): A solution workflow
• The best outcome we can hope for: f̂(x) = f(x) for all x. Ideally, we would like an f̂(x) such that a loss between f(x) and f̂(x) is minimized, i.e., L(f̂(x), f(x)) is small.
• The cost (the average loss, plus possibly a regularization term) in the case of regression can be the MSE or the MAE, computed over all values of x.
• Problem 1: we do not have all values of x and f(x) (our data might not represent the whole population).
• Problem 2: we do not know what f(x) looks like (its distribution).
• Solution: compute the loss on the data we have (empirical risk minimization), e.g. for the MAE: (1/N) Σ_n |f(x_n) − f̂(x_n)|, as sketched below.
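
A minimal sketch of this idea on synthetic data: choose the parameters of f̂(x) = θx + c that minimize the empirical MAE. The data and the use of SciPy's general-purpose optimizer are only for illustration:

# Toy sketch of empirical risk minimization: choose theta and c so that the
# average absolute error of f_hat(x) = theta * x + c on the observed data is minimal.
# The data is synthetic; the "true" f(x) = 3x + 2 is of course unknown in practice.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=50)

def empirical_mae(params):
    theta, c = params
    return np.mean(np.abs(y - (theta * x + c)))  # (1/N) * sum of |y_n - f_hat(x_n)|

result = minimize(empirical_mae, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)  # fitted (theta, c), close to (3, 2)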



How to find f̂(x)
• If f̂(x) = θx + c, we assume a linear relationship.
• For a more complex relationship, a polynomial function can also be used.
• Choose a power p: f̂(x) = c + θ_1 x + θ_2 x² + … + θ_p x^p. A higher p implies a higher degree of freedom/flexibility (and a closer fit to the data -> risk of overfitting).



Changing p: overfitting / underfitting
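
A minimal sketch of this effect on synthetic data, fitting polynomials of increasing degree p with NumPy: a small p tends to underfit, while a large p can fit the training points closely yet generalize poorly to held-out points:

# Toy sketch: effect of the polynomial degree p (synthetic data, half used for
# fitting, half held out to measure generalization).
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-3, 3, size=30))
y = np.sin(x) + rng.normal(0.0, 0.2, size=30)  # unknown non-linear f(x) plus noise

x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for p in (1, 3, 10):
    coeffs = np.polyfit(x_train, y_train, deg=p)  # fit f_hat(x) of degree p
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(p, round(train_mse, 3), round(test_mse, 3))
# p = 1 underfits (high error on both sets); p = 10 fits the training points
# closely but typically does worse on the held-out points (overfitting)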