Summary Machine Learning

Pages: 38
Uploaded: 05-12-2023
Written in: 2023/2024

Summary of all the lectures of Machine Learning. It contains all the relevant material needed for the final exam.



Lecture 1 – Introduction to Machine Learning

What is machine learning (ML) about?
 ML is about automation of problem solving.
 It is the study of computer algorithms that improve automatically through experience.
 Involves becoming better at a task T, based on some experience E, with respect to some performance measure P.
 Examples: spam detection, movie recommendation, speech recognition, credit risk analysis,
autonomous driving and medical diagnosis.

What does it involve?
 ML may involve a notion of generalization: is it safe to assume that current observations can be generalized to future observations?
 An ML model should be generalizable – it should perform well on unseen data that is representative of the real-world domain.
 Labeled data, an objective, an optimization algorithm (model), features/representations (columns), and assumptions are some critical components.

Different types of learning
 Supervised learning: annotated/labelled dataset / ground truth
o Classification: discrete variable – e.g. spam detection
o Regression: continuous variable – e.g. predicting the price of a house
 Unsupervised learning: unlabeled dataset
o Clustering, association mining – customer segmentation, recommendation
 Semi-supervised learning: only a portion of the data is labeled – text classification
 Reinforcement learning: based on rewarding desired behaviors and/or punishing undesired ones (involves a feedback loop) – self-driving car

Example - SPAM versus non-Spam
 Binary classification problem




Learning process
 Find examples of SPAM and non-SPAM
 Come up with a learning algorithm
 A learning algorithm infers rules from examples: If (A or B or C) and not D, then SPAM
 These rules can then be applied to new data (emails)
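An inferred rule of that shape can be sketched as a tiny Python predicate. The rules A–D below are invented placeholders for illustration, not rules a real learner would produce:

```python
# A minimal sketch of rule-based spam classification. Each "rule" is a
# predicate over the email text; the inferred rule has the shape
# "if (A or B or C) and not D, then SPAM". Rules A-D are made up.

def rule_a(email): return "free money" in email.lower()
def rule_b(email): return "winner" in email.lower()
def rule_c(email): return email.count("!") > 3
def rule_d(email): return "unsubscribe confirmed" in email.lower()

def is_spam(email):
    return (rule_a(email) or rule_b(email) or rule_c(email)) and not rule_d(email)

print(is_spam("You are a WINNER! Claim your free money now!"))  # True
print(is_spam("Meeting moved to 3pm, see agenda attached."))    # False
```

Applying `is_spam` to new emails is the "apply rules to new data" step above; learning is the part that would infer the rules from labeled examples instead of hand-coding them.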

Learning algorithms

 See several different learning algorithms
 Implement 2-3 simple ones from scratch in Python
 Learn about Python libraries for ML (scikit-learn)
 How to apply them to real-world problems
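As a taste of the "from scratch" part, here is a minimal sketch of one of the simplest learning algorithms, 1-nearest-neighbour classification (the data points and labels are made up):

```python
# From-scratch 1-nearest-neighbour classifier: predict the label of the
# closest training point. Training data here is invented for illustration.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(train_X, train_y, x):
    # Pair each training point with its distance to x, take the nearest.
    distances = [(euclidean(p, x), label) for p, label in zip(train_X, train_y)]
    return min(distances)[1]

train_X = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.8)]
train_y = ["ham", "ham", "spam", "spam"]
print(predict_1nn(train_X, train_y, (0.3, 0.1)))  # ham
print(predict_1nn(train_X, train_y, (4.9, 5.1)))  # spam
```

scikit-learn provides the same idea (and much more) ready-made; implementing it once by hand makes the library versions easier to understand.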

Machine Learning – Examples
 Recognize handwritten numbers and letters
 Recognize faces in photos
 Determine whether text expresses positive, negative or no opinion
 Guess person’s age based on a sample of writing
 Flag suspicious credit-card transactions
 Recommend books and movies to users based on their own and others’ purchase history
 Recognize and label mentions of people’s or organization names in text

Types of learning problems: Regression
 Response: a (real) number
 Predict person’s age, predict price of a stock, predict student’s score on exam
 In regression, the response variable is predicted using a set of predictors that are believed to
have an influence on the response variable.

Types of learning problems: Binary classification
 Response: Yes/No answer
 Detect SPAM
 Predict polarity of product review: positive vs negative

Types of learning problems: Multiclass classification
 Response: one of a finite set of options
 Classify newspaper article as
o politics, sports, science, technology, health, finance
 Detect species based on photo
o Passer domesticus, Calidris alba, Streptopelia decaocto, Corvus corax, …

Types of learning problems: Multilabel classification
 Response: a finite set of Yes/No answers
 Assign songs to one or more genres
o rock, pop, metal
o hip-hop, rap
o jazz, blues
o rock, punk

Types of learning problems: Autonomous behavior
 Input: measurements from sensors – camera, microphone, radar, accelerometer, …
 Response: instructions for actuators – steering, accelerator, brake, …

How well is the algorithm learning?
 Evaluation: Choose a baseline, choose a metric, compare!

 Different tasks, different metrics

Predicting age – Regression
 Mean absolute error – the average absolute difference between true value and predicted value: MAE = (1/N) Σ_n |y_n − ŷ_n|, where y_n is the true value (ground truth) and ŷ_n the predicted value. It fails to punish large errors in prediction, as all errors are treated equally.

 Mean squared error – the average squared difference between true value and predicted value: MSE = (1/N) Σ_n (y_n − ŷ_n)². It is more sensitive to outliers, as the square amplifies the impact of large deviations.
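Both metrics take a few lines of plain Python; the toy age predictions below are made up to show how one large error dominates the MSE but not the MAE:

```python
# MAE and MSE over paired true values (y_true) and predictions (y_pred).

def mae(y_true, y_pred):
    # Average absolute difference: treats all errors equally.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Average squared difference: amplifies large deviations.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [20, 30, 40, 50]
y_pred = [22, 28, 40, 60]   # one prediction is 10 years off
print(mae(y_true, y_pred))  # 3.5
print(mse(y_true, y_pred))  # 27.0 -> the single large error dominates
```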



Predicting spam – Classification
 Accuracy – the fraction of correctly classified examples: Accuracy = (TP + TN) / (TP + TN + FP + FN).
 Drawback: it does not work well on imbalanced data. If the data is imbalanced, the accuracy will naturally be high.

Classification
Wrong classification
 False positive (FP) – flagged as SPAM, but is not SPAM
 False negative (FN) – not flagged, but is SPAM
 False positives are the bigger issue for this problem! In the medical field it is the other way around (false negatives are the bigger issue). Which of the two to minimize thus depends on the problem at hand.
Correct classification
 True positive (TP): Spam classified as spam
 True negative (TN): Not-spam classified as not-spam
Summarized in a confusion matrix.
 The confusion matrix can be extended with more decision classes.
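Counting the four outcomes, and the accuracy they imply, can be sketched as follows (the spam/ham labels are made up):

```python
# Confusion-matrix counts (TP, FP, FN, TN) for one positive class,
# plus the accuracy derived from them. Labels below are invented.

def confusion_counts(y_true, y_pred, positive="spam"):
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        if p == positive and t == positive:
            tp += 1          # spam classified as spam
        elif p == positive:
            fp += 1          # flagged, but not spam
        elif t == positive:
            fn += 1          # spam that slipped through
        else:
            tn += 1          # not-spam classified as not-spam
    return tp, fp, fn, tn

y_true = ["spam", "spam", "ham", "ham", "ham"]
y_pred = ["spam", "ham",  "ham", "spam", "ham"]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
print(tp, fp, fn, tn)                   # 1 1 1 2
print((tp + tn) / (tp + fp + fn + tn))  # accuracy = 0.6
```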

Precision and Recall
 Metrics which focus on one kind of mistake. These are better for imbalanced data (together with the F1-score).
 Precision (positive predictive value, PPV) – what fraction of flagged emails were real SPAM? Precision = TP / (TP + FP).
 Recall (sensitivity, hit rate, or true positive rate, TPR) – what fraction of real SPAMs were flagged? Recall = TP / (TP + FN).
 Specificity (selectivity, or true negative rate, TNR; not commonly used) – Specificity = TN / (TN + FP).

Fβ-score
 F1-score (F-measure): the harmonic mean of precision and recall, a kind of average: F1 = 2·P·R / (P + R).
 The parameter β quantifies how much more we care about recall than precision: Fβ = (1 + β²)·P·R / (β²·P + R). When β > 1, recall is weighted more; when β < 1, precision is weighted more.
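These formulas in Python, on made-up counts (8 TP, 2 FP, 8 FN):

```python
# Precision, recall and F_beta from confusion counts; the counts are
# invented for illustration.

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_beta(p, r, beta=1.0):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta=1 gives F1.
    return (1 + beta**2) * p * r / (beta**2 * p + r)

p = precision(tp=8, fp=2)   # 0.8
r = recall(tp=8, fn=8)      # 0.5
print(round(f_beta(p, r, beta=1), 3))  # 0.615 (harmonic mean of P and R)
print(round(f_beta(p, r, beta=2), 3))  # 0.541 -> pulled toward the lower recall
```

Note how weighting recall more (β = 2) drags the score toward recall, which is the weaker of the two here.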

Macro-average
 Precision and recall are usually calculated per decision class. Micro- and macro-averaging are ways to aggregate the measures.
 Per class: precision is true positives over predicted positives; recall is true positives over actual positives.
 Macro-average: compute precision and recall per class, then average them.
 Rare classes have the same impact as frequent classes (not always ideal).
 The Macro F1-score is the harmonic mean of Macro-Precision and Macro-Recall.
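A macro-average sketch on a made-up five-point, two-class example:

```python
# Macro-averaging: per-class precision/recall, then an unweighted mean
# over classes. The labels below are invented.

def per_class_precision_recall(y_true, y_pred, classes):
    precs, recs = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        predicted_c = sum(1 for p in y_pred if p == c)
        actual_c = sum(1 for t in y_true if t == c)
        precs.append(tp / predicted_c if predicted_c else 0.0)
        recs.append(tp / actual_c if actual_c else 0.0)
    return precs, recs

y_true = ["a", "a", "a", "b", "b"]
y_pred = ["a", "a", "b", "b", "a"]
precs, recs = per_class_precision_recall(y_true, y_pred, ["a", "b"])
macro_p = sum(precs) / len(precs)
macro_r = sum(recs) / len(recs)
macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r)  # harmonic mean
print(round(macro_p, 3), round(macro_r, 3), round(macro_f1, 3))  # 0.583 0.583 0.583
```

Class "b" has only two examples, yet its precision and recall count exactly as much as class "a"'s in the average.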

Micro-average
 Micro-averaging treats the entire dataset as one aggregate result and calculates one metric, rather than k per-class metrics that get averaged together.
 In micro-averaging, we calculate one aggregate result for the entire dataset (for precision and recall), and use the micro-averaged precision and recall for the micro-averaged F1.
 For single-label data, micro-averaged precision and recall are the same value, and so are the micro-averaged F1 and the accuracy.
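The corresponding micro-average on the same kind of made-up two-class labels, pooling TP/FP/FN over classes before computing anything:

```python
# Micro-averaging: sum TP/FP/FN over all classes first, then compute one
# precision and one recall from the pooled counts. Labels are invented.

y_true = ["a", "a", "a", "b", "b"]
y_pred = ["a", "a", "b", "b", "a"]
classes = ["a", "b"]

tp = fp = fn = 0
for c in classes:
    tp += sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
    fp += sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
    fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)

micro_p = tp / (tp + fp)
micro_r = tp / (tp + fn)
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(micro_p, micro_r, accuracy)  # 0.6 0.6 0.6 -> all equal for single-label data
```

Every wrong single-label prediction is simultaneously one pooled FP and one pooled FN, which is why micro-precision, micro-recall, micro-F1 and accuracy all coincide.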

How to find f̂(x): a solution workflow
 The best outcome we can hope for: f̂(x) = f(x) for all x. Ideally, we would like an f̂(x) such that a loss between f(x) and f̂(x) is minimized, i.e., L(f̂(x), f(x)) is small.
 The cost (average loss, plus possibly a regularization term) in the case of regression can be MSE or MAE, computed over all values of x.
 Problem 1: we do not have all values of x and f(x) (our sample might not represent the whole population).
 Problem 2: we do not know what f(x) looks like (its distribution).
 Instead, compute the loss on the data we have (empirical risk minimization); for MAE: minimize (1/N) Σ_n |f̂(x_n) − y_n|.
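Empirical risk minimization can be sketched as a grid search over a single parameter θ for an assumed model f̂(x) = θ·x with MAE loss; the data and the grid are made up for illustration:

```python
# Empirical risk minimization sketch: choose the slope theta that
# minimizes the MAE on the sample, for the assumed model f_hat(x) = theta*x.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with noise

def empirical_mae(theta):
    # Empirical risk: average |f_hat(x_n) - y_n| over the sample.
    return sum(abs(theta * x - y) for x, y in zip(xs, ys)) / len(xs)

thetas = [i / 10 for i in range(0, 41)]  # grid 0.0, 0.1, ..., 4.0
best = min(thetas, key=empirical_mae)
print(best)  # 2.0
```

Real optimizers (gradient descent, closed-form least squares) replace the grid, but the objective being minimized is the same empirical risk.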



How to find f̂(x)
 If f̂(x) = θx + c, we assume a linear relationship.
 For a more complex relationship, a polynomial function can also be used.
 Choose a power p: f̂(x) = c + θ_1 x + θ_2 x² + … + θ_p x^p. A higher p implies a higher degree of freedom/flexibility (and a closer fit to the data → risk of overfitting).



Changing p: overfitting / underfitting
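A sketch of the effect (assumes NumPy is available): fitting polynomials of increasing degree p to noisy data drives the training error toward zero, which is exactly why a low training error alone can hide overfitting:

```python
import numpy as np

# Fit polynomials of increasing degree p to noisy samples of a sine;
# the training MSE only shrinks as p grows. Data is synthetic.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)

mses = {}
for p in (1, 3, 9):
    coeffs = np.polyfit(x, y, deg=p)          # least-squares fit of degree p
    y_hat = np.polyval(coeffs, x)
    mses[p] = float(np.mean((y - y_hat) ** 2))
    print(p, round(mses[p], 4))
```

Degree 9 on 10 points nearly interpolates the noise (training MSE close to zero); judging by error on held-out data, not training data, is what exposes the overfitting.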