Summary Machine Learning (880083-M-6)

Full course with all notes and explanations (excluding the Python part).

Machine Learning
Cicek Guven, Itir Onal



This course covers four ML algorithms: Decision Tree, Perceptron, Logistic Regression and Neural
Networks. The first chapter is about Machine Learning in general. The fourth chapter covers the
optimization problem (decoupled from any particular model), which applies to all of the algorithms.
The fifth chapter discusses how to best represent the data that goes into a model.



LECTURE 1 INTRODUCTION
1.1 MACHINE LEARNING
1.2 TYPES OF LEARNING PROBLEMS
1.3 EVALUATION: HOW WELL IS THE ALGORITHM LEARNING?

LECTURE 2 DECISION TREE
2.1 LEARNING RULES WHILE PLAYING A GAME
2.2 HOW TO BUILD A DECISION TREE
2.3 EFFICIENCY, SPEED AND DEPTH OF A DT
2.4 IMPURITY MEASURES
2.5 HOW CAN WE USE DECISION TREES FOR REGRESSION?
2.6 ADVANTAGES AND DISADVANTAGES OF DECISION TREES

LECTURE 3 PERCEPTRON
3.1 WHAT IS A PERCEPTRON?
3.2 ALGORITHM
3.3 POSSIBLE STUMBLING BLOCKS

LECTURE 4 GRADIENT DESCENT
4.1 INTRODUCTION
4.2 ERROR AND OPTIMIZATION
4.3 SLOPE, DERIVATIVE AND GRADIENT
4.4 STOCHASTIC GRADIENT DESCENT (SGD)

LECTURE 5 REPRESENTATION
5.1 OVERVIEW OF ML PIPELINE
5.2 FEATURE ENGINEERING
5.3 DOMAIN DEPENDENCE
5.4 CHOOSING FEATURES
5.5 TEXT CLASSIFICATION
5.6 IMAGE CLASSIFICATION
5.7 FEATURE ABLATION ANALYSIS

LECTURE 6 LOGISTIC REGRESSION
6.1 RECAP OF UPDATE RULE OF (S)GD
6.2 LOSS FUNCTION FOR CLASSIFICATION
6.3 COST FUNCTION OF LOGISTIC REGRESSION
6.4 SGD FOR THE LOSS FUNCTION
6.5 SUMMARY LINEAR REGRESSION VS. LOGISTIC REGRESSION
6.6 HOW TO CONTROL (OVER)FITTING?

LECTURE 7 NEURAL NETS
7.1 RECAP
7.2 THE BRAIN
7.3 FEED-FORWARD NEURAL NETWORK (A.K.A. MULTI-LAYER PERCEPTRON)
7.4 NEURAL NETWORK PREDICTION
7.5 EX: REPRESENTING XOR
7.6 COST FUNCTIONS
7.7 TRAINING THE NEURAL NETWORK
7.8 SUMMARY OF (ARTIFICIAL) NEURAL NETWORKS
7.9 SPECIAL TYPES OF NEURAL NETWORKS

Lecture 1 Introduction


1.1 Machine Learning

Machine Learning (ML) is the study of computer algorithms that improve automatically through
experience. It involves becoming better at a task (T), based on some experience (E), with respect to
some performance measure (P).


1.1.1 Learning process

1) Find examples of labels/experiences.
2) Come up with a learning algorithm, which infers rules from the examples (training set).
3) Apply the rules to new data (see the sketch below).
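
As a minimal illustration of these three steps (not part of the original notes, which exclude the
Python material), the sketch below trains a decision tree with scikit-learn on a handful of made-up
labelled examples and then applies the learned rules to new data. The feature values and labels are
invented purely for illustration.

from sklearn.tree import DecisionTreeClassifier

# 1) Examples: each row describes an email with two made-up features
#    (count of suspicious words, sender known = 1 / unknown = 0) plus a label.
X_train = [[5, 0], [0, 1], [7, 0], [1, 1]]
y_train = ["spam", "not spam", "spam", "not spam"]

# 2) Learning algorithm: infer rules from the training set.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# 3) Apply the learned rules to new, unseen data.
print(model.predict([[6, 0], [0, 1]]))  # e.g. ['spam' 'not spam']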


1.1.2 Examples

- Filter email: if (A or B or C) and not D, then “spam” (sketched in code below).
- Recognize handwritten numbers and letters.
- Recognize faces in photos.
- Determine whether text expresses positive, negative or no opinion.
- Guess a person’s age based on a sample of writing.
- Flag suspicious credit-card transactions.
- Recommend books and movies to users based on their own and others’ purchase history.
- Recognize and label mentions of people or organization names in text.

ML is not meant for problems that come down to random guessing, such as predicting the number rolled
with some dice. It studies algorithms that learn from examples.
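
The hand-written filter rule from the email example above, “if (A or B or C) and not D, then spam”,
can be written directly as code. Here A, B, C and D are hypothetical Boolean conditions about an
email; the point of ML is to learn such rules from labelled examples rather than writing them by
hand. A rough sketch:

def is_spam(a: bool, b: bool, c: bool, d: bool) -> bool:
    """Return True if the email matches the hand-written spam rule."""
    # Hand-coded rule: (A or B or C) and not D.
    return (a or b or c) and not d

# Example: condition A holds (say, the subject contains "WIN") and D does not
# (the sender is not in the address book), so the rule flags the email.
print(is_spam(a=True, b=False, c=False, d=False))  # True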

1.2 Types of learning problems

- Regression. Response: a (real) number. Examples: predict a person’s age, predict the price of a
  stock, predict a student’s score on an exam.
- Binary classification. Response: a YES/NO answer (a condition is there or not). Examples: detect
  spam, predict the polarity of a product review (positive vs. negative).
- Multiclass classification. Response: one of a finite set of options. Examples: detect a species
  based on a photo, classify a newspaper article as <politics>, <sports>, …
- Multilabel classification. Response: a finite set of YES/NO answers. Example: assign songs to one
  or more genres (rock, pop, metal, hip-hop, rap).
- Ranking. Response: objects ordered according to relevance. Examples: rank web pages in response to
  a user query, predict a student’s preference for courses in a program.
- Sequence labeling. Input: a sequence of elements (e.g. words). Response: a corresponding sequence
  of labels. Example: label the words in a sentence with their syntactic category (noun, adverb, verb).
- Sequence-to-sequence modeling. Input: a sequence of elements (e.g. words). Response: a sequence of
  other elements (possibly of a different length, possibly from a different set). Examples:
  translation (“My name is Penelope” → “Me llamo Penélope”), computer-generated subtitles.
- Autonomous behavior. Input: measurements from sensors (microphone, accelerometer, …). Response:
  instructions for actuators (steering, accelerator, brake, …). Example: a self-driving car.
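
As a rough, invented illustration of how the response differs across the problem types above, the
sketch below (using NumPy, which is not part of these notes) shows what the targets could look like
for a few of them. All values are made up.

import numpy as np

y_regression = np.array([21.0, 34.5, 18.2])   # a real number per example (e.g. a person's age)
y_binary     = np.array([1, 0, 1])            # a YES/NO answer per example (e.g. spam or not)
y_multiclass = np.array([2, 0, 1])            # one option from a finite set
                                              # (e.g. newspaper sections: 0=politics, 1=sports, ...)
y_multilabel = np.array([[1, 0, 1],           # a set of YES/NO answers per example
                         [0, 1, 1],           # (e.g. genres: rock, pop, metal)
                         [1, 1, 0]])

# Sequence labeling: one label per element of an input sequence.
words  = ["Penelope", "runs", "fast"]
labels = ["noun", "verb", "adverb"]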