These are lecture notes / a summary of the Machine Learning course (lectures 1-10) for the Artificial Intelligence, Lifestyle Informatics, Computer Science, IMM, etc. programmes at the VU Amsterdam (2019). In the original document, everything marked in yellow is extra important and everything in blue is a given example.

MACHINE LEARNING SUMMARY 2019

LECTURE 1 - INTRODUCTION

WHAT IS MACHINE LEARNING?

The problem of induction (= how we obtain knowledge):
• Deductive reasoning: "all men are mortal, Socrates is a man, therefore Socrates is mortal" → discrete, unambiguous, provable, known rules
• Inductive reasoning: "the sun has risen in the east every day of my life, so it'll do so again tomorrow" → fuzzy, ambiguous, experimental, unknown rules

Machine learning = giving systems the ability to automatically learn and improve from experience without being explicitly programmed. This course mostly uses offline learning (train your model once).
1. Where do we use ML?
   o Inside other software (unlocking a phone with your face, using voice commands)
   o In analytics/data mining (finding typical clusters of users, predicting spikes in web traffic)
   o In science/statistics (if any model can predict A from B, there must be some relation)
2. What makes a good ML problem?
   o We can't solve it explicitly, but approximate solutions are fine (recommending movies)
   o Limited reliability, predictability and interpretability are fine
3. What problems does ML solve?
   o Explicit solutions are expensive, and the best solution may depend on the circumstances or change over time.
   o We don't know exactly how our actions influence the world and can't fully observe it → we need to guess

Intelligent agent:
• Online learning: acting and learning at the same time
• Reinforcement learning: online learning in a world based on delayed feedback

Offline learning = separate LEARNING and ACTING (most of the course):
1. Take a fixed dataset of examples (aka instances) → train a model to learn from these examples → test the model to see if it works

SUPERVISED MACHINE LEARNING TASKS

• Supervised: give the model examples of input and output (what you want it to do) → learn to predict the output for an unseen input. Models: linear models, tree models and kNN models.
• Unsupervised: only inputs are provided → find any pattern that explains something about the data. More difficult, but can be useful because labeling data is very expensive. Methods: clustering, density estimation and generative modeling.





Supervised learning = give the model examples of what you want it to do. Types of models: linear models, tree models and kNN models.

The two spaces of machine learning:
1. Model space: the space you search for a good model to fit your data (every point is a model)
   a. Discrete: tree models
   b. Continuous: between every two models there is always another model, no matter how close they are together (see the small sketch after this list)
2. Feature space: the space in which you plot/look at your data
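
For example (an illustration, not from the lecture): if a linear model is described by its slope and intercept, the model space is the continuous plane of all (slope, intercept) pairs, and halfway between any two models lies another model:

```python
# Two linear models y = a*x + b, described by their parameters (slope a, intercept b)
m1 = (1.0, 0.0)
m2 = (3.0, 2.0)

# Their midpoint in model space is itself a valid linear model
mid = ((m1[0] + m2[0]) / 2, (m1[1] + m2[1]) / 2)
print(mid)  # (2.0, 1.0): a third model between the first two, however close they are
```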

CLASSIFICATION
Classification = assign a class to each example.
1. Create a dataset. Example problem: classifying emails as spam/ham.
   o Feature = something you measure about your instances (e.g. how many times the word 'viagra' occurs)
   o Label = what we try to predict (spam/ham)
   o Instance = one example you give to the model (one email)
2. Pass that dataset to a learning algorithm → the algorithm comes up with a model (aka classifier), which guesses for each new email whether it is spam or not.
3. Examples of classification:
o Optical Character Recognition (OCR): reading handwritten text/digits
o Playing chess
o Self-driving car, such as ALVINN (1995)
4. Examples of classifiers:
o Linear classifier
o Decision tree classifier: watch out for overfitting
o K-Nearest Neighbors (lazy classifier): doesn't learn/build a model, it just memorizes the data → for a new example, it looks at the k nearest instances in your dataset and takes a majority vote (see the sketch after this list)
5. Variations:
o Features: usually numerical/categorical
o Binary classification: two classes (male/female)
o Multiclass classification: more than two classes
o Multilabel classification: more than two classes, and none, some or all of them may apply to an instance
o Class probabilities/scores: the classifier reports a probability or score for each class
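
Below is a minimal sketch of the kNN idea from point 4 (memorize the data, then take a majority vote among the k nearest training instances). The toy spam features and the function name are made up for illustration; they are not from the lecture.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among the k nearest training instances."""
    distances = np.linalg.norm(X_train - x_new, axis=1)    # distance to every memorized instance
    nearest = np.argsort(distances)[:k]                    # indices of the k closest ones
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

# Toy dataset: features = [count of the word 'viagra', number of links in the email]
X_train = np.array([[0, 1], [5, 9], [1, 0], [7, 8]])
y_train = np.array(["ham", "spam", "ham", "spam"])
print(knn_predict(X_train, y_train, np.array([6, 7]), k=3))  # -> "spam"
```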

Loss function = measures the performance of the model on the data; the lower, the better! (The bigger the loss, the worse the model.) E.g. for classification: the number of misclassified examples.

Overfitting = your training loss goes down, but your validation loss stays the same (or goes up): the model starts memorizing the training data instead of generalizing. Therefore, split your data into training and test data. The aim is to minimize the loss on your test data (NOT on the training data); see the sketch below.
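
A small sketch of that split, using the classification loss (number of misclassified examples). The data and the fixed threshold "model" are made up; with a real learner you would watch the training loss keep dropping while the test loss stops improving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dataset: 100 instances with 2 numeric features and a binary label
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Split the data: train on one part, measure the loss on the held-out part
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:80], idx[80:]

def predict(x):
    # Placeholder for a trained classifier: a simple threshold rule on the first feature
    return int(x[0] > 0)

def misclassified(X_part, y_part):
    # The classification loss from the notes: number of misclassified examples
    return sum(predict(x) != t for x, t in zip(X_part, y_part))

print("training loss:", misclassified(X[train_idx], y[train_idx]))
print("test loss:    ", misclassified(X[test_idx], y[test_idx]))
```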

REGRESSION
Regression = assign a number to each example (instance).
• Loss function for regression, a.k.a. the mean-squared-error (MSE) loss: loss(p) = 1/n * Σ_i (f_p(x_i) - y_i)²
   a. p stands for the parameters that define the line.
   b. Residuals: the differences between the model predictions and the target values from the data. We square them and then sum (average) them; the closer to zero, the better. (They are squared because otherwise positive and negative residuals might cancel out.)
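
A short sketch of the MSE loss for a line with parameters p = (slope, intercept); the toy data is illustrative.

```python
import numpy as np

def mse_loss(p, x, y):
    """Mean-squared-error loss of the line f_p(x) = a*x + b, with p = (a, b)."""
    a, b = p
    residuals = (a * x + b) - y      # prediction minus target, per instance
    return np.mean(residuals ** 2)   # square (so signs don't cancel), then average

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
print(mse_loss((1.0, 0.0), x, y))    # small: this line fits the data well
print(mse_loss((0.0, 5.0), x, y))    # large: this line fits badly
```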



UNSUPERVISED MACHINE LEARNING TASKS

Unsupervised learning: all you have is the data as is, without example outputs. Much more difficult, but it can be useful because labeling data is very expensive.
• Clustering: split your data into clusters (subsets of similar instances), e.g. with k-means clustering (see the sketch after this list)
• Density estimation: given your data, which values/examples are more likely than others? (→ modelling the probability distribution behind your data)
   o Discrete feature space: the model produces a probability (the sum of the answers over the whole feature space should be 1)
   o Continuous feature space: the features are numeric and the model produces a probability density (the answers should integrate to 1)
• Generative modeling: building a model from which you can sample new examples
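
As a concrete illustration of the clustering bullet above, here is a bare-bones k-means sketch (alternate between assigning each point to its nearest mean and moving each mean to the average of its points). The initialisation and stopping rule are deliberately simplistic, and the data is made up.

```python
import numpy as np

def k_means(X, k=2, iterations=10, seed=0):
    """Bare-bones k-means: alternate between assigning points and updating the means."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)]  # start from k random data points
    for _ in range(iterations):
        # Assignment step: give every point the label of its nearest mean
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each mean to the average of the points assigned to it
        for j in range(k):
            if np.any(labels == j):
                means[j] = X[labels == j].mean(axis=0)
    return labels, means

# Made-up 2D data with two obvious groups
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
labels, means = k_means(X, k=2)
print(labels)  # e.g. [0 0 1 1]: one cluster near (0, 0), one near (5, 5)
```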

WHAT ISN’T MACHINE LEARNING?

OTHER FIELDS THAT ARE RELATED TO ML / NOT ML
• AI: automated reasoning, planning
• Data science: gathering data, harmonizing data, interpreting data
• Data mining: finding common clickstreams in web logs, finding fraud in transactions
   o More ML than DM: spam classification, predicting stock prices, learning to control a robot
• Information retrieval: building search engines
• Statistics: analyzing research results, experiment design, courtroom evidence
   o More ML than statistics: spam classification, movie recommendation
   o Difference between ML and statistics: statistics aims to get at the truth, whereas machine learning tries to come up with something that works, regardless of whether it's true!
      § "The machine learning approach, measuring models purely by predictive accuracy on a large test set, has a lot of benefits and makes the business of statistics a lot simpler and more effective."
• Deep learning is a subfield of ML


OFFLINE MACHINE LEARNING BASIC RECIPE:
1. Abstract (part of) your problem to a standard task.
o Classification, Regression, Clustering, Density estimation, Generative Modeling
2. Choose your instances and their features. For supervised learning, choose a target.
3. Choose your model class: linear models, decision trees, kNN (and choose k, the number of neighbouring points to consider).
4. Search for a good model. Choose a loss function, choose a search method to minimize the loss.
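
To make the recipe concrete, here is one possible end-to-end sketch using scikit-learn (just one common toolkit, not something the course prescribes): the task is classification, the instances are synthetic, the model class is a linear model, and the library searches for parameters that minimise a loss for us.

```python
from sklearn.datasets import make_classification        # step 2: instances and features (synthetic here)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression     # step 3: model class = linear model

# Step 1: the task is (binary) classification
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 4: search for a good model (the library minimises a loss internally)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy on unseen test data:", model.score(X_test, y_test))
```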



