Summary: Learning Partial Exam 2
Pages: 10
Uploaded on: 18-04-2020
Written in: 2016/2017

This is a summary for the second partial exam of the course Learning at the University of Amsterdam. The summary follows the order of the lectures.



Learning Summary 2
Decision Trees
Decision tree representation:
- Each internal node tests an attribute
- Each branch corresponds to an attribute value
- Each leaf node assigns a classification

Top-Down Induction of Decision Trees, main loop:
1. A ← the "best" decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant of the node
4. Sort the training examples to the leaf nodes
5. If the training examples are perfectly classified, then STOP; else iterate over the new leaf nodes
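The loop above can be sketched as a recursive function. This is an illustrative sketch only (the names `examples`, `choose_best`, etc. are assumptions, not the course's notation); the attribute-selection heuristic, such as information gain, is passed in as a parameter:

```python
from collections import Counter

def id3(examples, labels, attributes, choose_best):
    """Sketch of the top-down induction loop.

    `examples` are dicts mapping attribute -> value; `choose_best` is the
    attribute-selection heuristic (e.g. information gain).
    """
    # STOP: the training examples at this node are perfectly classified.
    if len(set(labels)) == 1:
        return labels[0]
    # No attributes left to test: fall back to the majority class.
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    # Steps 1-2: pick the "best" decision attribute A for this node.
    A = choose_best(examples, labels, attributes)
    tree = {A: {}}
    # Steps 3-4: one descendant per value of A; sort the examples to it.
    for v in set(ex[A] for ex in examples):
        sub_ex = [ex for ex in examples if ex[A] == v]
        sub_lab = [lab for ex, lab in zip(examples, labels) if ex[A] == v]
        remaining = [a for a in attributes if a != A]
        # Step 5: iterate (here, recurse) over the new leaf node.
        tree[A][v] = id3(sub_ex, sub_lab, remaining, choose_best)
    return tree
```

The tree is returned as nested dicts keyed by attribute and value, with class labels at the leaves.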

Entropy
Entropy(S) is the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code). Entropy measures the degree of uncertainty:

Entropy(S) = -p+ log2(p+) - p- log2(p-)

where p+ and p- are the proportions of positive and negative examples in S. A related impurity measure is the binary variance p(1 - p).
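A minimal sketch of the binary entropy function, showing that uncertainty peaks at an even class split, just like the binary variance:

```python
import math

def entropy(p):
    """Binary entropy in bits, for a proportion p of positive examples."""
    if p in (0.0, 1.0):
        return 0.0  # a pure sample carries no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Uncertainty is maximal at an even split and zero for a pure sample:
print(entropy(0.5))  # 1.0 bit
print(entropy(1.0))  # 0.0 bits
# The binary variance p(1 - p) also peaks at p = 0.5:
print(0.5 * (1 - 0.5))  # 0.25
```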

Information Gain
Gain(S, A) is the expected reduction in entropy due to sorting on A:

Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)

[Worked example omitted: Gain computed for the attributes Humidity and Wind on a sample.] The information gain is higher for the attribute Humidity, so Humidity is the better attribute to split on.
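A sketch of the gain computation on a hypothetical toy sample (this is not the original figure's table): Humidity separates the classes perfectly, so splitting on it removes the full bit of entropy, while Wind removes none.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(examples, labels, attribute):
    """Gain(S, A) = Entropy(S) - sum over v of |S_v|/|S| * Entropy(S_v)."""
    n = len(labels)
    remainder = 0.0
    for v in set(ex[attribute] for ex in examples):
        subset = [lab for ex, lab in zip(examples, labels)
                  if ex[attribute] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Hypothetical toy data: Humidity predicts the class exactly, Wind is noise.
examples = [{"Humidity": "High",   "Wind": "Weak"},
            {"Humidity": "High",   "Wind": "Strong"},
            {"Humidity": "Normal", "Wind": "Weak"},
            {"Humidity": "Normal", "Wind": "Strong"}]
labels = ["-", "-", "+", "+"]
print(gain(examples, labels, "Humidity"))  # 1.0
print(gain(examples, labels, "Wind"))      # 0.0
```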

ID3 Algorithm
There is noise in the data, so we need to make sure the model does not fit that noise: a model that did would not generalize to new examples.
- Preference for short trees, and for trees that place attributes with high information gain near the root.
- This bias is a preference for some hypotheses, rather than a restriction of the hypothesis space H.
- Occam's razor: prefer the shortest hypothesis that fits the data.
  - Arguments in favor:
    - There are fewer short hypotheses than long hypotheses
    - A short hypothesis that fits the data is unlikely to be a coincidence
    - A long hypothesis that fits the data might be a coincidence
  - Arguments opposed:
    - There are many ways to define small sets of hypotheses
    - E.g. all trees with a prime number of nodes that use attributes beginning with "Z"