Complete WEEK3 note: Machine Learning & Learning Algorithms (BM05BAM)


THIS IS A COMPLETE NOTE FROM ALL BOOKS + LECTURES! Save your time for internships and other courses by studying from this note. Are you a 1st/2nd-year Business Analytics Management student at RSM who wants to survive the Block 2 Machine Learning module? Are you overwhelmed by 30 pages of reading every week, full of brand-new terms and formulas? If you don't know where to start, if you are struggling to keep up because of your other courses, or if you simply want to learn about Machine Learning, I've got you covered. I passed the course Machine Learning & Learning Algorithms at RSM with a 7.6, WITHOUT A TECHNICAL BACKGROUND before this Master. If you come from a non-technical bachelor programme, this note will point you to the knowledge you need to pass the exam and complete the assignments; if you already have some machine learning knowledge, it will make your life easier and boost your grade.

Document information

Uploaded on: March 12, 2024
Number of pages: 13
Written in: 2023/2024
Type: Class notes
Professor(s): Jason Roos
Contains: All classes


Content preview

4.4.2: Linear Discriminant Analysis for p > 1
Positive class: the one we care most about predicting

Two types of prediction in classification
1. Soft prediction: p̂ = Pr(in positive class), a value between 0 and 1.
   a. Soft prediction is more fundamental: the hard prediction is derived from the
      result of the soft prediction.
2. Hard prediction: a particular class predicted by comparing the soft prediction p̂
   to a threshold value (see the sketch below).
   a. The hard prediction depends heavily on the decision threshold.
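A minimal sketch of the soft-to-hard step, in Python with scikit-learn (the course itself works in R/tidymodels); the logistic model and toy data are illustrative assumptions, not from the notes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: X = one feature, y = true classes (1 = positive, 0 = negative)
X = np.array([[0.2], [1.5], [2.1], [3.3], [0.7], [2.8]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Soft prediction: p_hat = Pr(positive class | x), a value between 0 and 1
p_hat = model.predict_proba(X)[:, 1]

# Hard prediction: compare p_hat to a decision threshold (0.5 here)
threshold = 0.5
hard_pred = (p_hat >= threshold).astype(int)

print(p_hat.round(2))   # soft predictions
print(hard_pred)        # hard predictions
```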

Classification metrics
A binary classifier's two error types:
1. False positive (e.g., an individual who does not default is predicted to default)
2. False negative (e.g., an individual who defaults is predicted not to default)

The confusion matrix displays which of these two types of errors are being made (see
the sketch below).
Even when the overall accuracy is satisfactory, the individual FP or FN error rate can
be unacceptably high.
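A sketch of reading the confusion matrix with scikit-learn; the true classes and predictions are made-up toy values:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true classes and hard predictions (1 = default, 0 = no default)
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 0, 1, 0, 1, 0])

# For labels (0, 1) the matrix is [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn}  FP={fp}  FN={fn}  TP={tp}")

# Decent overall accuracy can still hide a high false-negative rate
accuracy = (tp + tn) / len(y_true)
print(f"accuracy={accuracy:.2f}, false-negative rate={fn / (fn + tp):.2f}")
```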

Statistics that combine precision and recall (both are computed in the sketch below):
F1: harmonic mean of precision and recall.
FM (Fowlkes–Mallows index): geometric mean of precision and recall.
Both ignore true negatives: under class imbalance they depend only on the positive
cases, so they may perform worse in some settings.
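A sketch (Python, toy labels assumed) showing that both metrics follow directly from precision and recall:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]

p = precision_score(y_true, y_pred)   # TP / (TP + FP)
r = recall_score(y_true, y_pred)      # TP / (TP + FN)

f1 = 2 * p * r / (p + r)              # harmonic mean of precision and recall
fm = (p * r) ** 0.5                   # geometric mean of precision and recall (FM)

# f1 matches scikit-learn's built-in f1_score
print(round(f1, 3), round(f1_score(y_true, y_pred), 3), round(fm, 3))
```

Note that neither formula contains TN, which is exactly why both metrics ignore true negatives.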

Alternatives: correlation coefficients
Correlation between the true and the predicted class, i.e. the Matthews correlation
coefficient (MCC):
MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))

Correlation does not explicitly measure prediction accuracy; it only captures
the strength and direction of a linear relationship.

Cohen's kappa: equals 0 when TP and TN occur at exactly the rate you would expect by
chance.
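A sketch (Python/scikit-learn, same toy labels as above) of both chance-corrected metrics:

```python
from sklearn.metrics import matthews_corrcoef, cohen_kappa_score

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# MCC: correlation between true and predicted class,
# from -1 (total disagreement) through 0 (chance level) to +1 (perfect)
mcc = matthews_corrcoef(y_true, y_pred)

# Cohen's kappa: 0 = agreement no better than chance, 1 = perfect agreement
kappa = cohen_kappa_score(y_true, y_pred)

print(round(mcc, 3), round(kappa, 3))
```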

Comparison of the four metrics (F1, FM, MCC, Cohen's kappa)

Class-specific performance
Sensitivity / recall / TPR: percentage of actual positives that are predicted as positive.
Specificity / true negative rate (TNR): percentage of actual negatives that are predicted
as negative.
Precision / positive predictive value (PPV): different from the first two; the proportion
of items classified as positive that are actually positive (all three are computed in the
sketch below).
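A sketch (Python, toy labels assumed) of the three class-specific rates computed straight from the confusion-matrix counts:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # recall / TPR: share of actual positives caught
specificity = tn / (tn + fp)   # TNR: share of actual negatives correctly rejected
precision   = tp / (tp + fp)   # PPV: share of predicted positives that are correct

print(round(sensitivity, 2), round(specificity, 2), round(precision, 2))
```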

When they are used as pairs…
Precision + recall is useful when the focus is on the positive class (both ignore TN);
this pair does not directly assess predictions for the negative class.
Recall + specificity treats both positives and negatives as important.

The Bayes classifier works by assigning an observation to the class for which the
posterior probability pk(X) is greatest (see the sketch below).
- In a binary classifier, an observation is assigned to the positive class if
  Pr(positive = yes | X = x) > 0.5.
- This threshold is adjustable.
  o tidymodels uses 0.5 by default, and the value is not straightforward to adjust.
The Bayes classifier yields the smallest possible total number of misclassified
observations, regardless of the class from which the errors stem.
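A minimal sketch of the Bayes-classifier rule; the posterior probabilities below are made-up values, not estimates from any model in the notes:

```python
import numpy as np

# Hypothetical posterior probabilities p_k(x) for 3 classes and 4 observations
# (rows = observations, columns = classes; each row sums to 1)
posteriors = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.55, 0.30],
    [0.33, 0.33, 0.34],
    [0.05, 0.10, 0.85],
])

# Bayes classifier: assign each observation to the class with the largest posterior
bayes_pred = posteriors.argmax(axis=1)
print(bayes_pred)   # -> [0 1 2 2]
```

In the binary case this reduces to comparing Pr(positive | X = x) with 0.5.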

Thresholds and class imbalance
There is a trade-off between TP and TN, and between FP and FN, and it depends on the
threshold value used to build the confusion matrix.
By varying the decision threshold, we determine the elements of the confusion matrix
(see the sketch below).
- The lower the threshold, the more observations are classified into the positive class.
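A sketch (Python, toy soft predictions assumed) of how sweeping the threshold shifts the confusion-matrix counts:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical soft predictions and true classes
p_hat  = np.array([0.05, 0.20, 0.35, 0.45, 0.55, 0.60, 0.75, 0.90])
y_true = np.array([0,    0,    0,    1,    0,    1,    1,    1])

for threshold in (0.7, 0.5, 0.3):
    hard_pred = (p_hat >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, hard_pred).ravel()
    print(f"threshold={threshold}: TP={tp} FP={fp} FN={fn} TN={tn}")

# Lowering the threshold pushes more observations into the positive class:
# TP and FP go up while TN and FN go down.
```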