Summary: Exam notes of Machine Learning & Learning Algorithms (RSM Business Analytics & Management)


A summary of both the book and the lecture notes, including a recap of the key parameters of each machine learning model, to help the buyer prepare for final exams in machine-learning-related courses. With the help of these notes I received an excellent grade of 8.2 for the Machine Learning & Learning Algorithms exam.


Document information

Uploaded on
21 January 2026
Number of pages
31
Written in
2025/2026
Type
Summary

Preview of the content

Function Approximation View of Statistical Learning
In supervised learning, the response follows the model:

Y = f(X) + ε,

where:

• f(X) is the true unknown function we want to approximate.

• ε is irreducible noise with mean zero.

Goal of statistical learning: produce an estimator f̂(X) that approximates f(X) well.
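
This data-generating model is easy to simulate. A minimal sketch, where the true function f(x) = sin(2πx) and the noise level 0.3 are illustrative assumptions, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # hypothetical "true" function, unknown to the learner in practice
    return np.sin(2 * np.pi * x)

n = 100
X = rng.uniform(0, 1, size=n)        # features
eps = rng.normal(0, 0.3, size=n)     # irreducible noise, mean zero
Y = f(X) + eps                       # observed responses

# A learning method sees only (X, Y) and tries to produce an
# estimate f_hat that approximates f.
```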


Loss Function
Purpose
A loss function quantifies how bad a prediction is. Choosing a loss function implicitly
defines the prediction function f (x) that minimizes expected loss.

L2 Loss (Squared Error Loss)

L2(Y, f(X)) = (Y − f(X))²

Optimal predictor:
f(x) = Mean[Y | X = x]
Properties:

• Dominant in regression.

• Sensitive to large errors.

• Produces the conditional mean.

L1 Loss (Absolute Error Loss)

L1(Y, f(X)) = |Y − f(X)|

Optimal predictor:
f(x) = Median[Y | X = x]
Properties:

• More robust to outliers.

• Leads to the conditional median.
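
Both optimal predictors can be checked numerically. A sketch using an assumed skewed toy distribution (exponential with mean 2, so mean and median differ): the constant that minimizes average squared error lands on the sample mean, while the constant that minimizes average absolute error lands on the sample median.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=10_000)   # skewed, so mean != median

grid = np.linspace(0, 10, 2001)               # candidate constant predictions c
l2_risk = [np.mean((y - c) ** 2) for c in grid]
l1_risk = [np.mean(np.abs(y - c)) for c in grid]

print("argmin L2 risk:", grid[np.argmin(l2_risk)], "| sample mean:  ", y.mean())
print("argmin L1 risk:", grid[np.argmin(l1_risk)], "| sample median:", np.median(y))
```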





0–1 Loss (Classification Loss)


L0−1(Y, f(X)) = 1[Y ≠ f(X)]

Optimal predictor:
f(x) = Mode[Y | X = x]
Properties:
• Used in classification.

• Minimizing 0–1 loss yields the Bayes classifier.
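
A quick simulation of this (the conditional class probabilities at a fixed x below are illustrative assumptions): always predicting the conditional mode gives the lowest error rate under 0–1 loss.

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed conditional class probabilities at some fixed x:
# P(Y=0)=0.2, P(Y=1)=0.5, P(Y=2)=0.3 (illustrative values).
p = np.array([0.2, 0.5, 0.3])
y = rng.choice(3, size=100_000, p=p)

for guess in range(3):
    print(f"always predict {guess}: error rate = {np.mean(y != guess):.3f}")
# The lowest error rate comes from class 1, the conditional mode,
# i.e. the Bayes classifier at this x.
```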


Bias–Variance Trade-off
Under L2 loss, expected test error decomposes into:

E[(Y − f̂(X))²] = Var[f̂(X)] + (Bias[f̂(X)])² + Var[ε]
(test error = variance + squared bias + irreducible error)
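
The decomposition can be verified by Monte Carlo: refit the same model on many fresh training sets and compare variance + squared bias + noise variance with the simulated test error at one point. In the sketch below, the true function, the noise level, and the degree-3 polynomial model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(2 * np.pi * x)    # assumed true function
sigma, n, reps = 0.3, 30, 2000         # noise sd, training size, repetitions
x0, degree = 0.25, 3                   # test point, model complexity

preds, sq_errs = [], []
for _ in range(reps):
    X = rng.uniform(0, 1, n)
    Y = f(X) + rng.normal(0, sigma, n)
    coeffs = np.polyfit(X, Y, degree)      # refit f_hat on a fresh training set
    pred = np.polyval(coeffs, x0)          # f_hat(x0)
    preds.append(pred)
    y_new = f(x0) + rng.normal(0, sigma)   # fresh test response at x0
    sq_errs.append((y_new - pred) ** 2)

preds = np.array(preds)
variance = preds.var()
bias_sq = (preds.mean() - f(x0)) ** 2
print("Var + Bias² + σ²       :", variance + bias_sq + sigma ** 2)
print("Monte Carlo test error :", np.mean(sq_errs))   # ≈ the same number
```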


Bias
• Error that occurs when a model is too simple to capture the true patterns in the data

• High bias: The model oversimplifies → misses patterns and underfits the data.

• Low bias: The model captures the patterns well and is closer to the true values; the flexibility this requires brings a risk of overfitting.

Variance
• How much a model's predictions change when it is trained on different data.

• High variance: The model is too sensitive to small changes in the training data → overfitting.

• Low variance: The model is more stable but might miss some patterns → underfitting.

Reducible Error

E[(f(X) − f̂(X))²] = Var[f̂(X)] + (Bias[f̂(X)])²
(reducible error = variance + squared bias)




• Origin: Inability to perfectly estimate the true function f(X)

• Reducible error = bias² + variance

• Reason: We use an approximation (a model) instead of the true function


• How to reduce bias:
– Use more complex models
– Use more relevant features
– Reduce regularization to allow the model more flexibility in fitting
• How to reduce variance (one tactic is sketched after this list):
– Simplify the model
– Increase training data
– Apply regularization to constrain model complexity
– Use ensemble methods
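
As a sketch of the last variance tactic, bagging averages an unstable base model over bootstrap resamples; the 1-nearest-neighbour base model, the data-generating process, and all constants below are illustrative assumptions. Averaging stabilizes the prediction, so its variance across training sets drops.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(2 * np.pi * x)     # assumed true function
n, x0, reps, n_boot = 30, 0.25, 1000, 50

def one_nn(X, Y, x):
    # 1-nearest-neighbour regression: return the response of the closest X
    return Y[np.argmin(np.abs(X - x))]

single, bagged = [], []
for _ in range(reps):
    X = rng.uniform(0, 1, n)
    Y = f(X) + rng.normal(0, 0.3, n)
    single.append(one_nn(X, Y, x0))
    # bagging: average the same unstable model over bootstrap resamples
    idx = rng.integers(0, n, size=(n_boot, n))
    bagged.append(np.mean([one_nn(X[i], Y[i], x0) for i in idx]))

print("variance of single 1-NN predictions:", np.var(single))
print("variance of bagged 1-NN predictions:", np.var(bagged))  # smaller
```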

Irreducible Error
• Origin: Random noise term ε
• Even if we knew the true function f(X), ε would still cause variability in Y
• Reason: ε is independent of X, so it cannot be predicted from the features

Trade-off
• Increasing model flexibility decreases bias but increases variance.
• Goal: choose a model complexity (λ, number of features, neighbors k, etc.) that
minimizes test error, not training error.
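
A sketch of this trade-off with polynomial degree as the complexity knob (the data-generating process and the candidate degrees are assumptions for illustration): training error keeps falling as the degree grows, while held-out error is U-shaped.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.sin(2 * np.pi * x)     # assumed true function

def sample(n):
    X = rng.uniform(0, 1, n)
    return X, f(X) + rng.normal(0, 0.3, n)

X_tr, Y_tr = sample(50)                 # training set
X_te, Y_te = sample(10_000)             # large held-out set ≈ test error

for degree in [1, 3, 5, 9, 15]:         # the complexity knob
    coeffs = np.polyfit(X_tr, Y_tr, degree)
    mse_tr = np.mean((Y_tr - np.polyval(coeffs, X_tr)) ** 2)
    mse_te = np.mean((Y_te - np.polyval(coeffs, X_te)) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}")
# Training MSE falls monotonically; test MSE bottoms out at an
# intermediate degree and then rises again.
```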


Training Error vs Test Error & Generalization Error
Training Error
• Computed on data used to fit the model.
• Typically underestimates true error.
• Flexible models can make training error nearly zero.

Test Error
• Error on previously unseen data.
• Used to estimate real-world performance.

Generalization Error
• True population-level predictive error.
• Not observable directly.
• Test error is an estimate of generalization error.
Training error is not reliable because the model is optimized to minimize it.
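
A maximally flexible model makes this concrete. In the sketch below (toy data-generating process assumed), 1-nearest-neighbour regression memorizes the training set: its training error is exactly zero, yet its test error stays well above the irreducible level.

```python
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sin(2 * np.pi * x)     # assumed true function

X_tr = rng.uniform(0, 1, 50)
Y_tr = f(X_tr) + rng.normal(0, 0.3, 50)
X_te = rng.uniform(0, 1, 10_000)
Y_te = f(X_te) + rng.normal(0, 0.3, 10_000)

def knn1(X_query):
    # 1-NN regression: copy the training response of the nearest X_tr
    nearest = np.argmin(np.abs(X_tr[:, None] - X_query[None, :]), axis=0)
    return Y_tr[nearest]

print("training MSE:", np.mean((Y_tr - knn1(X_tr)) ** 2))   # exactly 0
print("test MSE    :", np.mean((Y_te - knn1(X_te)) ** 2))   # well above 0
```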
