CAI 4104
Machine Learning Engineering
Midterm Exam Review (Questions & Answers)
2025
1. Which regularization technique adds a penalty proportional to
the absolute value of the coefficients in a model?
- A. L1 regularization (Lasso)
- B. L2 regularization (Ridge)
- C. Dropout
- D. Early stopping
ANS: A
Rationale: L1 regularization adds a penalty term based on the
sum of the absolute values of the coefficients, encouraging
sparsity.
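For reference, here is a minimal sketch of L1 regularization using scikit-learn's Lasso. The synthetic data and the alpha value are illustrative assumptions, not part of the exam material:

```python
# Minimal sketch: L1 (Lasso) regularization with scikit-learn.
# The synthetic dataset and alpha value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # 100 samples, 10 features
# Only the first two features actually matter.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = Lasso(alpha=0.1)  # alpha scales the L1 penalty on |coefficients|
model.fit(X, y)

# Most coefficients are driven exactly to zero -- the sparsity L1 encourages.
print(model.coef_)
```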
2. Which of the following methods is most suitable for handling
imbalanced datasets in supervised learning?
- A. Oversampling the minority class
- B. Removing outliers
- C. Applying PCA for dimensionality reduction
- D. Increasing regularization strength
ANS: A
Rationale: Oversampling the minority class (e.g., using
SMOTE) helps balance class distributions in imbalanced datasets.
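A minimal sketch of SMOTE oversampling with the imbalanced-learn package follows; the synthetic 90/10 class split is an illustrative assumption:

```python
# Minimal sketch: oversampling the minority class with SMOTE
# (imbalanced-learn). The synthetic dataset is an illustrative assumption.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print(Counter(y))  # roughly 900 majority vs. 100 minority samples

# SMOTE synthesizes new minority samples by interpolating between neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))  # class counts are now balanced
```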
3. What is the primary purpose of dropout in deep neural
networks?
- A. To reduce computation time
- B. To improve interpretability
- C. To prevent overfitting
- D. To enhance activation functions
ANS: C
Rationale: Dropout randomly deactivates neurons during
training to reduce overfitting and encourage generalization.
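A minimal sketch of dropout in a small PyTorch network is shown below; the layer sizes and dropout rate are illustrative assumptions:

```python
# Minimal sketch: dropout in a feed-forward PyTorch network.
# Layer sizes and the dropout rate are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # each hidden activation is zeroed with probability 0.5
    nn.Linear(64, 2),
)

model.train()  # dropout is active only in training mode
x = torch.randn(8, 20)
print(model(x).shape)  # torch.Size([8, 2])

model.eval()  # dropout is disabled at evaluation time (identity pass-through)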
4. Which of the following optimization algorithms is most likely
to escape saddle points in non-convex loss landscapes?
- A. Stochastic Gradient Descent (SGD)
- B. Momentum-based Gradient Descent
- C. RMSprop
- D. Adam
ANS: D
Rationale: Adam maintains per-parameter adaptive learning rates
from first- and second-moment estimates of the gradients, which
helps it keep moving through the flat, low-gradient regions around
saddle points where plain SGD tends to stall.
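A minimal sketch of this behavior on a toy non-convex loss with torch.optim.Adam follows; the function, starting point, and hyperparameters are illustrative assumptions:

```python
# Minimal sketch: Adam on a toy loss with a saddle point.
# The function f(x, y) = x^2 - y^2 has a saddle at the origin;
# the starting point and learning rate are illustrative assumptions.
import torch

params = torch.tensor([1e-3, 1e-3], requires_grad=True)  # start near the saddle
opt = torch.optim.Adam([params], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = params[0] ** 2 - params[1] ** 2
    loss.backward()
    opt.step()

# Adam's normalized per-parameter steps push y away from the flat saddle
# region even though the initial gradient there is tiny.
print(params.detach())
```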
5. Which metric is most appropriate for evaluating the
performance of a binary classification model on imbalanced data?
- A. Accuracy
- B. Precision-Recall AUC
- C. Mean Squared Error (MSE)
- D. R-squared
ANS: B
Rationale: Precision-Recall AUC summarizes precision and recall
across decision thresholds, so it reflects minority-class
performance and, unlike accuracy, is not inflated by the majority
class.
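A minimal sketch comparing accuracy with PR-AUC (average precision) on an imbalanced problem is shown below; the dataset, split, and model are illustrative assumptions:

```python
# Minimal sketch: accuracy vs. Precision-Recall AUC on imbalanced data.
# The synthetic 95/5 dataset and logistic regression model are
# illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Accuracy looks high simply because the majority class dominates;
# average precision (PR-AUC) tracks minority-class quality instead.
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("PR-AUC  :", average_precision_score(y_te, proba))
```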