Machine Learning
3.0 Credits
Objective Assessment Review (Qns & Ans)
2025
©2025
Question 1:
Which ensemble method trains base learners sequentially, where each
subsequent model focuses on correcting the errors of its predecessors?
- A. Bagging
- B. Boosting
- C. Stacking
- D. Voting
Correct ANS: B. Boosting
Rationale:
Boosting techniques (e.g., AdaBoost, Gradient Boosting) sequentially
build models where each new model attempts to correct the mistakes
made by the previous ones. This often results in a strong predictive
model that reduces bias.
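As a quick illustration (not part of the original question), scikit-learn's AdaBoostClassifier implements this sequential, error-correcting scheme; the synthetic dataset and parameter values below are arbitrary choices for the sketch:

```python
# Sketch: AdaBoost fits base learners one after another, re-weighting
# the training examples each round so later learners focus on the
# points the earlier ones misclassified.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 50 sequentially trained base learners (decision stumps by default)
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

Bagging, by contrast, would train the same 50 learners independently on bootstrap samples, with no round-to-round error correction.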
---
Question 2:
In support vector machines, which technique enables the algorithm to
operate in a high-dimensional feature space without explicitly computing
the transformation?
- A. Dimensionality reduction
- B. Kernel trick
- C. Regularization
- D. Cross-validation
Correct ANS: B. Kernel trick
Rationale:
The kernel trick allows SVMs to compute the inner products in a high-
dimensional (possibly infinite-dimensional) feature space using kernel
functions, without explicitly mapping the data into that space, thus
efficiently handling non-linear decision boundaries.
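To make this concrete (an illustrative sketch, not part of the original question), the concentric-circles dataset below is not linearly separable, yet an RBF-kernel SVM separates it without ever constructing the high-dimensional feature map:

```python
# Sketch: the kernel trick lets the RBF SVM work in an implicit
# high-dimensional space; only kernel evaluations k(x, x') are computed.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)  # no trick: stuck near chance
rbf = SVC(kernel="rbf").fit(X, y)        # kernel trick: implicit mapping
print(linear.score(X, y), rbf.score(X, y))
```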
---
Question 3:
For evaluating a binary classification model on an imbalanced dataset,
which metric is considered more informative than overall accuracy?
- A. F1-score
- B. Mean Absolute Error
- C. R-squared
- D. Root Mean Squared Error
Correct ANS: A. F1-score
Rationale:
The F1-score is the harmonic mean of precision and recall, emphasizing
the model’s performance on the minority class. This makes it a more
relevant metric than accuracy in imbalanced settings.
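A small worked example (illustrative, not from the original question) shows why: on a 95/5 imbalanced task, a degenerate model that always predicts the majority class gets 95% accuracy but an F1-score of zero on the minority class.

```python
# Sketch: accuracy rewards the "always predict negative" model;
# F1 (harmonic mean of precision and recall) exposes it.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # degenerate majority-class predictor

print(accuracy_score(y_true, y_pred))                 # 0.95
print(f1_score(y_true, y_pred, zero_division=0))      # 0.0
```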
---
Question 4:
Which optimization algorithm, popular in training deep neural networks,
adapts learning rates based on estimates of first and second moments of
the gradients?
- A. Momentum
- B. Adagrad
- C. Adam
- D. Stochastic Gradient Descent
Correct ANS: C. Adam
Rationale:
Adam (Adaptive Moment Estimation) efficiently adapts the learning rate
by incorporating both the mean and the uncentered variance of the
gradients, leading to faster convergence, especially in high-dimensional
parameter spaces.
---
Question 5:
Which regularization technique in neural networks randomly deactivates a
fraction of neurons during training to prevent overfitting?
- A. L1 regularization
- B. Batch normalization
- C. Data augmentation
- D. Dropout
Correct ANS: D. Dropout
Rationale:
Dropout is a widely used regularization method that randomly “drops
out” a fraction of neurons during training. This prevents the network
from becoming overly reliant on any single neuron, thereby reducing
overfitting.
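The mechanism can be sketched in NumPy (an illustrative "inverted dropout" implementation, not part of the original question): a random mask zeroes out a fraction p of the activations during training, and the survivors are rescaled by 1/(1-p) so the expected activation matches inference time, when nothing is dropped.

```python
# Sketch of inverted dropout on a layer's activations.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations  # inference: no units dropped, no rescaling
    mask = rng.random(activations.shape) >= p   # keep with prob. 1 - p
    return activations * mask / (1 - p)         # rescale the survivors

a = np.ones((4, 8))
out = dropout(a, p=0.5)
print(out)  # roughly half the entries are 0.0, the rest 2.0
```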
---
Question 6:
Which dimensionality reduction algorithm transforms data into a new
coordinate system such that the greatest variance comes to lie on the
first coordinate?
- A. t-SNE
- B. Linear Discriminant Analysis (LDA)
- C. Principal Component Analysis (PCA)
- D. Independent Component Analysis (ICA)
Correct ANS: C. Principal Component Analysis (PCA)
Rationale:
PCA transforms the data into orthogonal components ordered by the
amount of variance they capture. It is widely used to reduce
dimensionality while retaining most of the information in the data.
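A short illustration (synthetic data, not from the original question): for strongly correlated 2-D data, PCA's first component captures nearly all the variance, and the explained-variance ratios come out in decreasing order.

```python
# Sketch: PCA on correlated data; the first principal component
# aligns with the direction of greatest variance (x ~ y here).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([x, x + 0.1 * rng.normal(size=500)])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # first component dominates
```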
---
Question 7: