AdaBoost Algorithm Questions and Answers

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Ensemble Learning – AdaBoost”.

1. AdaBoost is an algorithm that has access to a weak learner and finds a hypothesis with a low empirical risk.
a) True
b) False
View Answer

Answer: a
Explanation: AdaBoost (Adaptive Boosting) is an algorithm that has access to a weak learner and finds a hypothesis with a low empirical risk. Each iteration of AdaBoost involves O(m) operations as well as a single call to the weak learner. Therefore, if the weak learner can be implemented efficiently, then the total training process will be efficient.

2. Which of the following statements is not true about AdaBoost?
a) The boosting process proceeds in a sequence of consecutive rounds
b) In each round t, the weak learner is assumed to return a weak hypothesis ht
c) The output of AdaBoost algorithm is a weak classifier
d) It assigns a weight to the weak hypothesis that is inversely proportional to the error of the weak hypothesis
View Answer

Answer: c
Explanation: The output of the AdaBoost algorithm is not a weak classifier but a strong classifier based on a weighted sum of all the weak hypotheses. The boosting process proceeds in a sequence of consecutive rounds: in each round t, the weak learner is assumed to return a weak hypothesis ht, and AdaBoost assigns ht a weight that is inversely proportional to the error of ht.

3. AdaBoost runs in polynomial time.
a) False
b) True
View Answer

Answer: b
Explanation: AdaBoost runs in polynomial time and does not require defining a large number of hyperparameters. Each iteration of AdaBoost involves O(m) operations as well as a single call to the weak learner, so the overall running time is polynomial in m.

4. The basic functioning of the AdaBoost algorithm is to maintain a weight distribution over the data points.
a) True
b) False
View Answer

Answer: a
Explanation: The basic functioning of the algorithm is to maintain a weight distribution d over the data points. A weak learner f(k) is trained on this weighted data, and the (weighted) error rate of f(k) is used to determine the adaptive parameter α, which controls how “important” the weak learner f(k) is.
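As an illustration, here is a minimal Python sketch of this loop, assuming numpy and scikit-learn are available and using a depth-1 decision tree (a stump) as the weak learner f(k); the function and variable names are illustrative, not part of the algorithm's specification.

```python
# Minimal sketch of the AdaBoost weight-maintenance loop described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_sketch(X, y, n_rounds=10):
    """Train AdaBoost on labels y in {-1, +1}; returns weak learners and their weights."""
    y = np.asarray(y)
    m = len(y)
    d = np.full(m, 1.0 / m)                    # start with a uniform weight distribution
    learners, alphas = [], []
    for t in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=d)       # weak learner f(k) trained on weighted data
        pred = stump.predict(X)
        err = np.clip(np.sum(d * (pred != y)), 1e-10, 1 - 1e-10)  # weighted error rate
        alpha = 0.5 * np.log((1 - err) / err)  # adaptive parameter: importance of f(k)
        d *= np.exp(-alpha * y * pred)         # increase weights of misclassified points
        d /= d.sum()                           # renormalise so d stays a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas
```

The final strong classifier would then predict sign(Σt αt ht(x)), i.e. a weighted vote over all the weak hypotheses.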

5. The success of AdaBoost is due to its property of increasing the margin.
a) False
b) True
View Answer

Answer: b
Explanation: The success of AdaBoost is largely attributed to its property of increasing the margin. In practice, running boosting for many rounds usually does not cause overfitting, and margin theory offers an explanation for this. The margins can be thought of as a measure of how confident a classifier is about how it labels each point, and one would ideally like to produce a classifier whose margins are as large as possible.

6. Which of the following statements is not true about AdaBoost?
a) It is generally more prone to overfitting
b) It improves classification accuracy
c) It is particularly prone to overfitting on noisy datasets
d) Complexity of the weak learner is important in AdaBoost
View Answer

Answer: a
Explanation: AdaBoost is generally less, not more, prone to overfitting, although it is particularly prone to overfitting on noisy datasets. If very simple weak learners are used, the algorithm is much less prone to overfitting and it improves classification accuracy. So the complexity of the weak learner is important in AdaBoost.

7. Which of the following statements is true about the working of AdaBoost?
a) It starts with equal weights and re-weighting will be done
b) It starts with unequal weights and re-weighting will be done
c) It starts with unequal weights and random sampling
d) It starts with equal weights and random sampling
View Answer

Answer: a
Explanation: AdaBoost starts with equal weights and then re-weights the data. In the first step each sample has an identical weight indicating how important it is for the classification; after every round the weights of misclassified observations are increased so that the next weak learner concentrates on them. Random sampling with replacement is characteristic of bagging, not of boosting.

8. AdaBoost is a parallel ensemble method.
a) True
b) False
View Answer

Answer: b
Explanation: AdaBoost is not a parallel ensemble method but a sequential one: the base learners are generated one after another, and the boosting process proceeds in a sequence of consecutive rounds.

9. Given three training instances with weights 0.5, 0.2, and 0.04. The predicted values are 1, 1, and -1, while the actual outputs are -1, 1, and 1, so the per-instance error terror(i) (1 if the instance is misclassified, 0 otherwise) is 1, 0, and 1. What is the misclassification rate?
a) 0.71
b) 0.65
c) 0.73
d) 0.5
View Answer

Answer: c
Explanation: Misclassification rate (error) = sum(w(i) * terror(i)) / sum(w)
= (0.5 * 1 + 0.2 * 0 + 0.04 * 1) / (0.5 + 0.2 + 0.04)
= (0.5 + 0 + 0.04) / 0.74
= 0.54 / 0.74
≈ 0.73
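The same arithmetic can be checked with a few lines of Python (the variable names are illustrative):

```python
# Weighted misclassification rate: sum(w(i) * terror(i)) / sum(w)
w = [0.5, 0.2, 0.04]    # instance weights
terror = [1, 0, 1]      # per-instance error: 1 if misclassified, 0 otherwise
error = sum(wi * ti for wi, ti in zip(w, terror)) / sum(w)
print(round(error, 2))  # 0.73
```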

10. AdaBoost is sensitive to outliers.
a) False
b) True
View Answer

Answer: b
Explanation: AdaBoost is sensitive to outliers and label noise, and outliers tend to get misclassified. As the number of iterations increases, the weights corresponding to outlier points can become very large, and the subsequent classifiers keep trying to classify these outlier points correctly.

11. Consider two cases in which a classifier has error 0.4 and 0.5 respectively. What will be the classifier weights in these two cases?
a) 0.401, 0.5
b) 0.903, 0.1
c) 0.304, 0.6
d) 0.205, 0
View Answer

Answer: d
Explanation: The weight of the classifier is calculated as
α = (1 / 2) * ln((1 - error) / error)
For error = 0.4:
α = 0.5 * ln(0.6 / 0.4) = 0.5 * ln(1.5) ≈ 0.5 * 0.41 = 0.205
For error = 0.5:
α = 0.5 * ln(0.5 / 0.5) = 0.5 * ln(1) = 0.5 * 0 = 0
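A quick Python check of this formula (the helper name is illustrative; the exact value for error = 0.4 is about 0.203, which the rounding of ln(1.5) to 0.41 above turns into 0.205):

```python
# Classifier weight: alpha = 0.5 * ln((1 - error) / error)
import math

def classifier_weight(error):
    return 0.5 * math.log((1 - error) / error)

print(round(classifier_weight(0.4), 3))  # 0.203
print(classifier_weight(0.5))            # 0.0
```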

12. The classifier weight will be less than zero if the error is greater than 0.5.
a) True
b) False
View Answer

Answer: a
Explanation: The weight of the classifier is calculated as α = (1 / 2) * ln((1 - error) / error). Consider a classifier with error = 0.8. Then the weight is
α = 0.5 * ln((1 - 0.8) / 0.8) = 0.5 * ln(0.25) ≈ 0.5 * (-1.39) = -0.695
So when the error (0.8) is greater than 0.5, the classifier weight is less than zero (-0.695); at error = 0.5 exactly the weight is zero, as computed in the previous question.
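A short Python sweep illustrates how the sign of the weight flips as the error crosses 0.5 (exact values; the -0.695 above comes from rounding ln(0.25) to -1.39):

```python
# Sign of the classifier weight around error = 0.5
import math

for error in (0.2, 0.5, 0.8):
    alpha = 0.5 * math.log((1 - error) / error)
    print(error, round(alpha, 3))  # 0.2 -> 0.693, 0.5 -> 0.0, 0.8 -> -0.693
```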
