Machine Learning Questions and Answers – Boosting Weak Learnability

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Boosting: Weak Learnability”.

1. Boosting is a machine learning ensemble algorithm which converts weak learners to strong ones.
a) True
b) False

Answer: a
Explanation: Boosting is a machine learning ensemble meta-algorithm which converts weak learners to strong ones. A weak learner is defined to be a classifier which is only slightly correlated with the true classification and a strong learner is a classifier that is arbitrarily well correlated with the true classification.

2. Which of the following statements is not true about boosting?
a) It uses the mechanism of increasing the weights of misclassified data in preceding classifiers
b) It mainly increases the bias and the variance
c) It tries to generate complementary base-learners by training the next learner on the mistakes of the previous learners
d) It is a technique for solving two-class classification problems

Answer: b
Explanation: Boosting does not increase the bias and the variance; it mainly reduces the bias and the variance. It is a technique for solving two-class classification problems, and it tries to generate complementary base-learners by training the next learner (with increased weights) on the mistakes (misclassified data) of the previous learners.
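
As a rough illustration of the re-weighting mechanism described above, the following AdaBoost-style loop (toy data and all parameters are assumptions) increases the weights of misclassified examples after every round and combines the learners into a final weighted vote.

```python
# Minimal AdaBoost-style weight update (binary labels in {-1, +1}); the data
# and parameters here are assumptions made purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)           # toy two-class problem

m, T = len(y), 10
w = np.full(m, 1.0 / m)                               # start with uniform weights
learners, alphas = [], []

for t in range(T):
    h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = h.predict(X)
    err = w[pred != y].sum()                          # weighted training error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    w *= np.exp(-alpha * y * pred)                    # up-weight misclassified examples
    w /= w.sum()                                      # keep a (non-uniform) distribution
    learners.append(h)
    alphas.append(alpha)

# Final classifier: sign of the weighted vote of all weak learners.
H = np.sign(sum(a * h.predict(X) for a, h in zip(alphas, learners)))
print("training accuracy of combined classifier:", (H == y).mean())
```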

3. Boosting is a heterogeneous ensemble technique.
a) True
b) False

Answer: b
Explanation: Boosting is not a heterogeneous ensemble but a homogeneous one. A homogeneous ensemble consists of members built with a single type of base learning algorithm, whereas a heterogeneous ensemble consists of members built with different base learning algorithms.
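
A hedged sketch of the distinction, assuming a synthetic dataset and scikit-learn: the boosted ensemble is homogeneous (every member is the same kind of learner, a decision stump), while a voting ensemble mixing different base algorithms is heterogeneous.

```python
# Illustrative only: homogeneous ensemble (one base algorithm, repeated) versus
# heterogeneous ensemble (different base algorithms combined by voting).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Homogeneous (boosting): every member is a depth-1 decision tree.
homogeneous = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# Heterogeneous: members use different base learning algorithms.
heterogeneous = VotingClassifier([
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
]).fit(X, y)

print(homogeneous.score(X, y), heterogeneous.score(X, y))
```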

4. The issues that boosting addresses are the bias-complexity tradeoff and computational complexity of learning.
a) True
b) False

Answer: a
Explanation: The more expressive the hypothesis class the learner searches over, the smaller the approximation error is, but the larger the estimation error becomes. Moreover, for many concept classes the task of finding an empirical risk minimization (ERM) hypothesis may be computationally infeasible.
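
For reference, the tradeoff can be restated in standard statistical-learning notation (the symbols are assumptions, not defined in this quiz): the true risk of the learned hypothesis h_S over hypothesis class H decomposes as

```latex
% Standard error decomposition (notation assumed): the risk of the ERM
% hypothesis h_S splits into approximation error and estimation error.
\[
  L_{\mathcal{D}}(h_S) =
  \underbrace{\min_{h \in \mathcal{H}} L_{\mathcal{D}}(h)}_{\text{approximation error}}
  \; + \;
  \underbrace{L_{\mathcal{D}}(h_S) - \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h)}_{\text{estimation error}}
\]
```

A richer class H shrinks the first term but typically inflates the second, which is the bias-complexity tradeoff the question refers to.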

5. Which of the following statements is not true about weak learners?
a) They can be used as the building blocks for designing more complex models by combining them
b) Boosting learns the weak learners sequentially in a very adaptive way
c) They are combined using a deterministic strategy
d) They have low bias

Answer: d
Explanation: Weak learners do not have low bias; they have high bias. Boosting primarily reduces this bias by combining the weak learners with a deterministic strategy, and it learns the weak learners sequentially in a very adaptive way.

6. Which of the following is not related to boosting?
a) Non uniform distribution
b) Re-weighting
c) Re-sampling
d) Sequential style

Answer: c
Explanation: Re-sampling is done with the bagging technique. Boosting uses a non-uniform distribution: during training the distribution over the examples is modified so that difficult samples get higher probability. It follows a sequential style, generating complementary base-learners by re-weighting the training examples.
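
A small sketch of the data-handling difference, with purely illustrative numbers: bagging draws a bootstrap re-sample from a uniform distribution, while boosting keeps every example and re-weights it non-uniformly so that difficult samples get more probability mass.

```python
# Hedged sketch: bagging re-samples uniformly with replacement; boosting keeps
# all examples but re-weights them with a non-uniform distribution.
import numpy as np

rng = np.random.default_rng(0)
m = 10
misclassified = np.zeros(m, dtype=bool)
misclassified[[2, 7]] = True                        # assume examples 2 and 7 were hard

# Bagging: bootstrap re-sampling from a uniform distribution.
bootstrap_indices = rng.choice(m, size=m, replace=True)

# Boosting: re-weighting; the distribution over examples becomes non-uniform.
w = np.full(m, 1.0 / m)
w[misclassified] *= 3.0                             # factor chosen only for illustration
w /= w.sum()

print("bootstrap sample:", bootstrap_indices)
print("boosting weights:", np.round(w, 3))
```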

7. In ensemble method if the classifier is unstable, then we need to apply boosting.
a) True
b) False

Answer: b
Explanation: If the classifier is unstable, which means it has high variance, then we do not apply boosting; we use bagging instead. If the classifier is stable and simple (high bias), then we apply boosting.
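
The rule of thumb above can be written as a short hedged sketch (synthetic data, assumed parameters): an unstable, high-variance base classifier is wrapped in bagging, while a stable, high-bias one is wrapped in boosting.

```python
# Illustrative pairing only: bagging for an unstable (high-variance) base
# learner, boosting for a stable/simple (high-bias) base learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Unstable base learner (fully grown tree, high variance) -> bagging.
bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                           random_state=0).fit(X, y)

# Stable, simple base learner (depth-1 stump, high bias) -> boosting.
boosted = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
```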

8. The original boosting method requires a very large training sample.
a) True
b) False

Answer: a
Explanation: The disadvantage of the original boosting method is that it requires a very large training sample. The sample has to be divided into three subsets, and furthermore, the second and third classifiers are trained only on subsets based on the errors of the previous classifiers.
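
A loose sketch of that three-classifier scheme follows; the exact construction of the second and third training sets is only approximated here, and the data and filtering rules are simplified assumptions.

```python
# Simplified sketch of the original three-classifier boosting idea: split a
# large sample into three parts and train later learners on earlier mistakes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 5))                       # "very large" sample, assumed
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
S1, S2, S3 = np.array_split(np.arange(len(y)), 3)    # divide the sample into three

h1 = DecisionTreeClassifier(max_depth=1).fit(X[S1], y[S1])

# Second learner: trained on S2 examples, emphasising those h1 gets wrong.
wrong2 = h1.predict(X[S2]) != y[S2]
idx2 = np.concatenate([S2[wrong2], S2[~wrong2][: wrong2.sum()]])
h2 = DecisionTreeClassifier(max_depth=1).fit(X[idx2], y[idx2])

# Third learner: trained on S3 examples where h1 and h2 disagree.
disagree = h1.predict(X[S3]) != h2.predict(X[S3])
idx3 = S3[disagree] if disagree.any() else S3
h3 = DecisionTreeClassifier(max_depth=1).fit(X[idx3], y[idx3])

# Final prediction: majority vote of the three classifiers.
votes = h1.predict(X) + h2.predict(X) + h3.predict(X)
final = (votes >= 2).astype(int)
print("training accuracy:", (final == y).mean())
```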

9. Which of the following is not true about boosting?
a) It considers the weightage of the higher accuracy sample and lower accuracy sample
b) It helps when we are dealing with bias or under-fitting in the data set
c) Net error is evaluated in each learning step
d) It always considers the overfitting or variance issues in the data set

Answer: d
Explanation: One of the main disadvantages of boosting is that it often ignores overfitting or variance issues in the data set; it mainly reduces the bias and, to a lesser extent, the variance. The other three options are advantages of boosting.

10. Boosting can be used for spam filtering.
a) False
b) True

Answer: b
Explanation: Boosting can be used for spam filtering, where the first classifier distinguishes between emails from contacts and all others. The subsequent classifiers are used to find examples wrongly classified as spam and to find words/phrases that appear in spam. Finally, the learners are combined into a final classifier that predicts spam accurately.

11. Consider there are 7 weak learners, out of which 4 learners are voted as FAKE for a social media account and 3 learners are voted as REAL. What will be the final prediction for the account if we are using a majority voting method?
a) FAKE
b) REAL
c) Undefined
d) Error

Answer: a
Explanation: As we are using a majority voting method, the class voted by the larger number of weak learners is taken as the final prediction. Here 4 learners out of 7 voted FAKE, which outnumbers the 3 REAL votes, so the final prediction is FAKE.
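
The vote count can be reproduced with a few lines of illustrative code:

```python
# Majority voting over the 7 weak learners described in the question.
from collections import Counter

votes = ["FAKE", "FAKE", "FAKE", "FAKE", "REAL", "REAL", "REAL"]
prediction, count = Counter(votes).most_common(1)[0]
print(prediction, count)        # FAKE 4 -> the majority-vote prediction is FAKE
```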

12. Assume that we are training a boosting classifier using decision stumps on the given dataset. Then which of the given examples will have their weights increased at the end of the first iteration?

a) Circle
b) Square
c) Both
d) No increment in weight

Answer: b
Explanation: The square example will have its weight increased at the end of the first iteration. A decision stump is a 1-level decision tree, i.e. a test based on a single feature. The decision stump with the least error in the first iteration is constant over the whole domain, so it predicts incorrectly only on the square example, whose weight is therefore increased.
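
For reference, a decision stump can be sketched as a single threshold test on one feature; the feature index and threshold below are arbitrary illustrations.

```python
# A decision stump is a one-level decision tree: one threshold test on one feature.
import numpy as np

def stump_predict(X, feature=0, threshold=0.0):
    """Predict +1 if the chosen feature exceeds the threshold, else -1."""
    return np.where(X[:, feature] > threshold, 1, -1)

X = np.array([[0.2, 1.0], [-0.5, 2.0], [1.3, -0.7]])
print(stump_predict(X))         # [ 1 -1  1]
```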

13. Assume that we are training a boosting classifier using decision stumps on the given dataset. At the least how much iteration does it need to achieve zero training error?

a) 1
b) 2
c) 3
d) 0

Answer: c
Explanation: It will require at least three iterations to achieve zero training error. The first iteration misclassifies the square example, the second iteration misclassifies two square examples, and the third iteration misclassifies the remaining two square examples; combining the three stumps then yields zero training error. So it requires at least three iterations.

Sanfoundry Global Education & Learning Series – Machine Learning.

To practice all areas of Machine Learning, here is the complete set of 1000+ Multiple Choice Questions and Answers.
