Ensemble Learning Questions and Answers

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Ensemble Learning”.

1. Which of the following statements is false about Ensemble voting?
a) It takes a linear combination of the learners
b) It takes a non-linear combination of the learners
c) It is the simplest way to combine multiple classifiers
d) It is also known as ensembles and linear opinion pools

Answer: b
Explanation: Voting does not take a non-linear combination of the learners. It is the simplest way to combine multiple classifiers and corresponds to taking a linear combination of the learners: yi = ∑j wj dji, where wj ≥ 0, ∑j wj = 1, wj is the weight of learner j, and dji is the vote of learner j for class Ci. This scheme is also known as an ensemble or a linear opinion pool.
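
As a minimal sketch (with made-up weights and votes), the linear combination above can be computed directly in Python with NumPy:

import numpy as np

# d[j][i]: vote (e.g. posterior probability) of learner j for class Ci
d = np.array([[0.7, 0.3],    # learner 1
              [0.6, 0.4],    # learner 2
              [0.2, 0.8]])   # learner 3

# non-negative weights summing to 1; with equal weights wj = 1/3 this
# reduces to the simple-average voting of question 2
w = np.array([0.5, 0.3, 0.2])

y = w @ d                    # y[i] = sum over j of w[j] * d[j][i]
print(y)                     # combined support per class: [0.57 0.43]
print("predicted class:", y.argmax())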

2. In the simplest case of voting, all the learners are given equal weight.
a) True
b) False

Answer: a
Explanation: In the simplest case, all learners are given equal weight, and simple voting then corresponds to taking an average of the learners' outputs. There are also other combination rules; taking a weighted sum is only one of the possibilities.

3. With the product rule, if one learner has an output of 0, the overall output goes to zero.
a) True
b) False

Answer: a
Explanation: With the product rule (yi = ∏j dji, where dji is the vote of learner j for class Ci), each learner has veto power: regardless of the other learners, if one learner outputs 0, the overall output goes to 0.
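
A minimal sketch of the veto effect, with illustrative supports (not taken from the question):

import numpy as np

# rows = learners, columns = classes; learner 3 gives class 0 zero support
d = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.0, 1.0]])

y = d.prod(axis=0)   # product rule: yi = product over j of d[j][i]
print(y)             # [0.   0.02] -> class 0 is vetoed despite strong support elsewhere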

4. In plurality voting the winner is the class with maximum number of votes.
a) True
b) False

Answer: a
Explanation: In plurality voting, the class with the maximum number of votes is the winner: an unlabelled instance is assigned to the class that obtains the highest number of votes, even if that is less than half of the total. Plurality voting is therefore commonly used for multi-class problems.

5. In majority voting, the winning class gets more than half of the total votes.
a) True
b) False

Answer: a
Explanation: It is majority voting when there are two classes and the winning class gets more than half of the votes. Every model makes a prediction (a vote) for each test instance, and the final prediction is the one that receives more than half of the votes.

6. Which of the following statements is true about the combination rules?
a) Maximum rule is pessimistic
b) Sum rule takes the weighted sum of vote of each learner for each class
c) Median rule is more robust to outliers
d) Minimum rule is optimistic

Answer: c
Explanation: The median rule is more robust to outliers: if you throw away the largest and smallest predictions, the median does not change. The sum rule takes the plain sum (not the weighted sum) of each learner's vote for each class, the maximum rule is optimistic, and the minimum rule is pessimistic.
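
The four rules can be compared on made-up per-class supports (rows = learners, columns = classes); note how the outlier in the third row pulls the sum and maximum but not the median:

import numpy as np

d = np.array([[0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3],
              [0.9, 0.1, 0.0]])   # learner 3 is an outlier for class 0

print("sum:   ", d.sum(axis=0))        # sum rule
print("median:", np.median(d, axis=0)) # median rule, robust to the outlier
print("max:   ", d.max(axis=0))        # maximum rule (optimistic)
print("min:   ", d.min(axis=0))        # minimum rule (pessimistic)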

7. Hard voting is where the final prediction is selected from an ensemble's predictions using a simple majority vote.
a) True
b) False

Answer: a
Explanation: In hard voting, every individual classifier votes for a class, and the majority class wins: the ensemble simply aggregates the predictions of each classifier and predicts the class that gets the most votes.
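
As a short illustration, here is a sketch using scikit-learn's VotingClassifier with voting="hard" on the Iris data; the choice of base classifiers is arbitrary:

from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# each base classifier votes for a class; the majority class wins
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier())],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))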

8. Borda count takes the rankings of the class supports into consideration, unlike voting.
a) True
b) False

Answer: a
Explanation: Borda count rank-orders the classifier outputs: the classes are ranked according to the support they receive from each classifier. Voting, by contrast, considers only the support of the winning class and ignores the support that the non-winning classes may receive.
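
A minimal sketch of the Borda count with illustrative rankings: each classifier ranks the K classes, a class earns K - 1 points for first place down to 0 for last, and the class with the most points wins:

K = 3
# rankings[j] lists the classes in order of preference for classifier j
rankings = [[1, 0, 2],
            [0, 1, 2],
            [1, 2, 0]]

points = [0] * K
for ranking in rankings:
    for position, cls in enumerate(ranking):
        points[cls] += K - 1 - position   # first place: K-1 points, last: 0
print(points)                                 # [3, 5, 1]
print("winner:", points.index(max(points)))   # class 1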

9. Which of the following is a solution for the problem where classifiers erroneously give unusually low or high support to a particular class?
a) Maximum rule
b) Minimum rule
c) Product rule
d) Trimmed mean rule

Answer: d
Explanation: The trimmed mean rule can be used to avoid the damage done by an unusual vote from a classifier. It discards the highest and lowest supports before calculating the mean, so the mean is computed on the remaining supports and the extreme values are avoided.
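
A minimal sketch with illustrative supports, where the fifth classifier gives class 0 an unusually high support:

import numpy as np

# rows = 5 classifiers, columns = 2 classes
d = np.array([[0.4, 0.6],
              [0.5, 0.5],
              [0.4, 0.6],
              [0.3, 0.7],
              [1.0, 0.0]])   # the unusual vote

trimmed = np.sort(d, axis=0)[1:-1]  # discard the min and max support per class
print(trimmed.mean(axis=0))         # mean of the remaining supports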

10. The weighted average rule combines the mean and the weighted majority voting rules.
a) True
b) False

Answer: a
Explanation: The weighted average rule combines the mean rule and the weighted majority voting rule: the member predictions are weighted as in weighted majority voting, and the ensemble prediction is calculated as the weighted average of those predictions.

11. Assume we are combining three classifiers that classify a training sample as given in the table. Then what is the class of the sample using majority voting?

Classifier Class label
C1 0
C2 0
C3 1

a) 0
b) 1
c) 2
d) New class

Answer: a
Explanation: In majority voting the class label (Y) is predicted as Y = mode {P1, P2, …, Pn}, where P1, P2, …, Pn are the predictions of the n classifiers being combined. Here P1 = 0, P2 = 0, P3 = 1, and Y can be calculated as,
Y = mode {P1, P2, P3}
= mode {0, 0, 1}
= 0
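
The same calculation in Python, using the question's predictions:

from statistics import mode

predictions = [0, 0, 1]   # P1, P2, P3 from the table
print(mode(predictions))  # 0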

12. Assume we are combining eight classifiers that classify a training sample as given in the table. Then what is the class of the sample using simple majority voting?

Classifier Class label
C1 0
C2 0
C3 1
C4 0
C5 2
C6 3
C7 0
C8 0

a) 1
b) 2
c) 0
d) 3

Answer: c
Explanation: Majority voting has three flavours; in one of them the ensemble decision is the class predicted by more than half of the classifiers, which is known as simple majority voting. Here the total number of classifiers is 8, and 5 of them predict class label 0. Since 5 > 8 / 2 = 4, class 0 has a simple majority, and the class label of the sample is 0.
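
The same check in Python, using the question's predictions:

from collections import Counter

predictions = [0, 0, 1, 0, 2, 3, 0, 0]   # C1..C8 from the table
winner, votes = Counter(predictions).most_common(1)[0]
print(winner, votes > len(predictions) / 2)   # 0 True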

13. Assume we are combining three classifiers whose class probabilities for a training sample are given in the table, and that equal weights are assigned to all classifiers (w1 = 1, w2 = 1, w3 = 1). What is the class of the sample using weighted majority voting?

              Class label 0   Class label 1   Class label 2
Classifier 1  0.3             0.5             0.2
Classifier 2  0.4             0.3             0.3
Classifier 3  0.2             0.4             0.4

a) Class 0
b) Class 1
c) Class 2
d) New class

Answer: b
Explanation: The table gives the probabilities assigned to each class label by the three classifiers, with equal weights for all classifiers (w1 = w2 = w3 = 1). Then we have,

              Class label 0               Class label 1               Class label 2
Classifier 1  w1 * 0.3 = 0.3              w1 * 0.5 = 0.5              w1 * 0.2 = 0.2
Classifier 2  w2 * 0.4 = 0.4              w2 * 0.3 = 0.3              w2 * 0.3 = 0.3
Classifier 3  w3 * 0.2 = 0.2              w3 * 0.4 = 0.4              w3 * 0.4 = 0.4
Weighted avg  (0.3 + 0.4 + 0.2)/3 = 0.3   (0.5 + 0.3 + 0.4)/3 = 0.4   (0.2 + 0.3 + 0.4)/3 = 0.3

From the table above, class 1 has the highest weighted average probability, so we classify the sample as class 1.
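
The same calculation in NumPy, reproducing the table above:

import numpy as np

p = np.array([[0.3, 0.5, 0.2],    # classifier 1
              [0.4, 0.3, 0.3],    # classifier 2
              [0.2, 0.4, 0.4]])   # classifier 3
w = np.array([1.0, 1.0, 1.0])     # equal weights

scores = (w[:, None] * p).sum(axis=0) / len(w)
print(scores)                               # [0.3 0.4 0.3]
print("predicted class:", scores.argmax())  # class 1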
