# Machine Learning Questions and Answers – Candidate Elimination Algorithm

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Version Spaces – Candidate Elimination Algorithm”.

1. The algorithm is trying to find a suitable day for swimming. What is the most general hypothesis?
a) A rainy day is a positive example
b) A sunny day is a positive example
c) No day is a positive example
d) Every day is a positive example

Answer: d
Explanation: The most general hypothesis must accept every data instance. Here, the hypothesis stating that every day is a positive example classifies all specific days as positive.

2. Candidate-Elimination algorithm can be described by ____________
a) just a set of candidate hypotheses
b) depends on the dataset
c) set of instances, set of candidate hypotheses
d) just a set of instances

Answer: c
Explanation: The algorithm requires both a set of instances and a set of candidate hypotheses. The hypotheses are applied to the training data, and the Candidate-Elimination algorithm outputs the hypotheses consistent with it.

3. How is the version space represented?
a) Least general members
b) Most general members
c) Most general and least general members
d) Arbitrary members chosen from the hypothesis space

Answer: c
Explanation: The algorithm starts with the most general and the most specific (least general) members. It then specializes the general members and generalizes the specific members based on the training examples.
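The two boundary sets described above can be sketched in Python. This is an illustrative sketch, not the full algorithm; the names (`S`, `G`, `matches`) and the convention of using the strings `"phi"` and `"?"` are my own choices.

```python
# Illustrative sketch: boundary sets of a version space over four attributes.
# '?' matches any value; 'phi' matches nothing.
N_ATTRS = 4

S = [("phi",) * N_ATTRS]   # maximally specific boundary: covers no example
G = [("?",) * N_ATTRS]     # maximally general boundary: covers every example

def matches(hypothesis, example):
    """A hypothesis covers an example iff each attribute is '?' or equal.
    'phi' never equals a real attribute value, so it covers nothing."""
    return all(h == "?" or h == x for h, x in zip(hypothesis, example))
```

The version space is then represented compactly by S and G alone: every hypothesis between the two boundaries is implicitly included.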

4. Let G be the set of maximally general hypotheses. While iterating through the dataset, when is it changed for the first time?
a) Negative example is encountered for the first time
b) Positive example is encountered for the first time
c) First example encountered, irrespective of whether it is positive or negative
d) S, the set of maximally specific hypotheses, is changed

Answer: a
Explanation: The most general hypothesis classifies every example as positive, so G first changes when the first negative example is encountered. G is then minimally specialized: each specialization constrains one attribute to a value different from the corresponding value in the negative example.
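The minimal specialization step described above can be sketched as follows. This is a simplified illustration assuming all attribute domains are known in advance; the function name `specialize` and the `attr_values` parameter are my own.

```python
def specialize(g, negative, attr_values):
    """Minimally specialize hypothesis g so it no longer covers `negative`.
    Each specialization replaces one '?' with a concrete value that differs
    from the negative example's value at that position. `attr_values`
    (assumed known in advance) maps attribute index -> possible values."""
    out = []
    for i, gv in enumerate(g):
        if gv == "?":
            for v in attr_values[i]:
                if v != negative[i]:
                    out.append(g[:i] + (v,) + g[i + 1:])
    return out
```

With two-valued attributes, specializing the fully general hypothesis against the negative example <rainy, cold, normal, change> yields one hypothesis per attribute, each pinned to the opposite value.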

5. Let S be the set of maximally specific hypotheses. While iterating through the dataset, when is it changed for the first time?
a) Negative example is encountered for the first time
b) Positive example is encountered for the first time
c) First example encountered, irrespective of whether it is positive or negative
d) G, the set of maximally general hypotheses, is changed

Answer: b
Explanation: The most specific hypothesis classifies no example as positive. It therefore first changes when the first positive example is encountered, at which point it takes on the attribute values of that positive example.

6. S = <sunny, warm, high, same>. Training data = <sunny, warm, normal, same> => Yes (positive example). How will S be represented after encountering this training data?
a) <sunny, warm, high, same>
b) <phi, phi, phi, phi>
c) <sunny, warm, ?, same>
d) <sunny, warm, normal, same>

Answer: c
Explanation: Initially S states that an example is positive only if the conditions are sunny, warm, high and same. The new positive example has normal instead of high, yet is positive, so that attribute is too specific and must be generalized to ?.
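The generalization step in this question can be sketched in Python. This is a minimal illustration under my own conventions (`"phi"` for the empty hypothesis, `"?"` for a wildcard); the function name `generalize` is hypothetical.

```python
def generalize(s, positive):
    """Minimally generalize hypothesis s so it covers `positive`.
    The very first positive example replaces phi outright; after that, any
    attribute disagreeing with the example is relaxed to '?'."""
    if "phi" in s:
        return tuple(positive)
    return tuple(sv if sv == pv else "?" for sv, pv in zip(s, positive))
```

Applied to S = <sunny, warm, high, same> and the positive example <sunny, warm, normal, same>, this yields <sunny, warm, ?, same>, matching option c.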

7. S = <phi, phi, phi, phi>. Training data = <rainy, cold, normal, change> => No (negative example). How will S be represented after encountering this training data?
a) <phi, phi, phi, phi>
b) <sunny, warm, high, same>
c) <rainy, cold, normal, change>
d) <?, ?, ?, ?>

Answer: a
Explanation: S is still phi, which means the learner has not yet encountered a positive example. S is unchanged by negative examples; it changes only when a positive example arrives.

8. G = <?, ?, ?, ?>. Training data = <sunny, warm, normal, same> => Yes (positive example). How will G be represented after encountering this training data?
a) <sunny, warm, normal, same>
b) <phi, phi, phi, phi>
c) <rainy, cold, normal, change>
d) <?, ?, ?, ?>

Answer: d
Explanation: G is still <?, ?, ?, ?>, which means the learner has not yet encountered a negative example. G is unchanged by positive examples; it changes only when a negative example arrives.

9. G = (<sunny, ?, ?, ?> ; <?, warm, ?, ?> ; <?, ?, high, ?>). Training data = <sunny, warm, normal, same> => Yes (positive example). How will G be represented after encountering this training data?
a) <phi, phi, phi, phi>
b) (<sunny, ?, ?, ?> ; <?, warm, ?, ?> ; <?, ?, high, ?>)
c) (<sunny, ?, ?, ?> ; <?, warm, ?, ?>)
d) <?, ?, ?, ?>

Answer: c
Explanation: The third hypothesis in G states that, irrespective of the other attributes, an example is positive if the third attribute is high and negative otherwise. In the given example the third attribute is normal (not high), yet the example is positive, so that hypothesis is inconsistent and is discarded.
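The pruning step in this question can be sketched as a simple filter over G. This is an illustrative sketch; the function names `prune_g` and `covers` are my own.

```python
def prune_g(G, positive):
    """Discard members of G that fail to cover a positive example."""
    def covers(h, x):
        # '?' matches any value; otherwise the values must be equal.
        return all(hv == "?" or hv == xv for hv, xv in zip(h, x))
    return [g for g in G if covers(g, positive)]
```

Applied to the G of this question and the positive example <sunny, warm, normal, same>, only the first two hypotheses survive, matching option c.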

10. It is possible that in the output, set S contains only phi.
a) False
b) True

Answer: b
Explanation: Initially S contains only phi, which classifies no example as positive. If the dataset contains no positive example, S never changes, so even after a complete pass it still contains only phi.

Sanfoundry Global Education & Learning Series – Machine Learning.

