# Machine Learning Questions and Answers – Find-S Algorithm

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Version Spaces – Find-S Algorithm”.

1. What is present in the version space of the Find-S algorithm in the beginning?
a) Set of all hypotheses H
b) Both maximally general and maximally specific hypotheses
c) Maximally general hypothesis
d) Maximally specific hypothesis

Answer: d
Explanation: Initially, the version space contains only the maximally specific hypothesis. It is generalized step by step as positive examples are encountered. At any stage, the hypothesis is the most specific one consistent with the training data seen so far.

2. When does the hypothesis change in the Find-S algorithm during iteration?
a) Any example (positive or negative) is encountered
b) Any negative example is encountered
c) Positive Example inconsistent with the hypothesis is encountered
d) Any positive example is encountered

Answer: c
Explanation: The Find-S algorithm ignores negative examples. The hypothesis changes only when a positive example inconsistent with the current hypothesis is encountered; each inconsistent attribute is then replaced by a more general value.

3. What is one of the assumptions of the Find-S algorithm?
b) The most specific hypothesis is also the most general hypothesis
c) All training data are correct (there is no noise)
d) Overfitting does not occur

Answer: c
Explanation: Since negative examples are not considered, a large part of the data is discarded. For the output hypothesis to be accurate, the remaining examples must be noise-free and adequate.

4. What is one of the advantages of the Find-S algorithm?
a) Computation is faster than other concept learning algorithms
b) All correct hypotheses are output
c) Most generalized hypothesis is output
d) Overfitting does not occur

Answer: a
Explanation: All negative examples are discarded, and the version space consists of only one hypothesis, whereas the version spaces of other algorithms contain more than one. At each step, only this single hypothesis is compared with the training instance, so computation in Find-S is much faster.

5. How does the hypothesis change gradually?
a) Specific to Specific
b) Specific to General
c) General to Specific
d) General to General

Answer: b
Explanation: Initially, the hypothesis is maximally specific, consisting only of phi. After each new positive example, it is generalized, attribute by attribute, to remain consistent with the training data.
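The specific-to-general movement described above can be sketched in a few lines of Python. This is a minimal illustration, not a library routine; the names (`find_s`, `PHI`, `ANY`) and the sample data are made up for the example.

```python
PHI = "phi"  # matches no value (maximally specific placeholder)
ANY = "?"    # matches any value (maximally general placeholder)

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs; label True = positive."""
    n = len(examples[0][0])
    h = [PHI] * n                     # start with the most specific hypothesis
    for attrs, positive in examples:
        if not positive:
            continue                  # negative examples are ignored entirely
        for i, value in enumerate(attrs):
            if h[i] == PHI:
                h[i] = value          # first positive example: copy its values
            elif h[i] != value:
                h[i] = ANY            # mismatch: generalize the attribute to '?'
    return h

data = [
    (("sunny", "warm", "high"), True),
    (("rainy", "cold", "high"), False),   # ignored
    (("sunny", "hot",  "high"), True),
]
print(find_s(data))   # ['sunny', '?', 'high']
```

Note that the hypothesis only ever moves from phi to a concrete value, or from a concrete value to `?`; it never becomes more specific again.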

6. S = <phi, phi, phi>, Training data = <rainy, cold, white> => No (negative example). How will S be represented after encountering this training data?
a) <phi, phi, phi>
b) <sunny, warm, white>
c) <rainy, cold, black>
d) <?, ?, ?>

Answer: a
Explanation: When a negative example is encountered, the Find-S algorithm ignores it, so the hypothesis remains unchanged. It changes only when a positive example inconsistent with the current hypothesis is encountered.

7. What is one of the drawbacks of the Find-S algorithm?
a) Computation cost is high
b) Time-ineffective
c) All correct hypotheses are not output
d) Most specific accurate hypothesis is not output

Answer: c
Explanation: The hypothesis generated is always the most specific one at each step. A more general hypothesis may also fit the data, but it is never considered because negative examples are discarded. Thus some hypotheses consistent with the training data are never output by the learner.

8. Noise or errors in the dataset can severely affect the performance of the Find-S algorithm.
a) True
b) False

Answer: a
Explanation: The algorithm ignores negative examples, so a large part of the dataset is discarded and the accuracy of the learned hypothesis depends heavily on the remaining portion. Errors or noise in that portion can cause large inaccuracies.

9. S = <phi, phi, phi>, Training data = <square, pointy, white> => Yes (positive example). How will S be represented after encountering this training data?
a) <phi, phi, phi>
b) <square, pointy, white>
c) <circular, blunt, black>
d) <?, ?, ?>

Answer: b
Explanation: Initially, S contains phi, meaning no example is classified as positive. On encountering a positive example inconsistent with the current hypothesis, the algorithm generalizes just enough to accept it, so S takes on the values of the training instance.
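The behaviour in questions 6 and 9 can be checked with a single Find-S update step. This is a hedged sketch: the helper name `update` and the constants are illustrative, and the attribute values come from the questions above.

```python
PHI, ANY = "phi", "?"

def update(h, attrs, positive):
    """One Find-S step: return the hypothesis after one training example."""
    if not positive:
        return h  # a negative example leaves S unchanged (question 6)
    # For a positive example, copy values into phi slots and
    # generalize any mismatching attribute to '?'.
    return [v if hi == PHI else (hi if hi == v else ANY)
            for hi, v in zip(h, attrs)]

S = [PHI, PHI, PHI]
S = update(S, ("rainy", "cold", "white"), False)    # question 6: negative
print(S)   # ['phi', 'phi', 'phi'] -- unchanged
S = update(S, ("square", "pointy", "white"), True)  # question 9: first positive
print(S)   # ['square', 'pointy', 'white']
```

Since S still contained only phi when the first positive example arrived, every slot simply takes that example's value, matching option b of question 9.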

10. The algorithm accommodates all the maximally specific hypotheses.
a) True
b) False

Answer: b
Explanation: S contains phi initially and then gradually generalizes with new training examples. But it generalizes in only one particular order and never backtracks, so it never considers a different branch that might lead to a different target concept.

Sanfoundry Global Education & Learning Series – Machine Learning.

To practice all areas of Machine Learning, here is the complete set of 1000+ Multiple Choice Questions and Answers.

If you find a mistake in question / option / answer, kindly take a screenshot and email to [email protected]