Machine Learning Questions and Answers – Fundamental Theorem of PAC Learning

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Fundamental Theorem of PAC Learning”.

1. Any ERM rule is a successful PAC learner for hypothesis space H.
a) True
b) False
View Answer

Answer: a
Explanation: By the fundamental theorem of PAC learning, if the hypothesis space H has finite VC dimension (in particular, if H is finite), then every ERM rule is a successful PAC learner for H. ERM picks a hypothesis minimizing the error on the training sample, and the theorem guarantees that, with high probability over the sample, this hypothesis also has true error close to the best achievable in H.
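The ERM rule discussed above can be sketched in a few lines of Python. The hypothesis class (threshold functions) and the training data below are illustrative assumptions, not part of the original question:

```python
# Sketch of Empirical Risk Minimization (ERM): pick the hypothesis
# with the lowest error on the training sample. The hypothesis class
# (threshold classifiers) and the sample are hypothetical examples.

def empirical_error(h, sample):
    """Fraction of training examples that h labels incorrectly."""
    return sum(1 for x, y in sample if h(x) != y) / len(sample)

def erm(hypothesis_class, sample):
    """Return a hypothesis minimizing the empirical (training) error."""
    return min(hypothesis_class, key=lambda h: empirical_error(h, sample))

# Toy class of threshold classifiers h_t(x) = 1 if x >= t else 0
thresholds = [0.0, 0.25, 0.5, 0.75, 1.0]
H = [lambda x, t=t: int(x >= t) for t in thresholds]

# Training sample labeled by the true threshold 0.5
sample = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]

h_hat = erm(H, sample)
print(empirical_error(h_hat, sample))  # 0.0 — ERM fits the sample perfectly
```

The PAC guarantee then says this training-error minimizer will, with high probability, also do well on fresh instances drawn from the same distribution.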

2. If distribution D assigns zero probability to instances where h not equal to c, then the error will be ______
a) 1
b) 0.5
c) 0
d) infinite
View Answer

Answer: c
Explanation: Every instance drawn from D falls where h agrees with c, so h never mislabels a drawn instance. Hence the probability of drawing an instance on which h errs is zero.

3. If distribution D assigns zero probability to instances where h = c, then the error will be ______
a) Cannot be determined
b) 0.5
c) 1
d) 0
View Answer

Answer: c
Explanation: D assigns zero probability to instances where h = c. Thus every instance drawn from D falls where h disagrees with c, i.e., h mislabels it. Since every drawn instance is mislabeled, the probability of error is 1.
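The two cases above follow directly from the definition of true error as the probability mass D places on the disagreement region. A minimal sketch, where the instance space, target concept, hypothesis, and distributions are all illustrative assumptions:

```python
# Sketch: the true error of h w.r.t. concept c is the probability mass
# that D assigns to instances where h(x) != c(x). The instance space,
# functions, and distributions below are hypothetical examples.

def true_error(h, c, D):
    """Sum D's probability over instances where h and c disagree."""
    return sum(p for x, p in D.items() if h(x) != c(x))

c = lambda x: int(x >= 2)   # target concept
h = lambda x: int(x >= 3)   # hypothesis; disagrees with c only at x = 2

# Q2's case: D assigns zero probability to the disagreement region -> error 0
D_agree = {0: 0.5, 1: 0.3, 2: 0.0, 3: 0.2}
print(true_error(h, c, D_agree))     # 0.0

# Q3's case: D assigns zero probability wherever h = c -> error 1
D_disagree = {0: 0.0, 1: 0.0, 2: 1.0, 3: 0.0}
print(true_error(h, c, D_disagree))  # 1.0
```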

4. Error strongly depends on distribution D.
a) True
b) False
View Answer

Answer: a
Explanation: If D is a uniform probability distribution that assigns the same probability to every instance in X, then the error for the hypothesis will be the fraction of the total instance space that falls into the region where h and c disagree.
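The uniform case described above can be sketched directly: when every instance is equally likely, the error reduces to a simple fraction. The instance space and functions are illustrative assumptions:

```python
# Sketch: under a uniform distribution D over a finite instance space X,
# the error equals the fraction of X where h and c disagree.
# The instance space and labeling functions are hypothetical examples.

X = list(range(10))
c = lambda x: int(x >= 5)  # target concept
h = lambda x: int(x >= 7)  # hypothesis; disagrees with c at x = 5 and 6

disagreements = [x for x in X if h(x) != c(x)]
error = len(disagreements) / len(X)  # fraction of X in the disagreement region
print(error)  # 0.2
```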

5. PAC learning was introduced by ____________
a) Vapnik
b) Leslie Valiant
c) Chervonenkis
d) Reverend Thomas Bayes
View Answer

Answer: b
Explanation: Leslie Valiant introduced PAC learning in 1984. Vapnik and Chervonenkis introduced the idea of the VC dimension. Thomas Bayes published Bayes’ Theorem.

6. Error is defined over the _____________
a) training set
b) test set
c) domain set
d) cross-validation set
View Answer

Answer: c
Explanation: Error is defined over the entire distribution of instances, not simply over the training examples, because this is the true error one expects to encounter when actually using the learned hypothesis h on subsequent instances drawn from D.
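This distinction matters because a hypothesis can look perfect on the training set yet still err on the domain. A sketch using a hypothetical target, hypothesis, and a uniform distribution on [0, 1); all names and data are illustrative assumptions:

```python
# Sketch: training error can be 0 while the true error over the whole
# domain (distribution D) is not, which is why error is defined over D.
# The target, hypothesis, and data below are hypothetical examples.

import random

random.seed(0)

c = lambda x: int(x >= 0.5)  # target concept
h = lambda x: int(x >= 0.6)  # hypothesis; wrong on [0.5, 0.6)

# A small training sample that happens to miss the disagreement region
train = [0.1, 0.3, 0.7, 0.9]
train_error = sum(h(x) != c(x) for x in train) / len(train)
print(train_error)  # 0.0 — looks perfect on the training set

# Estimate the true error by sampling many instances from D = Uniform(0, 1)
draws = [random.random() for _ in range(100_000)]
est_true_error = sum(h(x) != c(x) for x in draws) / len(draws)
print(est_true_error)  # close to 0.1: the mass of the disagreement region
```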

7. The error of h with respect to c is the probability that a randomly drawn instance will fall into the region where _________
a) h and c disagree
b) h and c agree
c) h is greater than c but not less
d) h is lesser than c but not greater
View Answer

Answer: a
Explanation: The concepts c and h are depicted by the sets of instances within X that they label as positive. Wherever the two labelings differ, for example where c labels an instance positive but h labels it negative, h errs; the error of h is the probability that a randomly drawn instance falls into this disagreement region.

8. When was PAC learning invented?
a) 1954
b) 1964
c) 1974
d) 1984
View Answer

Answer: d
Explanation: PAC learning belongs to the field of computational learning theory. Leslie Valiant introduced the concept of PAC learning in 1984.

Sanfoundry Global Education & Learning Series – Machine Learning.
