This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “PAC Learning”.

1. Who introduced the concept of PAC learning?

a) Francis Galton

b) Reverend Thomas Bayes

c) J.Ross Quinlan

d) Leslie Valiant

Answer: d

Explanation: Valiant introduced PAC learning. Galton introduced linear regression. Quinlan introduced the ID3 decision-tree algorithm. Bayes introduced Bayes’ rule, on which the Naïve Bayes classifier is based.

2. When was PAC learning invented?

a) 1874

b) 1974

c) 1984

d) 1884

Answer: c

Explanation: Leslie Valiant invented PAC learning and described it in 1984. It was introduced as part of computational learning theory.

3. The full form of PAC is ______

a) Partly Approximation Computation

b) Probability Approximation Curve

c) Probably Approximately Correct

d) Partly Approximately Correct

Answer: c

Explanation: Probably Approximately Correct learning tries to find a hypothesis that, with high probability (“probably”), has low generalization error (“approximately correct”).

4. What can be explained by PAC learning?

a) Sample Complexity

b) Overfitting

c) Underfitting

d) Label Function

Answer: a

Explanation: PAC learning gives an estimate of the number of training examples (the sample complexity) a learning algorithm needs to produce a hypothesis with the desired accuracy.

5. What is the significance of epsilon in PAC learning?

a) Probability of approximation < epsilon

b) Maximum error < epsilon

c) Minimum error > epsilon

d) Probability of approximation = delta – epsilon

Answer: b

Explanation: A concept is PAC learnable by L if L can output a hypothesis with error less than epsilon; hence the maximum error of the hypothesis must be below epsilon. Epsilon is typically small, e.g. 0.05 or 0.01.

6. What is the significance of delta in PAC learning?

a) Probability of approximation < delta

b) Error < delta

c) Probability = 1 – delta

d) Probability of approximation = delta – epsilon

Answer: c

Explanation: A concept C is PAC learnable by L if L can output a hypothesis with error at most epsilon with probability at least 1 – delta. Delta is usually very small.

7. One of the goals of PAC learning is to give __________

a) maximum accuracy

b) cross-validation complexity

c) error of classifier

d) computational complexity

Answer: d

Explanation: PAC learning tells us about the amount of effort required for computation so that a learner can come up with a successful hypothesis with high probability.

8. A learner can be deemed consistent if it produces a hypothesis that perfectly fits the __________

a) cross-validation data

b) overall dataset

c) test data

d) training data

Answer: d

Explanation: PAC learning is concerned with the behavior of the learning algorithm, and the learner has access only to the training data set. A consistent learner outputs a hypothesis that makes no errors on that training data.

9. Number of hypotheses |H| = 973, probability = 95%, error < 0.1. Find the minimum number of training examples, m, required.

a) 98.8

b) 99.8

c) 99

d) 98

Answer: c

Explanation: Probability = 1 – delta = 0.95, so delta = 0.05. Error bound epsilon = 0.1.

m >= (1/epsilon)(ln |H| + ln(1/delta)) = (1/0.1)(ln 973 + ln 20) ≈ 98.8

Since the number of training examples m must be an integer, the answer is 99.
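This arithmetic can be sanity-checked in a few lines; a minimal sketch, assuming the standard bound m >= (1/epsilon)(ln |H| + ln(1/delta)) for a consistent learner over a finite hypothesis space (the function name is illustrative):

```python
import math

def sample_complexity(H, epsilon, delta):
    """Minimum number of training examples m for a consistent learner
    over a finite hypothesis space of size H, using
    m >= (1/epsilon) * (ln H + ln(1/delta))."""
    m = (1.0 / epsilon) * (math.log(H) + math.log(1.0 / delta))
    return math.ceil(m)  # m must be an integer number of examples

# Values from question 9: |H| = 973, 95% confidence (delta = 0.05), epsilon = 0.1
print(sample_complexity(973, epsilon=0.1, delta=0.05))  # 99
```

Rounding up with `math.ceil` matters: the bound is a lower limit, so 98.8 examples means at least 99 are required.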

10. In PAC learning, sample complexity grows as the logarithm of the number of hypotheses.

a) False

b) True

Answer: b

Explanation: Sample complexity is the number of training examples required to converge to a successful hypothesis. It is given by m >= (1/epsilon)(ln |H| + ln(1/delta)), where m is the number of training examples and H is the hypothesis space, so m grows logarithmically in |H|.
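The logarithmic growth can be seen by tabulating m while multiplying |H| by 10 at each step with epsilon and delta fixed; a quick sketch using the same bound as in question 9:

```python
import math

epsilon, delta = 0.1, 0.05  # same settings as question 9

for H in (10, 100, 1000, 10000):
    m = math.ceil((1 / epsilon) * (math.log(H) + math.log(1 / delta)))
    print(f"|H| = {H:>6}  ->  m >= {m}")
```

Each tenfold increase in |H| adds only a constant number of required examples (about (1/epsilon) * ln 10 ≈ 23 here), which is exactly what logarithmic growth means.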

**Sanfoundry Global Education & Learning Series – Machine Learning**.

