Naive-Bayes Algorithm Questions and Answers

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Naive-Bayes Algorithm”.

1. Naïve Bayes classifier algorithms are mainly used in text classification.
a) True
b) False

Answer: a
Explanation: The Naïve Bayes classifier is a simple probabilistic framework for solving classification problems. It is widely used to organize text into categories: Bayes’ theorem is applied to training data to learn document-class probabilities, which are then used to classify new text documents.
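As an illustration, a minimal bag-of-words Naïve Bayes text classifier can be sketched in Python. The toy corpus, words and labels below are invented purely for the example:

```python
from collections import Counter
import math

# Hypothetical toy corpus: (document, label) pairs -- illustrative only.
docs = [("free money now", "spam"),
        ("meeting at noon", "ham"),
        ("free offer click now", "spam"),
        ("project meeting notes", "ham")]

# Train: class priors and per-class word counts.
priors = Counter(label for _, label in docs)
word_counts = {label: Counter() for label in priors}
for text, label in docs:
    word_counts[label].update(text.split())

def classify(text):
    scores = {}
    vocab = len(set(w for c in word_counts.values() for w in c))
    for label in priors:
        total = sum(word_counts[label].values())
        # log P(class) + sum of log P(word|class), with add-one smoothing
        score = math.log(priors[label] / len(docs))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free money offer"))  # -> spam
```

The log-probabilities avoid numeric underflow when many words are multiplied together; the add-one term keeps unseen words from zeroing out a class.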

2. What is the formula for Bayes’ theorem, where (A & B) and (H & E) are events and P(B), P(H), P(E) ≠ 0?
a) P(H|E) = [P(E|H) * P(E)] / P(H)
b) P(A|B) = [P(A|B) * P(A)] / P(B)
c) P(H|E) = [P(H|E) * P(H)] / P(E)
d) P(A|B) = [P(B|A) * P(A)] / P(B)

Answer: d
Explanation: Here, P(A) & P(H) are the prior probabilities of the hypothesis before observing the evidence, P(B) & P(E) are the probabilities of the evidence, P(A|B) & P(H|E) are the posterior probabilities, and P(B|A) & P(E|H) are the likelihoods. Bayes’ theorem states that:
\(P(A|B) = \frac {P(B|A) \, * \, P(A)}{P(B)}\)
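The theorem can be checked numerically with exact fractions; the prior and likelihood values below are invented for the example:

```python
from fractions import Fraction

# Assumed numbers: P(H) = 1/100 (prior), P(E|H) = 9/10, P(E|not H) = 1/10.
p_h = Fraction(1, 100)
p_e_given_h = Fraction(9, 10)
p_e_given_not_h = Fraction(1, 10)

# Evidence via total probability: P(E) = P(E|H)P(H) + P(E|not H)P(not H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e
print(p_h_given_e)  # 1/12
```

Even with a strong likelihood (9/10), the small prior keeps the posterior modest — the classic takeaway from Bayes’ theorem.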

3. Which of the following statement is not true about Naïve Bayes classifier algorithm?
a) It cannot be used for Binary as well as multi-class classifications
b) It is the most popular choice for text classification problems
c) It performs well in Multi-class prediction as compared to other algorithms
d) It is one of the fast and easy machine learning algorithms to predict a class of test datasets

Answer: a
Explanation: The Naïve Bayes algorithm can be used for binary as well as multi-class classification. It is a parametric algorithm, meaning it relies on a fixed set of assumptions (parameters) to simplify the learning process.

4. What is the assumption of the Naïve Bayesian classifier?
a) It assumes that features of a data are completely dependent on each other
b) It assumes that each input variable is dependent and the model is not generative
c) It assumes that each input attributes are independent of each other and the model is generative
d) It assumes that the data dimensions are dependent and the model is generative

Answer: c
Explanation: The Naïve Bayes classifier assumes that the input attributes are independent of each other given the class (the “naïve” part) and that the model is generative (the “Bayesian” part).
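Under that independence assumption, the class-conditional joint likelihood factorizes into a product of per-attribute likelihoods. A tiny sketch, with made-up per-attribute probabilities:

```python
from fractions import Fraction
from math import prod  # Python 3.8+

# Assumed per-attribute conditional probabilities P(x_i | class) for one
# class -- illustrative numbers only.
p_attr_given_class = [Fraction(2, 3), Fraction(1, 2), Fraction(3, 4)]

# Naive assumption: attributes are independent given the class, so the
# joint likelihood P(x_1, ..., x_n | class) is just the product.
joint = prod(p_attr_given_class)
print(joint)  # 1/4
```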

5. Which of the following is not a supervised machine learning algorithm?
a) Decision tree
b) SVM for classification problems
c) Naïve Bayes
d) K-means

Answer: d
Explanation: Decision tree, SVM (Support Vector Machine) for classification problems and Naïve Bayes are examples of supervised machine learning algorithms. K-means is an example of an unsupervised machine learning algorithm.

6. Which one of the following terms is not used in the Bayes’ Theorem?
a) Prior
b) Unlikelihood
c) Posterior
d) Evidence

Answer: b
Explanation: The terms evidence, prior, likelihood and posterior are all used in Bayes’ theorem; “unlikelihood” is not. Bayes’ theorem states that Posterior = (Likelihood × Prior) / Evidence.

7. Is the assumption of the Naïve Bayes algorithm a limitation to use it?
a) True
b) False

Answer: a
Explanation: True. The Naïve Bayes assumption is a limitation in practice, since the algorithm implicitly assumes that all input attributes are mutually independent of each other. In real-life data it is almost impossible to get a set of input attributes that are truly independent.

8. In which of the following case the Naïve Bayes’ algorithm does not work well?
a) When faster prediction is required
b) When the Naïve assumption holds true
c) When there is the case of Zero Frequency
d) When there is a multiclass prediction

Answer: c
Explanation: In the “zero frequency” case, a categorical value appears in the test data but never occurs with a given class in the training data. The classifier then assigns it zero probability, which zeroes out the entire product of likelihoods and prevents a meaningful prediction. Smoothing techniques such as Laplace (add-one) smoothing are used to avoid this.
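A common remedy is Laplace (add-one) smoothing; a minimal sketch, with illustrative counts:

```python
def smoothed_prob(count, class_total, vocab_size, alpha=1):
    # Add-one (Laplace) smoothing: an unseen value (count 0) gets a
    # small nonzero probability instead of zero.
    return (count + alpha) / (class_total + alpha * vocab_size)

# A value never seen with this class -- without smoothing its
# probability would be 0 and would zero out the whole product.
print(smoothed_prob(0, 100, 50))   # 1/150, about 0.00667
print(smoothed_prob(10, 100, 50))  # 11/150, about 0.0733
```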

9. There are two boxes. The first box contains 3 white and 2 red balls whereas the second contains 5 white and 4 red balls. A ball is drawn at random from one of the two boxes and is found to be white. Find the probability that the ball was drawn from the second box?
a) 53/50
b) 50/104
c) 54/104
d) 54/44

Answer: b
Explanation: Let the first box be A and the second box be B
Then probability of choosing one box from the two is P(A) = 1/2 and P(B) = 1/2
As given in the question we have to find the probability that the white ball was drawn from the second box = P(B/W)
Now,
P(W/A) = 3/5 and P(W/B) = 5/9
According to Bayes Theorem we know that,
P(B/W) = \(\frac {P(W/B) * P(B)}{P(W/B) * P(B) + P(W/A) * P(A)}\)
P(B/W) = \(\frac {5/9 * 1/2}{(5/9 * 1/2) + (3/5 * 1/2)}\)
P(B/W) = \(\frac {5/18}{5/18 + 3/10}\)
P(B/W) = \(\frac {5/18}{104/180}\)
P(B/W) = 50/104
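The same arithmetic can be verified with exact fractions:

```python
from fractions import Fraction

# The two boxes from the question.
p_a = p_b = Fraction(1, 2)       # each box is equally likely
p_w_given_a = Fraction(3, 5)     # 3 white out of 5 balls in box A
p_w_given_b = Fraction(5, 9)     # 5 white out of 9 balls in box B

# Bayes: P(B|W) = P(W|B)P(B) / [P(W|A)P(A) + P(W|B)P(B)]
p_b_given_w = (p_w_given_b * p_b) / (p_w_given_a * p_a + p_w_given_b * p_b)
print(p_b_given_w)  # 25/52, i.e. 50/104 in lowest terms
```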

10. Which one of the following models is a generative model used in machine learning?
a) Linear Regression
b) Logistic Regression
c) Naïve Bayes
d) Support vector machines

Answer: c
Explanation: Naïve Bayes is a type of generative model which is used in machine learning. Linear Regression, Logistic Regression and Support vector machines are the types of discriminative models which are used in machine learning.

11. The numbers of balls in three boxes are as follows:

Box   Green   Blue   Yellow
A       3       2       1
B       2       1       2
C       4       2       3

One box is chosen at random and two balls are drawn from it. The balls drawn are green and blue. What is the probability that the balls came from the first box?
a) 37/18
b) 15/56
c) 18/37
d) 56/15

Answer: c
Explanation: The probability of choosing one box out of three boxes is P(A) = P(B) = P(C) = 1/3.
Here the event (E) is choosing the green and blue balls from the random box.
Therefore, P(E|A) = \(\frac {^3C_1*^2C_1}{^6C_2}\) = 6/15 = 2/5
P(E|B) = \(\frac {^2C_1*^1C_1}{^5C_2}\) = 2/10 = 1/5
P(E|C) = \(\frac {^4C_1*^2C_1}{^9C_2} = \frac {8}{36}\) = 2/9
According to Bayes’ theorem,
P(A|E) = \(\frac {P(E|A)*P(A)}{P(E|A)*P(A) + P(E|B)*P(B) + P(E|C)*P(C)}\)
Since the priors are equal (1/3 each), they cancel, leaving
P(A|E) = P(E|A) / [P(E|A) + P(E|B) + P(E|C)]
= \(\frac {2/5}{(2/5) + (1/5) + (2/9)}\)
= 18/37
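This computation can likewise be checked with exact fractions, using Python’s math.comb for the binomial coefficients:

```python
from fractions import Fraction
from math import comb

# (green, blue, yellow) counts per box, from the question's table.
boxes = {"A": (3, 2, 1), "B": (2, 1, 2), "C": (4, 2, 3)}

def p_green_blue(green, blue, yellow):
    # P(one green and one blue in two draws without replacement)
    total = green + blue + yellow
    return Fraction(comb(green, 1) * comb(blue, 1), comb(total, 2))

likelihoods = {k: p_green_blue(*v) for k, v in boxes.items()}
# Equal priors (1/3) cancel, so the posterior is the normalized likelihood.
posterior_a = likelihoods["A"] / sum(likelihoods.values())
print(posterior_a)  # 18/37
```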

12. Identify the parametric machine learning algorithm.
a) CNN (Convolutional neural network)
b) KNN (K-Nearest Neighbours)
c) Naïve Bayes
d) SVM (Support vector machines)

Answer: c
Explanation: A parametric machine learning algorithm summarizes the training data within a fixed, finite set of parameters, independent of the number of training examples. Naïve Bayes is a parametric machine learning algorithm, whereas CNN, KNN and SVM are non-parametric machine learning algorithms.

13. Which one of the following applications is not an example of Naïve Bayes algorithm?
a) Spam filtering
b) Text classification
c) Stock market forecasting
d) Sentiment analysis

Answer: c
Explanation: Stock market forecasting is a classic application of KNN (K-Nearest Neighbours), not of Naïve Bayes. Spam filtering, text classification and sentiment analysis are applications of the Naïve Bayes algorithm, which uses Bayes’ theorem of probability to predict the class of unknown data.

14. Arrange the following steps in sequence in order to calculate the probability of an event through Naïve Bayes classifier.
I. Find the likelihood probability with each attribute for each class.
II. Calculate the prior probability for given class labels.
III. Put these values in Bayes formula and calculate posterior probability.
IV. See which class has a higher probability, given the input belongs to the higher probability class.
a) I → II → III → IV
b) II → I → III → IV
c) III → II → I → IV
d) II → III → I → IV

Answer: b
Explanation: The sequence in which Naïve Bayes calculates the probability of an event is:
II. Calculate the prior probability for given class labels.
I. Find the likelihood probability with each attribute for each class.
III. Put these values in Bayes formula and calculate posterior probability.
IV. See which class has a higher probability, given the input belongs to the higher probability class.
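The four steps above can be sketched end-to-end on a toy single-attribute dataset (the data below is invented for illustration):

```python
from collections import Counter, defaultdict
from fractions import Fraction

# Toy weather dataset: (outlook, play?) -- assumed for illustration.
data = [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
        ("rain", "yes"), ("rain", "yes"), ("overcast", "yes"),
        ("sunny", "yes"), ("rain", "no")]

# Step II: prior probability for each class label.
class_counts = Counter(label for _, label in data)
priors = {c: Fraction(n, len(data)) for c, n in class_counts.items()}

# Step I: likelihood of each attribute value for each class.
value_counts = defaultdict(Counter)
for value, label in data:
    value_counts[label][value] += 1
likelihood = {c: {v: Fraction(n, class_counts[c])
                  for v, n in value_counts[c].items()}
              for c in class_counts}

# Step III: posterior is proportional to likelihood x prior
# (the evidence term cancels when comparing classes).
x = "sunny"
posterior = {c: likelihood[c].get(x, Fraction(0)) * priors[c] for c in priors}

# Step IV: predict the class with the higher posterior.
print(max(posterior, key=posterior.get))  # -> no
```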

15. “It is easy and fast to predict the class of the test data set by using Naïve Bayes algorithm”.
Which of the following statement contradicts the above given statement?
a) Because there is no iteration
b) Because there is no epoch
c) Because there is an error back propagation
d) Because there are no operations involved in solving a matrix problem

Answer: c
Explanation: Naïve Bayes is easy and fast at predicting the class of a test data set because there is no iteration, no epochs, no matrix operations to solve, and no error back-propagation. Option c therefore contradicts the statement: if error back-propagation were involved, prediction would not be fast and easy.

Sanfoundry Global Education & Learning Series – Machine Learning.

To practice all areas of Machine Learning, here is the complete set of 1000+ Multiple Choice Questions and Answers.


Manish Bhojasia, a technology veteran with 20+ years @ Cisco & Wipro, is Founder and CTO at Sanfoundry. He lives in Bangalore and focuses on development of Linux Kernel, SAN Technologies, Advanced C, and Data Structures & Algorithms.