Machine Learning Questions and Answers – Logistic Regression – Cost Function and Gradient Descent

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Logistic Regression – Cost Function and Gradient Descent”.

1. The cost functions for logistic regression and linear regression are the same.
a) True
b) False
View Answer

Answer: b
Explanation: Logistic regression deals with classification problems, where the output is a probability, whereas linear regression deals with regression problems. Linear regression uses the squared-error cost, while logistic regression uses the log loss (the squared error combined with the sigmoid hypothesis would give a non-convex cost). So the two cost functions are different.
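
As a quick illustration (a minimal NumPy sketch, not part of the original questions; the data values are made up), the two models pair with different costs: linear regression typically uses the mean squared error, while logistic regression uses the log loss:

```python
import numpy as np

y_true = np.array([0.0, 1.0, 1.0])     # actual labels (hypothetical)
y_pred = np.array([0.2, 0.7, 0.9])     # predicted probabilities (hypothetical)

# Linear regression cost: mean squared error
mse = np.mean((y_pred - y_true) ** 2)

# Logistic regression cost: log loss
log_loss = np.mean(-y_true * np.log(y_pred)
                   - (1 - y_true) * np.log(1 - y_pred))

print(mse, log_loss)   # two different functions, two different values
```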

2. h(x) = y. What is cost(h(x), y)?
a) -infinite
b) infinite
c) 0
d) always h(x)
View Answer

Answer: c
Explanation: The cost function measures how far the predicted output h(x) is from the actual output y; the closer they are, the closer the cost is to zero. Since h(x) = y here, the cost is 0.

3. What is the generalized cost function?
a) cost(h(x), y) = -y*log(h(x)) - (1 - y)*log(1 - h(x))
b) cost(h(x), y) = -(1 - y)*log(1 - h(x))
c) cost(h(x), y) = -y*log(h(x))
d) cost(h(x), y) = y*log(h(x)) + (1 - y)*log(1 - h(x))
View Answer

Answer: a
Explanation: The generalized cost function is cost(h(x), y) = -y*log(h(x)) - (1 - y)*log(1 - h(x)). When y = 1, the factor (1 - y) is 0 and the function reduces to -log(h(x)); when y = 0, the factor y is 0 and it reduces to -log(1 - h(x)).
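
A minimal NumPy sketch of this cost (not part of the original question set; the function name and values are illustrative):

```python
import numpy as np

def cost(h, y):
    """Per-example logistic regression cost: -y*log(h) - (1-y)*log(1-h)."""
    return -y * np.log(h) - (1 - y) * np.log(1 - h)

print(cost(0.9, 1))       # y = 1: reduces to -log(h(x)) ~ 0.105
print(cost(0.1, 0))       # y = 0: reduces to -log(1 - h(x)) ~ 0.105
print(cost(0.999999, 1))  # h(x) -> y: cost -> 0 (cf. question 2)
```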

4. Let m be the number of training instances. What is the summation of the per-instance costs multiplied by to obtain the cost function minimized by gradient descent?
a) 1/m
b) m
c) 1 + m
d) 1 – m
View Answer

Answer: a
Explanation: The summation runs over the per-instance costs from training instance 1 to training instance m, and an average is taken to obtain the overall cost function. So the summation is multiplied by 1/m, and the same factor carries over to the gradient used in gradient descent.
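
For concreteness, here is a minimal batch gradient descent sketch with the 1/m averaging, under the usual conventions (bias column in X, learning rate alpha; all names and data here are illustrative, not from the original text):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iterations=1000):
    m, n = X.shape                  # m training instances, n parameters
    theta = np.zeros(n)
    for _ in range(iterations):
        h = sigmoid(X @ theta)      # predictions for all m instances
        grad = (X.T @ (h - y)) / m  # summed gradient scaled by 1/m
        theta -= alpha * grad
    return theta

# Toy data: first column is the bias term, second is one feature
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.0]])
y = np.array([0, 0, 1, 1])
print(gradient_descent(X, y))
```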

5. y = 1. How does cost(h(x), y) change with h(x)?
a) cost(h(x), y) = infinite when h(x) = 1
b) cost(h(x), y) = 0 when h(x) = 0
c) cost(h(x), y) = 0 when h(x) = 1
d) it is independent of h(x)
View Answer

Answer: c
Explanation: Since the actual output is 1, the cost reduces to -log(h(x)): it approaches 0 as the calculated output h(x) tends to 1, and it grows to infinity as h(x) tends to 0.
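
A quick numeric check of this behaviour for y = 1 (illustrative values, not from the original text):

```python
import numpy as np

for h in [0.01, 0.5, 0.99]:
    # Cost for y = 1 is -log(h(x)): large as h(x) -> 0, near 0 as h(x) -> 1
    print(h, -np.log(h))
```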

6. Who invented gradient descent?
a) Ross Quinlan
b) Leslie Valiant
c) Thomas Bayes
d) Augustin-Louis Cauchy
View Answer

Answer: d
Explanation: Cauchy invented gradient descent in 1847. Bayes formulated Bayes’ theorem, Leslie Valiant introduced the idea of PAC learning, and Quinlan developed the ID3 and C4.5 decision tree algorithms.

7. When was gradient descent invented?
a) 1847
b) 1947
c) 1857
d) 1957
View Answer

Answer: a
Explanation: Augustin-Louis Cauchy, a French mathematician, invented the concept of gradient descent in 1847. It has been modified a few times since and is used in a wide range of applications.

8. h(x) = 1, y = 0. What is cost(h(x), y)?
a) -infinite
b) infinite
c) 0
d) always h(x)
View Answer

Answer: b
Explanation: The cost function measures the agreement between the actual output and the calculated output; the greater the disagreement, the higher the cost. With y = 0 the cost is -log(1 - h(x)), which is infinite when h(x) = 1, i.e., when the prediction is the exact opposite of the label.
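
The mirror-image check for y = 0 (again, illustrative values not from the original text):

```python
import numpy as np

for h in [0.01, 0.5, 0.99]:
    # Cost for y = 0 is -log(1 - h(x)): near 0 as h(x) -> 0, blows up as h(x) -> 1
    print(h, -np.log(1 - h))
```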

Sanfoundry Global Education & Learning Series – Machine Learning.


To practice all areas of Machine Learning, here is the complete set of 1000+ Multiple Choice Questions and Answers.
