Machine Learning Questions and Answers – Implementing Soft SVM with SGD

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Implementing Soft SVM with SGD”.

1. SVM uses gradient descent (GD) to optimize its margin instead of using Lagrange multipliers.
a) True
b) False

Answer: b
Explanation: SVM does not use gradient descent as a substitute for Lagrange multipliers; the two are used for different purposes. Gradient descent minimizes an unconstrained optimization problem, whereas Lagrange multipliers are used to convert a constrained optimization problem into an unconstrained one.

2. Gradient descent and Lagrange multipliers are used interchangeably by SVM.
a) True
b) False

Answer: b
Explanation: Gradient descent and Lagrange multipliers are not used interchangeably by SVM; they serve different purposes. Gradient descent minimizes an unconstrained optimization problem, while Lagrange multipliers are used to convert a constrained optimization problem into an unconstrained one.
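To make the distinction concrete, here is a minimal Python/NumPy sketch (not part of the original question set) of plain gradient descent on an unconstrained objective; the objective, step size, and iteration count are illustrative assumptions.

import numpy as np

# Minimal sketch: gradient descent on an unconstrained objective
# f(w) = 0.5*||w||^2 + <b, w>, whose gradient is w + b (illustrative choice).
def gradient_descent(grad, w0, eta=0.1, steps=100):
    w = w0.copy()
    for _ in range(steps):
        w = w - eta * grad(w)   # step against the gradient
    return w

b = np.array([1.0, -2.0])
w_star = gradient_descent(lambda w: w + b, np.zeros(2))
print(w_star)   # converges to the unconstrained minimizer -b = [-1, 2]

No constraints appear here, which is why no Lagrange multipliers are needed; a constrained problem would first have to be converted (e.g. via multipliers) before such an update could be applied.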

3. Let the Soft SVM optimization problem be to minimize a regularized loss function, and let the SGD update rule be \(w^{(t+1)} = -\frac{1}{\lambda t} \sum_{j=1}^{t} v_j\). Then \(v_j\) is a subgradient of the loss function.
a) True
b) False

Answer: a
Explanation: In the given SGD update rule, \(v_j\) is a subgradient of the loss function at \(w^{(j)}\) on the random example chosen at iteration j. Soft SVM relies on the SGD framework for solving regularized loss minimization problems, and hence the update rule can be rewritten in this form.
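As a sketch of this update rule in code (Python/NumPy; loss_subgradient is a hypothetical callback assumed to return a subgradient of the loss at the current w on a freshly sampled example, and lam, T are supplied by the caller):

import numpy as np

def sgd_updates(loss_subgradient, dim, lam, T):
    # Sketch of w(t+1) = -(1/(lam*t)) * sum_{j=1}^{t} v_j, kept as a running sum.
    v_sum = np.zeros(dim)              # v_1 + ... + v_t
    w = np.zeros(dim)                  # w(1) = 0
    for t in range(1, T + 1):
        v_sum += loss_subgradient(w)   # v_t, a subgradient at w(t) on a random example
        w = -v_sum / (lam * t)         # w(t+1) per the update rule above
    return w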

4. Given the Soft SVM optimization problem and the SGD update rule \(w^{(t+1)} = -\frac{1}{\lambda t} \sum_{j=1}^{t} v_j\): for the hinge loss, given an example (x, y), we can choose \(v_j\) to be one if \(y \langle w^{(j)}, x \rangle \ge 1\).
a) True
b) False

Answer: b
Explanation: Given the Soft SVM optimization problem and the SGD update rule \(w^{(t+1)} = -\frac{1}{\lambda t} \sum_{j=1}^{t} v_j\), for the hinge loss on an example (x, y) we can choose \(v_j = 0\) if \(y \langle w^{(j)}, x \rangle \ge 1\), and \(v_j = -yx\) otherwise.
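Putting questions 3 and 4 together, a minimal Python/NumPy sketch of SGD for Soft SVM with the hinge-loss subgradient could look as follows; the toy data, lam, T, and the choice to return the averaged iterate are illustrative assumptions rather than anything fixed by the questions.

import numpy as np

def sgd_soft_svm(X, y, lam, T, seed=0):
    # SGD for Soft SVM: w(t+1) = -(1/(lam*t)) * running sum of hinge-loss subgradients.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    v_sum = np.zeros(d)               # running sum v_1 + ... + v_t
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)           # random example chosen at iteration t
        if y[i] * X[i].dot(w) < 1:
            v_t = -y[i] * X[i]        # subgradient when the margin constraint is violated
        else:
            v_t = np.zeros(d)         # subgradient is zero when y<w, x> >= 1
        v_sum += v_t
        w = -v_sum / (lam * t)        # update rule from question 3
        w_avg += w / T
    return w_avg                      # averaged iterate, a common choice of SGD output

# Tiny illustrative usage on separable toy data
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = sgd_soft_svm(X, y, lam=0.1, T=1000)
print(np.sign(X.dot(w)))              # matches y on this toy set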

5. Which of the following statements is not true about the soft margin solution to the optimization problem?
a) Every constraint can be satisfied if the slack variable is sufficiently large
b) C is a regularization parameter
c) Small C allows constraints to be hard to ignore
d) C = ∞ enforces all constraints and implies a hard margin

Answer: c
Explanation: In the soft margin solution, a small C allows constraints to be easily ignored (large margin), not hard to ignore. C is a regularization parameter, and C = ∞ enforces all constraints, which implies a hard margin. Every constraint can be satisfied if the slack variable is sufficiently large.
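To see the role of C empirically, here is a small sketch using scikit-learn's SVC (a library swapped in for illustration; the toy data and C values are arbitrary assumptions). A small C tolerates violated constraints and yields a wider margin, while a very large C approximates the hard-margin solution.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two overlapping Gaussian blobs, so some slack is unavoidable (illustrative data)
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)), rng.normal(1.0, 1.0, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

for C in (0.01, 1.0, 1e4):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_)   # geometric margin width 2/||w||
    print(f"C={C:g}: margin width={margin:.2f}, #support vectors={clf.n_support_.sum()}")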

Sanfoundry Global Education & Learning Series – Machine Learning.

