This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Implementing Soft SVM with SGD”.
1. SVM uses gradient descent (GD) to optimize its margin instead of using Lagrange multipliers.
a) True
b) False
Answer: b
Explanation: SVM does not use gradient descent instead of Lagrange multipliers; the two are used for different purposes. Gradient descent minimizes an unconstrained optimization problem, while Lagrange multipliers are used to convert a constrained optimization problem into an unconstrained one.
2. Gradient descent and Lagrange multipliers are used interchangeably by SVM.
a) True
b) False
Answer: b
Explanation: Gradient descent and Lagrange multipliers are not used interchangeably by SVM; they serve different purposes. Gradient descent minimizes an unconstrained optimization problem, while Lagrange multipliers convert a constrained optimization problem into an unconstrained one.
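As a minimal sketch of the first point only, the following Python snippet applies plain gradient descent to an unconstrained objective; the function name gradient_descent, the step size, and the example objective are illustrative choices, not part of the questions.

import numpy as np

def gradient_descent(grad, w0, eta=0.1, n_steps=100):
    # Minimize an unconstrained objective by repeatedly stepping against its gradient.
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        w = w - eta * grad(w)
    return w

# Example: f(w) = ||w||^2 has gradient 2w; GD drives w towards the minimizer 0.
w_min = gradient_descent(lambda w: 2 * w, w0=[3.0, -2.0])
print(w_min)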
3. Let the Soft SVM optimization problem be to minimize a regularized loss function, and let the SGD update rule be \(w^{(t+1)} = -\frac{1}{\lambda t} \sum_{j=1}^{t} v_j\). Then \(v_j\) is a subgradient of the loss function.
a) True
b) False
Answer: a
Explanation: In the given SGD update rule, \(v_j\) is a subgradient of the loss function at \(w^{(j)}\) on the random example chosen at iteration j. Because Soft SVM is a regularized loss minimization problem, it fits the SGD framework for such problems, and the update rule can be rewritten in the given averaged-subgradient form.
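A minimal Python sketch of this update rule, assuming a user-supplied subgradient oracle (the names run_sgd and subgradient are illustrative), keeps a running sum of the subgradients:

import numpy as np

def run_sgd(subgradient, lam, T, dim):
    # subgradient(w, t) is assumed to return v_t, a subgradient of the loss at w
    # on the random example chosen at iteration t.
    v_sum = np.zeros(dim)
    w = np.zeros(dim)                   # w(1) = 0
    for t in range(1, T + 1):
        v_sum += subgradient(w, t)      # accumulate v_1 + ... + v_t
        w = -(1.0 / (lam * t)) * v_sum  # w(t+1) = -(1/(lambda*t)) * sum_j v_j
    return w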
4. Given the Soft SVM optimization problem and the SGD update rule \(w^{(t+1)} = -\frac{1}{\lambda t} \sum_{j=1}^{t} v_j\): for the hinge loss, given an example (x, y), we can choose \(v_j\) to be one if \(y \langle w^{(j)}, x \rangle \ge 1\).
a) True
b) False
Answer: b
Explanation: Given the Soft SVM optimization problem and the SGD update rule \(w^{(t+1)} = -\frac{1}{\lambda t} \sum_{j=1}^{t} v_j\), for the hinge loss on an example (x, y) we choose \(v_j = 0\) if \(y \langle w^{(j)}, x \rangle \ge 1\), and \(v_j = -y\,x\) otherwise.
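Putting questions 3 and 4 together, here is a minimal Python sketch of SGD for Soft SVM with the hinge-loss subgradient. The function name soft_svm_sgd and the default values of lam and T are illustrative, and labels are assumed to be ±1.

import numpy as np

def soft_svm_sgd(X, y, lam=0.1, T=1000, seed=0):
    # X: (n, d) array of examples; y: labels in {-1, +1}.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    v_sum = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                 # random example chosen at iteration t
        if y[i] * np.dot(w, X[i]) < 1:      # hinge loss is active
            v_sum += -y[i] * X[i]           # v_t = -y_i x_i
        # otherwise v_t = 0 and the running sum is unchanged
        w = -(1.0 / (lam * t)) * v_sum      # update rule from the question
    return w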
5. Which of the following statements is not true about the soft margin solution of the optimization problem?
a) Every constraint can be satisfied if the slack variable is sufficiently large
b) C is a regularization parameter
c) Small C allows constraints to be hard to ignore
d) C = ∞ enforces all constraints and implies a hard margin
Answer: c
Explanation: In the soft margin solution, a small C allows constraints to be easily ignored (large margin), not hard to ignore. C is a regularization parameter, C = ∞ enforces all constraints and implies a hard margin, and every constraint can be satisfied if the slack variable is sufficiently large.
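For reference, a standard form of the soft margin problem with slack variables \(\xi_i\) and regularization parameter C (the notation here may differ slightly from other sources) is
\[ \min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \xi_i \quad \text{subject to} \quad y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0 . \]
Setting C = ∞ forces every \(\xi_i\) to zero, which recovers the hard margin, while a small C lets the slack variables absorb constraint violations cheaply.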
Sanfoundry Global Education & Learning Series – Machine Learning.
To practice all areas of Machine Learning, here is the complete set of 1000+ Multiple Choice Questions and Answers.