Machine Learning Questions and Answers – Optimality Conditions and Support Vectors

This set of Machine Learning Multiple Choice Questions & Answers (MCQs) focuses on “Optimality Conditions and Support Vectors”.

1. A Lagrange dual of a convex optimisation problem is another convex optimisation problem.
a) True
b) False
View Answer

Answer: a
Explanation: An optimisation problem can be viewed from two perspectives: the primal problem and the dual problem. The solution to the dual problem provides a lower bound on the solution of the primal (minimisation) problem. The Lagrange dual of a convex optimisation problem is itself a convex optimisation problem whose optimisation variables are the Lagrange multipliers of the original problem.

2. The difference between the primal and dual solutions is known as duality gap.
a) True
b) False
View Answer

Answer: a
Explanation: In optimisation problems the difference between the primal and dual solutions is known as the duality gap. The duality gap is zero if and only if strong duality holds; otherwise the gap is strictly positive and only weak duality holds.
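As an illustration, here is a minimal sketch (a hypothetical toy problem, not from the text above): for the convex problem min f(x) = x² subject to x ≥ 1, the dual function can be written in closed form, and the duality gap works out to zero because strong duality holds for this convex problem.

```python
# Hypothetical toy problem: min f(x) = x^2 subject to x >= 1.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x), with lam >= 0.
# Dual function: g(lam) = min_x L(x, lam); the minimiser is x = lam/2,
# giving g(lam) = -lam^2 / 4 + lam in closed form.

def f(x):
    return x * x

def g(lam):
    # closed-form dual function for this particular problem
    return -lam * lam / 4.0 + lam

primal_opt = f(1.0)  # x* = 1 is the constrained minimiser
# coarse grid search for the dual optimum over lam in [0, 10]
dual_opt = max(g(l / 100.0) for l in range(0, 1001))
duality_gap = primal_opt - dual_opt

print(primal_opt, dual_opt, duality_gap)  # 1.0 1.0 0.0
```

Note that g(λ) ≤ primal optimum for every λ ≥ 0 (weak duality), and the maximum of g equals the primal optimum (strong duality), so the gap is zero.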

3. In optimisation problems Lagrangian is used to find out only the local minima of a function subject to certain constraints.
a) True
b) False
View Answer

Answer: b
Explanation: In optimisation problems the Lagrangian is used to find both the local minima and the local maxima of a function subject to certain equality constraints. The Lagrangian is the function in which the constraints are incorporated by multiplying each of them by a coefficient called a Lagrange multiplier.
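A minimal sketch of the method on a hypothetical example: extremise f(x, y) = x + y subject to h(x, y) = x² + y² − 1 = 0. Setting the gradient of the Lagrangian to zero yields two stationary points, one a constrained maximum and one a constrained minimum.

```python
import math

# Hypothetical example: extremise f(x, y) = x + y on the unit circle
# h(x, y) = x^2 + y^2 - 1 = 0.
# Stationarity of L(x, y, lam) = f - lam * h gives:
#   1 - 2*lam*x = 0  and  1 - 2*lam*y = 0  ->  x = y = 1/(2*lam)
# Substituting into h: 2*x^2 = 1  ->  x = +-1/sqrt(2)

candidates = []
for x in (1 / math.sqrt(2), -1 / math.sqrt(2)):
    y = x                # forced by the stationarity conditions
    lam = 1 / (2 * x)    # the matching Lagrange multiplier
    candidates.append((x, y, lam, x + y))

# One stationary point is the constrained maximum, the other the minimum
max_pt, min_pt = sorted(candidates, key=lambda c: -c[3])
print(max_pt[3], min_pt[3])  # roughly 1.41421 and -1.41421
```

The same stationarity conditions deliver both extrema, which is the point of the corrected statement above.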

4. Karush–Kuhn–Tucker (KKT) conditions are second derivative tests for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
a) False
b) True
View Answer

Answer: a
Explanation: Karush–Kuhn–Tucker (KKT) conditions are not second derivative tests but first derivative tests for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. They are also known as the first-order necessary conditions.
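A minimal numerical sketch of the four KKT conditions, on a hypothetical toy problem: min f(x) = x² subject to g(x) = 1 − x ≤ 0, whose known solution is x* = 1 with multiplier λ* = 2. Note that only first derivatives appear.

```python
# Hypothetical toy problem: min f(x) = x^2 s.t. g(x) = 1 - x <= 0.
# Known solution: x* = 1, lam* = 2.  Check the four KKT conditions.
x_star, lam_star = 1.0, 2.0

grad_f = 2 * x_star   # f'(x) = 2x   (first derivative only)
grad_g = -1.0         # g'(x) = -1

stationarity = grad_f + lam_star * grad_g   # should equal 0
primal_feasible = (1 - x_star) <= 0         # g(x*) <= 0
dual_feasible = lam_star >= 0               # lam* >= 0
comp_slackness = lam_star * (1 - x_star)    # lam* * g(x*) should be 0

print(stationarity, primal_feasible, dual_feasible, comp_slackness)
```

All four conditions hold at (x*, λ*), confirming optimality of this point for the toy problem.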

5. When the constrained maximization/minimization problem is rewritten as a Lagrange function, then its optimal point is known as saddle point.
a) True
b) False
View Answer

Answer: a
Explanation: When the constrained maximization/minimization problem is rewritten as a Lagrange function, its optimal point is known as a saddle point. A saddle point or minimax point is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero, but which is not a local extremum of the function.
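A minimal sketch of the saddle-point property, reusing the hypothetical problem min x² s.t. x ≥ 1: its Lagrangian L(x, λ) = x² + λ(1 − x) has a saddle point at (x*, λ*) = (1, 2), a minimum along x and a maximum along λ, so L(x*, λ) ≤ L(x*, λ*) ≤ L(x, λ*) for all x and all λ ≥ 0.

```python
# Hypothetical illustration: L(x, lam) = x^2 + lam*(1 - x) for the
# problem min x^2 s.t. x >= 1 has a saddle point at (x*, lam*) = (1, 2).

def L(x, lam):
    return x * x + lam * (1 - x)

x_star, lam_star = 1.0, 2.0
saddle = L(x_star, lam_star)   # = 1

# Saddle-point inequality: L(x*, lam) <= L(x*, lam*) <= L(x, lam*)
left = all(L(x_star, l / 10) <= saddle + 1e-12 for l in range(0, 100))
right = all(saddle <= L(x / 10, lam_star) + 1e-12 for x in range(-50, 50))
print(left, right)  # True True
```

Along x the Lagrangian L(x, 2) = (x − 1)² + 1 is minimised at x* = 1; along λ it is flat at x* because the constraint is active, so the inequality holds on both sides.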

6. Support vector machine is a generative classifier.
a) True
b) False
View Answer

Answer: b
Explanation: A support vector machine is a discriminative classifier: rather than modelling the distribution of each class, an SVM simply finds a line or curve that divides the classes from each other. It models the decision boundary directly from the observed data.
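To make the discriminative idea concrete, here is a minimal sketch on a hypothetical 1-D data set: a linear classifier trained with the SVM hinge loss by subgradient descent. For brevity it omits the margin-maximising ‖w‖² regulariser of a full SVM; it never models the class distributions, only the boundary.

```python
# Hypothetical toy data set: (feature, label) with labels in {-1, +1}.
data = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):                 # subgradient descent on hinge loss
    for x, y in data:
        if y * (w * x + b) < 1:      # point violates the margin: update
            w += lr * y * x
            b += lr * y

# The learned boundary separates the classes; distributions are never modelled
predictions = [1 if w * x + b >= 0 else -1 for x, _ in data]
print(predictions)  # [-1, -1, 1, 1]
```

A generative classifier would instead fit a density per class and classify via Bayes' rule; the hinge-loss learner above touches only points near the boundary.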

7. Which of the following statements is not true about Lagrange multipliers?
a) The basic idea behind this is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied
b) It is done by converting a constrained problem to an equivalent unconstrained problem with the help of certain unspecified parameters
c) The Hessian matrix determines the maxima, minima, or saddle points before the stationary points have been identified
d) Once the stationary points have been identified from the first-order necessary conditions, it determines the maxima, minima or saddle points
View Answer

Answer: c
Explanation: The Hessian matrix determines the maxima, minima, or saddle points only after the stationary points have been identified from the first-order necessary conditions. And for this, initially the constrained problem is converted into an unconstrained problem with the help of Lagrange multipliers.
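A minimal sketch of the second step: once stationary points are known, the signs of the Hessian's eigenvalues classify them. For the hypothetical functions f₁(x, y) = x² + y² and f₂(x, y) = x² − y², both have zero gradient at the origin; their constant Hessians are diagonal, so the eigenvalues are just the diagonal entries.

```python
# Classify a stationary point from the eigenvalues of the Hessian there.
def classify(hessian_eigs):
    if all(e > 0 for e in hessian_eigs):
        return "minimum"        # positive definite Hessian
    if all(e < 0 for e in hessian_eigs):
        return "maximum"        # negative definite Hessian
    return "saddle point"       # indefinite Hessian

# f1 = x^2 + y^2 has Hessian diag(2, 2); f2 = x^2 - y^2 has diag(2, -2).
print(classify([2, 2]), classify([2, -2]))  # minimum saddle point
```

This is why the Hessian test in option (c) cannot run "before" the stationary points are found: it needs a point at which to evaluate the eigenvalues.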

8. Let the problem be min f(x1, x2, …, xn) subject to h1(x1, x2, …, xn) = 0, and let it be converted into min L(x1, x2, …, xn, λ) = min {f(x1, x2, …, xn) – λh1(x1, x2, …, xn)}. Then L(x, λ) and λ are known as the Lagrangian function and the Lagrange multiplier respectively.
a) True
b) False
View Answer

Answer: a
Explanation: For the given optimisation problem, L(x, λ) is known as the Lagrangian function and λ is an unspecified positive or negative constant called the Lagrange multiplier. The method of Lagrange multipliers is widely used to solve challenging constrained optimisation problems.

9. The training data points nearest to the separating hyperplane are known as support vectors.
a) True
b) False
View Answer

Answer: a
Explanation: The training data points nearest to the separating hyperplane are known as support vectors. The SVM finds the best separating line together with the points closest to that line from both classes; these points are the support vectors.

10. Which of the following statements is not true about support vectors?
a) Support vectors are used to maximize the margin of the classifier
b) Deleting the support vectors will change the position of the hyperplane
c) The vectors that define the hyperplane are the support vectors
d) The extreme points in the data sets that define the hyperplane are not included in the support vectors
View Answer

Answer: d
Explanation: The extreme points in the data set that define the hyperplane are the support vectors. They define the hyperplane and are used to maximize the margin of the classifier, and deleting them will change the position of the hyperplane.
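A minimal sketch of identifying support vectors, assuming a separating hyperplane w·x + b = 0 is already given (both the hyperplane and the points below are hypothetical): the support vectors are the training points at the smallest distance to the hyperplane.

```python
import math

# Assumed (hypothetical) hyperplane: w . x + b = 0, i.e. x1 = 1.5.
w, b = (1.0, 0.0), -1.5

# Hypothetical training points.
points = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (3.0, 2.0)]

def distance(p):
    # perpendicular distance of point p from the hyperplane
    return abs(w[0] * p[0] + w[1] * p[1] + b) / math.hypot(*w)

d_min = min(distance(p) for p in points)
support_vectors = [p for p in points if abs(distance(p) - d_min) < 1e-9]
print(support_vectors)  # [(1.0, 1.0), (2.0, 0.5)]
```

Removing either of the two closest points would allow the maximum-margin hyperplane to shift, whereas removing a far-away point would not, which is exactly the property options (a)-(c) describe.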

Sanfoundry Global Education & Learning Series – Machine Learning.


Manish Bhojasia - Founder & CTO at Sanfoundry
Manish Bhojasia, a technology veteran with 20+ years @ Cisco & Wipro, is Founder and CTO at Sanfoundry. He lives in Bangalore, and focuses on development of Linux Kernel, SAN Technologies, Advanced C, Data Structures & Algorithms.
