The vast majority of numerical optimization problems can be written in the form
$$\begin{aligned} \min_{x} \quad & f(x) \\ \text{s.t.} \quad & g_i(x) \le 0, \quad i = 1, \dots, m, \\ & h_j(x) = 0, \quad j = 1, \dots, p, \end{aligned}$$
where $x$ are optimized variables, $f$ is the objective function, $g_i$ are inequality constraints and $h_j$ are equality constraints. Here we assume that $f$, $g_i$, and $h_j$ are real-valued functions on $\mathbb{R}^n$ and that constraints $g_i$ and $h_j$ define a non-empty set of feasible solutions.
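As a concrete illustration (this example is ours, not from the course), finding the point of the half-plane $x_1 + x_2 \ge 1$ closest to the origin fits the form above once the constraint is rewritten with a $\le 0$ right-hand side:

```latex
\min_{x \in \mathbb{R}^2} \quad f(x) = x_1^2 + x_2^2
\qquad \text{s.t.} \qquad g_1(x) = 1 - x_1 - x_2 \le 0
```

Here $m = 1$ and $p = 0$ (no equality constraints).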
Based on the properties of $f$, $g_i$, and $h_j$, different approaches can be taken to solve the optimization problem. For example, if the objective function is convex and the constraints are affine, the problem has a unique solution (optimal value of the objective function) that can be found using gradient descent. In this course we will focus on problems of this type, which are classified as convex.
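A minimal sketch of gradient descent on an unconstrained one-dimensional convex objective (the function and parameter names below are our own illustration, not the course's notation):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Minimize a differentiable convex function by stepping
    against its gradient with a fixed learning rate."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill along the negative gradient
    return x

# f(x) = (x - 3)^2 is convex with gradient f'(x) = 2(x - 3);
# its unique minimizer is x = 3.
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_star)  # converges to 3
```

For constrained problems the basic iteration must be modified (e.g. projecting each step back onto the feasible set); the duality machinery introduced next handles constraints in full generality.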
We will start with a general overview of Lagrangian duality and the KKT (Karush–Kuhn–Tucker) conditions, which are applicable to both convex and non-convex optimization problems. Then we will describe two specific types of convex optimization problems: linear programming (LP) and quadratic programming (QP).
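For orientation, one common way to write these two problem classes (details of the standard forms vary between textbooks) is:

```latex
\text{LP:} \quad \min_{x} \; c^\top x
\quad \text{s.t.} \quad A x \le b,
\qquad\qquad
\text{QP:} \quad \min_{x} \; \tfrac{1}{2} x^\top Q x + c^\top x
\quad \text{s.t.} \quad A x \le b.
```

A QP is convex precisely when the matrix $Q$ is positive semidefinite; an LP is always convex, since both its objective and its constraints are affine.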