
Wolfe duality


In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound on the optimal value of a minimization problem can be obtained, because of the weak duality principle.

Mathematical formulation

For a minimization problem with inequality constraints,

{\displaystyle {\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&f(x)\\&\operatorname {subject\;to} &&g_{i}(x)\leq 0,\quad i=1,\dots ,m\end{aligned}}}

the Lagrangian dual problem is

{\displaystyle {\begin{aligned}&{\underset {u}{\operatorname {maximize} }}&&\inf _{x}\left(f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\right)\\&\operatorname {subject\;to} &&u_{i}\geq 0,\quad i=1,\dots ,m\end{aligned}}}

where the objective function is the Lagrange dual function. Provided that the functions {\displaystyle f} and {\displaystyle g_{1},\ldots ,g_{m}} are convex and continuously differentiable, the infimum over {\displaystyle x} is attained where the gradient of {\displaystyle f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)} with respect to {\displaystyle x} is equal to zero. The problem

{\displaystyle {\begin{aligned}&{\underset {x,u}{\operatorname {maximize} }}&&f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\\&\operatorname {subject\;to} &&\nabla f(x)+\sum _{j=1}^{m}u_{j}\nabla g_{j}(x)=0\\&&&u_{i}\geq 0,\quad i=1,\dots ,m\end{aligned}}}

is called the Wolfe dual problem. This problem employs the KKT stationarity condition as a constraint. Also, the equality constraint {\displaystyle \nabla f(x)+\sum _{j=1}^{m}u_{j}\nabla g_{j}(x)=0} is nonlinear in general, so the Wolfe dual problem may be a nonconvex optimization problem. In any case, weak duality holds.
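
For a concrete illustration, consider the toy one-dimensional convex problem with {\displaystyle f(x)=x^{2}} and the single constraint {\displaystyle g_{1}(x)=1-x\leq 0} (this specific instance is chosen here only to make the constructions above explicit). The primal problem

{\displaystyle {\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&x^{2}\\&\operatorname {subject\;to} &&1-x\leq 0\end{aligned}}}

has the optimal solution {\displaystyle x^{*}=1} with optimal value {\displaystyle 1}. The Lagrange dual function is {\displaystyle \inf _{x}\left(x^{2}+u(1-x)\right)=u-u^{2}/4} (the infimum is attained at {\displaystyle x=u/2}), and maximizing it over {\displaystyle u\geq 0} gives {\displaystyle u^{*}=2} with dual value {\displaystyle 1}. The corresponding Wolfe dual problem is

{\displaystyle {\begin{aligned}&{\underset {x,u}{\operatorname {maximize} }}&&x^{2}+u(1-x)\\&\operatorname {subject\;to} &&2x-u=0\\&&&u\geq 0\end{aligned}}}

Substituting {\displaystyle u=2x} from the stationarity constraint turns the objective into {\displaystyle 2x-x^{2}}, which is maximized over {\displaystyle x\geq 0} at {\displaystyle x=1} with value {\displaystyle 1}. As weak duality promises, every Wolfe-dual feasible point yields a lower bound on the primal optimal value; for example, {\displaystyle x=1/2,\;u=1} is feasible and gives the bound {\displaystyle 3/4\leq 1}, while the Wolfe dual optimum equals the primal optimum in this convex example.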

