Proximal gradient method


Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems.

Figure: A comparison between the iterates of the projected gradient method (in red) and the Frank–Wolfe method (in green).

Many interesting problems can be formulated as convex optimization problems of the form

\min_{x \in \mathbb{R}^N} \sum_{i=1}^{n} f_i(x)

where f_i : \mathbb{R}^N \to \mathbb{R}, i = 1, \dots, n, are possibly non-differentiable convex functions. The lack of differentiability rules out conventional smooth optimization techniques such as the steepest descent method and the conjugate gradient method, but proximal gradient methods can be used instead.
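
As a concrete example (not discussed explicitly in this snapshot), the ℓ1-regularized least-squares problem, known as the lasso, fits this template with n = 2:

\min_{x \in \mathbb{R}^N} \tfrac{1}{2} \| A x - b \|_2^2 + \lambda \| x \|_1

where f_1(x) = \tfrac{1}{2}\|Ax - b\|_2^2 is smooth and f_2(x) = \lambda \|x\|_1 is convex but not differentiable at the origin.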

Proximal gradient methods start with a splitting step, in which the functions f_1, \dots, f_n are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-differentiable function among f_1, \dots, f_n is involved via its proximity operator. The iterative shrinkage-thresholding algorithm, projected Landweber, projected gradient, alternating projections, the alternating-direction method of multipliers, and alternating split Bregman are special instances of proximal algorithms.
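
As an illustration, the following sketch (in Python with NumPy; the function names, synthetic data, and fixed iteration count are illustrative choices, not taken from this article) implements the proximal gradient iteration for the lasso problem above, where the proximity operator of the ℓ1 term is the soft-thresholding map:

    import numpy as np

    def soft_threshold(v, tau):
        # Proximity operator of tau * ||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam, n_iter=500):
        # Proximal gradient iteration for 0.5*||Ax - b||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
        t = 1.0 / L                        # fixed step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                    # gradient step on the smooth term
            x = soft_threshold(x - t * grad, t * lam)   # prox step on the non-smooth term
        return x

    # Small synthetic example: recover a sparse vector from noiseless measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    x_hat = ista(A, A @ x_true, lam=0.1)

With the step size no larger than the reciprocal of the Lipschitz constant of the smooth term's gradient, this iteration is exactly the iterative shrinkage-thresholding algorithm cited in the notes.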

For the theory of proximal gradient methods from the perspective of statistical learning theory, and for their applications to it, see proximal gradient methods for learning.

Projection onto convex sets (POCS)

One of the most widely used convex optimization algorithms is projection onto convex sets (POCS). This algorithm is employed to recover or synthesize a signal that simultaneously satisfies several convex constraints. Let f_i be the indicator function of a non-empty closed convex set C_i modeling a constraint. This reduces the problem to a convex feasibility problem, which requires finding a point in the intersection of all the sets C_i. In the POCS method each set C_i is incorporated via its projection operator P_{C_i}, so in each iteration x is updated as

x_{k+1} = P_{C_1} P_{C_2} \cdots P_{C_n} x_k
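
A minimal sketch of this iteration, assuming only two constraint sets with closed-form projections (a box and a hyperplane; the set choices and helper names are illustrative, not from the article), might look as follows in Python with NumPy:

    import numpy as np

    def project_box(x, lo=0.0, hi=1.0):
        # Projection onto the box [lo, hi]^n: clip each coordinate.
        return np.clip(x, lo, hi)

    def project_hyperplane(x, a, c):
        # Projection onto the hyperplane {x : a^T x = c}.
        return x - ((a @ x - c) / (a @ a)) * a

    def pocs(x0, a, c, n_iter=100):
        # POCS iteration x_{k+1} = P_{C_1} P_{C_2} x_k, with C_1 the box and C_2 the hyperplane.
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(n_iter):
            x = project_box(project_hyperplane(x, a, c))
        return x

    # Find a point in the intersection of [0, 1]^3 and {x : x_1 + x_2 + x_3 = 1.5}.
    x = pocs(np.array([5.0, 5.0, 5.0]), a=np.ones(3), c=1.5)
    print(x)  # approximately [0.5, 0.5, 0.5]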

However, beyond such feasibility problems projection operators are no longer appropriate, and more general operators are required. Among the various generalizations of the notion of a convex projection operator, proximity operators are best suited for this purpose.
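
For reference, the proximity operator of a convex function f (a standard definition, not restated elsewhere in this snapshot) is

\operatorname{prox}_f(v) = \arg\min_{x \in \mathbb{R}^N} \left( f(x) + \tfrac{1}{2} \| x - v \|_2^2 \right),

and when f is the indicator function of a non-empty closed convex set C, \operatorname{prox}_f reduces to the projection P_C, which is why proximal methods subsume POCS.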

Examples

Special instances of proximal gradient methods include:

  • Iterative shrinkage-thresholding algorithm (ISTA)
  • Projected Landweber
  • Projected gradient
  • Alternating projections
  • Alternating-direction method of multipliers (ADMM)
  • Alternating split Bregman

Notes

  1. Daubechies, I; Defrise, M; De Mol, C (2004). "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint". Communications on Pure and Applied Mathematics. 57 (11): 1413–1457. arXiv:math/0307152. Bibcode:2003math......7152D. doi:10.1002/cpa.20042.
  2. Details of proximal methods are discussed in Combettes, Patrick L.; Pesquet, Jean-Christophe (2009). "Proximal Splitting Methods in Signal Processing". arXiv:0912.3522.

References

  • Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
  • Combettes, Patrick L.; Pesquet, Jean-Christophe (2011). "Proximal Splitting Methods in Signal Processing". Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Vol. 49. New York: Springer. pp. 185–212.
