
Barzilai-Borwein method

Mathematical optimization method

The Barzilai-Borwein method is an iterative gradient descent method for unconstrained optimization that uses either of two step sizes derived from the linear trend of the two most recent iterates. The method, and its modifications, are globally convergent under mild conditions and perform competitively with conjugate gradient methods for many problems. Because the step sizes do not depend on the objective function itself, only on the gradient, the method can also be used to solve some systems of linear and non-linear equations.

Method

To minimize a convex function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ with gradient vector $g(x)$ at point $x$, let there be two prior iterates $x_{k-1}$ and $x_k$ with gradients $g_{k-1}=g(x_{k-1})$ and $g_k=g(x_k)$, in which $x_k = x_{k-1} - \alpha_{k-1} g_{k-1}$, where $\alpha_{k-1}$ is the previous iteration's step size (not necessarily a Barzilai-Borwein step size). For brevity, let $\Delta x = x_k - x_{k-1}$ and $\Delta g = g_k - g_{k-1}$.

A Barzilai-Borwein (BB) iteration is $x_{k+1} = x_k - \alpha_k g_k$, where the step size $\alpha_k$ is either

$\alpha_k^{LONG} = \dfrac{\Delta x \cdot \Delta x}{\Delta x \cdot \Delta g}$, or

$\alpha_k^{SHORT} = \dfrac{\Delta x \cdot \Delta g}{\Delta g \cdot \Delta g}$.

Barzilai-Borwein also applies to systems of equations $g(x) = 0$ for $g:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ in which the Jacobian of $g$ is positive-definite in its symmetric part, so that $\Delta x \cdot \Delta g$ is necessarily positive.
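The iteration above can be written as a short routine. The following is a minimal sketch in Python (not from the original article); the function name, the choice between the long and short step via a flag, the bootstrap step size alpha0, and the lack of any safeguard against a non-positive or undefined step are illustrative assumptions, not part of the method's specification.

import numpy as np

def bb_minimize(grad, x0, alpha0=1e-4, max_iter=500, tol=1e-8, long_step=True):
    """Minimize a function given its gradient `grad` using Barzilai-Borwein steps.

    The first step is a plain gradient-descent step with step size alpha0,
    since a BB step needs two prior iterates.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev              # bootstrap step (not a BB step)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        dx = x - x_prev                       # Delta x
        dg = g - g_prev                       # Delta g
        if long_step:
            alpha = dx.dot(dx) / dx.dot(dg)   # long BB step
        else:
            alpha = dx.dot(dg) / dg.dot(dg)   # short BB step
        x_prev, g_prev = x, g
        x = x - alpha * g                     # BB iteration
    return x

# Example: minimize the quadratic f(x) = 1/2 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_minimize(lambda x: A @ x - b, x0=np.zeros(2))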

Derivation

Despite its simplicity and optimality properties, Cauchy's classical steepest-descent method for unconstrained optimization often performs poorly. This has motivated many researchers to propose alternate search directions, such as the conjugate gradient method. Jonathan Barzilai and Jonathan Borwein instead proposed new step sizes for the gradient direction by approximating the quasi-Newton method: the Hessian is replaced by a scalar approximation estimated from the finite difference of the gradient between two evaluation points, these being the two most recent iterates.

In a quasi-Newton iteration,

$x_{k+1} = x_k - B^{-1} g(x_k)$

where $B$ is some approximation of the Jacobian matrix of $g$ (i.e. the Hessian of the objective function) which satisfies the secant equation $B_k \Delta x_k = \Delta g_k$. Barzilai and Borwein simplify $B$ to a scalar $1/\alpha$, which usually cannot satisfy the secant equation exactly but approximates it as $\frac{1}{\alpha}\Delta x \approx \Delta g$. Two least-squares criteria yield the two step sizes:

Minimize $\|\Delta x/\alpha - \Delta g\|^{2}$ with respect to $\alpha$, yielding the long BB step, or

Minimize $\|\Delta x - \alpha \Delta g\|^{2}$ with respect to $\alpha$, yielding the short BB step.
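Setting the derivative of each least-squares objective to zero gives the two step sizes stated above; this short verification is added here for clarity and is not in the original article:

$$\frac{\Delta x \cdot \Delta x}{\alpha} = \Delta x \cdot \Delta g \;\Rightarrow\; \alpha^{LONG} = \frac{\Delta x \cdot \Delta x}{\Delta x \cdot \Delta g}, \qquad \Delta x \cdot \Delta g = \alpha\,(\Delta g \cdot \Delta g) \;\Rightarrow\; \alpha^{SHORT} = \frac{\Delta x \cdot \Delta g}{\Delta g \cdot \Delta g}.$$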

Properties

In one dimension, both BB step sizes are equal and coincide with the step of the classical secant method.

The long BB step size is the same as a linearized Cauchy step, i.e. the first estimate of a secant-method line search (likewise for linear problems). The short BB step size is the same as a linearized minimum-residual step. BB applies each step size to the forward direction vector for the next iterate, rather than to the prior direction vector as another line-search step would.

Barzilai and Borwein proved their method converges R-superlinearly for quadratic minimization in two dimensions. Raydan demonstrated convergence in general for quadratic problems. Convergence is usually non-monotone; that is, neither the objective function nor the residual or gradient magnitude necessarily decreases with each iteration along a successful convergence toward the solution.

If $f$ is a quadratic function with Hessian $A$, then $1/\alpha^{LONG}$ is the Rayleigh quotient of $A$ by the vector $\Delta x$, and $1/\alpha^{SHORT}$ is the Rayleigh quotient of $A$ by the vector $\sqrt{A}\,\Delta x$ (here taking $\sqrt{A}$ as a solution to $(\sqrt{A})^{T}\sqrt{A} = A$; see Definite matrix).
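For a quadratic objective, $\Delta g = A\,\Delta x$, so these identities follow by substitution (taking $\sqrt{A}$ to be the symmetric square root for the second calculation); the verification below is added for clarity and is not in the original article:

$$\frac{1}{\alpha^{LONG}} = \frac{\Delta x \cdot \Delta g}{\Delta x \cdot \Delta x} = \frac{\Delta x^{T} A\, \Delta x}{\Delta x^{T} \Delta x}, \qquad \frac{1}{\alpha^{SHORT}} = \frac{\Delta g \cdot \Delta g}{\Delta x \cdot \Delta g} = \frac{\Delta x^{T} A^{2} \Delta x}{\Delta x^{T} A\, \Delta x} = \frac{(\sqrt{A}\,\Delta x)^{T} A\, (\sqrt{A}\,\Delta x)}{(\sqrt{A}\,\Delta x)^{T} (\sqrt{A}\,\Delta x)}.$$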

Fletcher compared its computational performance to that of conjugate gradient (CG) methods, finding that CG tends to be faster for linear problems, but that BB is often faster than applicable CG-based methods for non-linear problems.

BB has low storage requirements, making it suitable for large systems with millions of elements in $x$.

$\dfrac{\alpha^{SHORT}}{\alpha^{LONG}} = \cos^{2}\theta$, where $\theta$ is the angle between $\Delta x$ and $\Delta g$.
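This follows directly from the definitions of the two step sizes:

$$\frac{\alpha^{SHORT}}{\alpha^{LONG}} = \frac{\Delta x \cdot \Delta g}{\Delta g \cdot \Delta g} \cdot \frac{\Delta x \cdot \Delta g}{\Delta x \cdot \Delta x} = \frac{(\Delta x \cdot \Delta g)^{2}}{\|\Delta x\|^{2}\,\|\Delta g\|^{2}} = \cos^{2}\theta.$$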

Modifications and related methods

Following Raydan's demonstration, BB is often applied with the non-monotone safeguarding strategy of Grippo, Lampariello, and Lucidi. This tolerates some rise of the objective, but excessive rise triggers a backtracking line search with smaller step sizes to assure global convergence. Fletcher finds that allowing wider limits for non-monotonicity tends to result in more efficient convergence.
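The Python sketch below (not from the original article) illustrates one way such a non-monotone safeguard can be combined with BB steps; the window length M, the sufficient-decrease constant gamma, the halving backtracking factor, and the fallback step size are illustrative assumptions in the spirit of the Grippo-Lampariello-Lucidi strategy, not the exact parameters of any published method.

import numpy as np

def bb_nonmonotone(f, grad, x0, alpha0=1e-4, M=10, gamma=1e-4, max_iter=500, tol=1e-8):
    """BB gradient method with a non-monotone backtracking safeguard.

    A step is accepted if the new objective value lies below the maximum of the
    last M accepted values minus a small sufficient-decrease term; otherwise
    the step size is halved (backtracking).
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev                  # bootstrap gradient step
    recent_f = [f(x_prev), f(x)]                  # history of accepted objective values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        dx, dg = x - x_prev, g - g_prev
        alpha = dx.dot(dx) / dx.dot(dg)           # long BB step
        if not np.isfinite(alpha) or alpha <= 0:  # fall back when the BB step is unusable
            alpha = alpha0
        f_ref = max(recent_f[-M:])                # non-monotone reference value
        while f(x - alpha * g) > f_ref - gamma * alpha * g.dot(g):
            alpha *= 0.5                          # backtrack on excessive rise
        x_prev, g_prev = x, g
        x = x - alpha * g
        recent_f.append(f(x))
    return x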

Others have identified a step size that is the geometric mean of the long and short BB step sizes, which exhibits similar properties.

References

  1. Barzilai, Jonathan; Borwein, Jonathan M. (1988). "Two-Point Step Size Gradient Methods". IMA Journal of Numerical Analysis. 8: 141–148. doi:10.1093/imanum/8.1.141.
  2. Raydan, Marcos (1993). "On the Barzilai and Borwein choice of steplength for the gradient method". IMA Journal of Numerical Analysis. 13 (3): 321–326. doi:10.1093/imanum/13.3.321. hdl:1911/101676.
  3. Raydan, M. (1997). "The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem". SIAM Journal on Optimization. 7: 26–33.
  4. Fletcher, R. (2005). "On the Barzilai–Borwein Method". In Qi, L.; Teo, K.; Yang, X. (eds.). Optimization and Control with Applications. Applied Optimization. Vol. 96. Boston: Springer. pp. 235–256. ISBN 0-387-24254-6.
  5. Cauchy, A. (1847). "Méthode générale pour la résolution des systèmes d'équations simultanées". C. R. Acad. Sci. Paris. 25: 536–538.
  6. Akaike, H. (1959). "On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method". Annals of the Institute of Statistical Mathematics, Tokyo. 11: 1–17.
  7. Grippo, L.; Lampariello, F.; Lucidi, S. (1986). "A nonmonotone line search technique for Newton's method". SIAM Journal on Numerical Analysis. 23: 707–716.
  8. Varadhan, R.; Roland, C. (2008). "Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm". Scandinavian Journal of Statistics. 35 (2): 335–353.
  9. Dai, Y. H.; Al-Baali, M.; Yang, X. (2015). "A positive Barzilai–Borwein-like stepsize and an extension for symmetric linear systems". In Numerical Analysis and Optimization. Cham, Switzerland: Springer. pp. 59–75.
  10. Dai, Yu-Hong; Huang, Yakui; Liu, Xin-Wei (2018). "A family of spectral gradient methods for optimization". arXiv:1812.02974.
  11. Huang, Shuai; Wan, Zhong (2017). "A new nonmonotone spectral residual method for nonsmooth nonlinear equations". Journal of Computational and Applied Mathematics. 313: 82–101.
