Collinear gradients method (ColGM) is an iterative method of directional search for a local extremum of a smooth multivariate function f(x). It moves towards the extremum along a vector d = y − x chosen so that the gradients ∇f(y) and ∇f(x) are collinear vectors. It is a first-order method (it uses only first derivatives) with a quadratic convergence rate. It can be applied to functions of high dimension with several local extrema. ColGM can be attributed to the family of truncated Newton methods.
The concept of the method
For a smooth function f, in a relatively large vicinity of a point x_k there is a point y_k at which the gradients ∇f(y_k) and ∇f(x_k) are collinear vectors. The direction towards the extremum from the point x_k is determined by the vector d_k = y_k − x_k. The vector d_k points to the maximum or to the minimum, depending on the position of the point y_k: it can lie in front of or behind x_k relative to the direction to the extremum (see the picture). Below, minimization is considered.
The next iteration of ColGM:
(1)
where the optimal step multiplier λ_k is found analytically from the assumption that the one-dimensional function f(x_k + λ d_k) is quadratic in λ:
(2)
Angle brackets ⟨·,·⟩ denote the inner product in Euclidean space. If f is a convex function in the vicinity of x_k, the optimal λ_k has one sign for a front point y_k and the opposite sign for a back point. In either case, we take the step (1).
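The formulas (1) and (2) themselves are not legible in this snapshot. A plausible reconstruction, consistent with the surrounding description and with the notation d_k = y_k − x_k introduced above, is the following (an assumption, not a verified transcription of the original equations):

```latex
x_{k+1} = x_k + \lambda_k d_k, \qquad d_k = y_k - x_k,
\qquad
\lambda_k = \frac{\langle \nabla f(x_k),\, d_k \rangle}
                 {\langle \nabla f(x_k) - \nabla f(y_k),\, d_k \rangle}.
```

Here λ_k is the exact minimizer of the one-dimensional quadratic model φ(λ) = f(x_k + λ d_k), obtained by interpolating φ′(λ) linearly between φ′(0) = ⟨∇f(x_k), d_k⟩ and φ′(1) = ⟨∇f(y_k), d_k⟩ and setting the result to zero.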
For a strictly convex quadratic function f, the ColGM step is x_{k+1} = x_k − H⁻¹∇f(x_k), i.e. it is a Newton step (a second-order method with a quadratic convergence rate), where H is the Hessian matrix. Such steps ensure the quadratic convergence rate of ColGM.
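A brief check of this claim under the reconstruction above (again an assumption about the exact formulas): for f(x) = ½ xᵀH x + bᵀx with H positive definite, collinearity ∇f(y_k) = β ∇f(x_k) for some scalar β ≠ 1 gives

```latex
H y_k + b = \beta\,(H x_k + b)
\;\Longrightarrow\;
d_k = y_k - x_k = (\beta - 1)\, H^{-1} \nabla f(x_k),
\qquad
\lambda_k = \frac{(\beta-1)\,\langle \nabla f(x_k),\, H^{-1}\nabla f(x_k)\rangle}
                 {-(\beta-1)^2\,\langle \nabla f(x_k),\, H^{-1}\nabla f(x_k)\rangle}
          = \frac{-1}{\beta - 1},
```

so that λ_k d_k = −H⁻¹∇f(x_k) and the step (1) is exactly the Newton step, regardless of which collinear point y_k was found.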
In general, if f has variable convexity and saddle points are possible, then the minimization direction should be checked using the angle between the vectors ∇f(x_k) and d_k. If this angle shows that d_k is a direction of maximization, then λ_k in (1) should be taken with the opposite sign.
Search for collinear gradients
Collinearity of the gradients is estimated by the residual of their directions, which takes the form of a system of equations whose root y_k is sought:
(3)
where the sign ± allows us to evaluate the collinearity of the gradients in the same way whether they are co-directional or oppositely directed.
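The exact form of system (3) is not legible in this snapshot. Below is a sketch of one plausible residual with the property just described, written in Python; the normalized-gradient form and the names residual and grad_f are illustrative assumptions, not the article's own.

```python
import numpy as np

def residual(grad_f, x, y):
    """One plausible collinearity residual: the difference of the unit
    gradient directions at y and at x, with the sign chosen so that
    co-directional and oppositely directed gradients are treated alike.
    This is an assumed form, not a verified transcription of (3)."""
    g_x = grad_f(x)
    g_y = grad_f(y)
    u_x = g_x / np.linalg.norm(g_x)
    u_y = g_y / np.linalg.norm(g_y)
    sign = 1.0 if np.dot(u_x, u_y) >= 0 else -1.0   # the "+/-" of the article
    return u_y - sign * u_x                          # zero iff the gradients are collinear
```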
System (3) is solved iteratively (sub-iterations) by the conjugate gradient method, assuming that the system is linear within the r-vicinity of x_k:
(4)
where the product of the Hessian matrix with a vector is found by numerical differentiation:
(5)
where δ is a small positive number.
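The finite-difference formula (5) is not legible here, but the technique it describes (a Hessian-vector product from two gradient evaluations) is standard. A minimal sketch follows; the step-size rule for δ is an assumption, since the article's condition on δ is not readable in this snapshot.

```python
import numpy as np

def hessian_vector_product(grad_f, x, p, delta=None):
    """Approximate H(x) @ p using only gradients (forward difference):
    H p ≈ (grad_f(x + delta*p) - grad_f(x)) / delta.
    The default choice of delta below is an assumption, not the article's rule."""
    if delta is None:
        delta = np.sqrt(np.finfo(float).eps) / max(np.linalg.norm(p), 1.0)
    return (grad_f(x + delta * p) - grad_f(x)) / delta
```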
The initial approximation is set at 45° to all coordinate axes and with the prescribed length:
(6)
The initial radius of the vicinity of the point x_k is set and subsequently modified:
(7)
The small positive number appearing here must be noticeably larger than the machine epsilon.
Sub-iterations terminate when at least one of the conditions is met:
— accuracy achieved;
— convergence has stopped;
— redundancy of sub-iterations.
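The thresholds in these conditions are not readable in this snapshot. A minimal sketch of the three tests, with assumed parameter names (eps_sub and j_max are placeholders, not the article's symbols):

```python
def stop_subiterations(res_norm, prev_res_norm, j, eps_sub=1e-9, j_max=50):
    """Return True if the sub-iterations should terminate.
    eps_sub and j_max are assumed placeholders for the article's tolerances."""
    accuracy_achieved   = res_norm <= eps_sub          # accuracy achieved
    convergence_stopped = res_norm >= prev_res_norm    # convergence has stopped
    too_many            = j >= j_max                   # redundancy of sub-iterations
    return accuracy_achieved or convergence_stopped or too_many
```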
Algorithm for choosing the minimization direction
Parameters: ….
Input data: ….
1. If …, then set the radius from (7).
2. Find the initial approximation from (6).
3. Calculate …. Find … from (3) when ….
4. If … or … or … or {… and …}, then set …, return the direction … and …, stop.
5. If …, then set …, else ….
6. Find ….
7. Search for …:
7.1. Memorize …;
7.2. Find …. Calculate … and …. Find … from (5) and assign …;
7.3. If …, then … and return to step 7.2;
7.4. Restore …;
7.5. Set ….
8. Perform the sub-iteration from (4).
9. …; go to step 3.
The parameter …. For functions without saddle points, we recommend …, …. To "bypass" saddle points, we recommend …, ….
The described algorithm allows us to find approximately collinear gradients from the system of equations (3). The resulting direction for the ColGM step (1) is therefore an approximate Newton direction, as in truncated Newton methods.
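A compact sketch of the outer ColGM loop built from the reconstructed formulas (1) and (2). The sub-iteration machinery (3)–(7) is not recoverable from this snapshot, so the collinear-point search is passed in as a callable; in the usage example it is replaced by an exact construction that is valid only for quadratic functions (for a quadratic, every point along the Newton direction has a gradient collinear with the current one). All function names here are illustrative, not the article's.

```python
import numpy as np

def colgm_step(x, y, grad_f):
    """One ColGM step: move from x along d = y - x with the analytic
    multiplier of the one-dimensional quadratic model (reconstructed
    forms of (1) and (2), not a verified transcription)."""
    g_x, g_y = grad_f(x), grad_f(y)
    d = y - x
    lam = np.dot(g_x, d) / np.dot(g_x - g_y, d)
    if np.dot(g_x, lam * d) > 0:       # safeguard for variable convexity:
        lam = -lam                     # keep the actual move a descent step
    return x + lam * d

def colgm(x0, grad_f, find_collinear_point, tol=1e-8, max_iter=50):
    """Outer ColGM loop.  find_collinear_point(grad_f, x) stands in for the
    sub-iterations (3)-(7) and must return a point y whose gradient is
    (approximately) collinear with grad_f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(grad_f(x)) < tol:
            break
        y = find_collinear_point(grad_f, x)
        x = colgm_step(x, y, grad_f)
    return x

# Usage on a strictly convex quadratic f(x) = 1/2 x^T A x + b^T x:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 2.0])
grad_f = lambda x: A @ x + b
# For a quadratic, a short move along the Newton direction gives a collinear point.
collinear_point = lambda g, x: x - 0.1 * np.linalg.solve(A, g(x))
x_min = colgm(np.array([5.0, -3.0]), grad_f, collinear_point)
print(x_min, np.linalg.solve(A, -b))   # ColGM lands on the minimizer in one step
```

With an exact inner search the step reduces to Newton's method, as shown above for the quadratic case; with an approximate search it behaves like a truncated Newton method, which is the article's point here.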
Demonstrations
In all the demonstrations, ColGM shows convergence no worse than, and sometimes (for functions of variable convexity) better than, Newton's method.
The "rotated ellipsoid" test function
A strictly convex quadratic function:
In the drawing, three black starting points are set. The gray dots are the sub-iterations of the collinear-point search (shown as a dotted line, inflated for demonstration). Parameters: …, …. One iteration sufficed from every starting point, with no more than two sub-iterations.
For a higher dimension (parameter …) with the starting point …, ColGM reached the minimum with an accuracy of 1% in 3 iterations and 754 calculations of the function and its gradient. Other first-order methods: the quasi-Newton BFGS method (which works with matrices) required 66 iterations and 788 calculations; conjugate gradients (Fletcher–Reeves), 274 iterations and 2236 calculations; Newton's finite-difference method, 1 iteration and 1001 calculations. Newton's method of the second order required 1 iteration.
As the dimension increases, computational errors in satisfying the collinearity condition (3) may grow markedly. Because of this, ColGM required more than one iteration in this example, unlike Newton's method.
Parameters: …, …, …. The descent trajectory of ColGM completely coincides with that of Newton's method. In the drawing, the blue starting point is …, and the red one is …. Unit vectors of the gradient are drawn at each point.
ColGM is very economical in terms of the number of calculations of the function and its gradient. Due to formula (2), it does not require expensive computation of the step multiplier by a line search (for example, golden-section search).