
Nelder–Mead method

Not to be confused with Dantzig's simplex algorithm for linear optimization.
[Figure: An iteration of the Nelder–Mead method over two-dimensional space.]
[Figure: Search over the Rosenbrock banana function.]
[Figure: Search over Himmelblau's function.]
[Figure: Nelder–Mead minimum search of Simionescu's function. Simplex vertices are ordered by their value, with 1 having the lowest (best) value.]

The Nelder–Mead method (also downhill simplex method, amoeba method, or polytope method) is a numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is a direct search method (based on function comparison) and is often applied to nonlinear optimization problems for which derivatives may not be known. However, the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points on problems that can be solved by alternative methods.

The Nelder–Mead technique was proposed by John Nelder and Roger Mead in 1965, as a development of the method of Spendley et al.

Overview

The method uses the concept of a simplex, which is a special polytope of n + 1 vertices in n dimensions. Examples of simplices include a line segment in one-dimensional space, a triangle in two-dimensional space, a tetrahedron in three-dimensional space, and so forth.

The method approximates a local optimum of a problem with n variables when the objective function varies smoothly and is unimodal. Typical implementations minimize functions, and we maximize $f(\mathbf{x})$ by minimizing $-f(\mathbf{x})$.
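For instance, a practical way to apply the method is through SciPy's implementation; the sketch below (assuming SciPy and NumPy are installed, with a made-up objective chosen only for illustration) maximizes a function by minimizing its negative:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical objective to maximize: a smooth, unimodal "bump" centred at (1, 1).
    def f(x):
        return np.exp(-np.sum((x - 1.0) ** 2))

    # Nelder-Mead minimizes, so maximize f by minimizing -f.
    result = minimize(lambda x: -f(x), x0=np.zeros(2), method="Nelder-Mead")
    print(result.x)      # approximate maximizer, near [1, 1]
    print(-result.fun)   # approximate maximum value of f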

For example, a suspension bridge engineer has to choose how thick each strut, cable, and pier must be. These elements are interdependent, but it is not easy to visualize the impact of changing any specific element. Simulation of such complicated structures is often extremely computationally expensive, possibly taking hours per run. The Nelder–Mead method requires, in the original variant, no more than two evaluations per iteration, except for the shrink operation described later, which is attractive compared to some other direct-search optimization methods. However, the overall number of iterations needed to reach a proposed optimum may be high.

Nelder–Mead in n dimensions maintains a set of n + 1 test points arranged as a simplex. It then extrapolates the behavior of the objective function measured at each test point in order to find a new test point and to replace one of the old test points with the new one, and so the technique progresses. The simplest approach is to replace the worst point with a point reflected through the centroid of the remaining n points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the simplex towards a better point. An intuitive explanation of the algorithm from "Numerical Recipes":

The downhill simplex method now takes a series of steps, most steps just moving the point of the simplex where the function is largest (“highest point”) through the opposite face of the simplex to a lower point. These steps are called reflections, and they are constructed to conserve the volume of the simplex (and hence maintain its nondegeneracy). When it can do so, the method expands the simplex in one or another direction to take larger steps. When it reaches a “valley floor”, the method contracts itself in the transverse direction and tries to ooze down the valley. If there is a situation where the simplex is trying to “pass through the eye of a needle”, it contracts itself in all directions, pulling itself in around its lowest (best) point.
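As a rough illustration of the basic reflection move described above (replacing the worst vertex by its reflection through the centroid of the remaining points), here is a minimal sketch in Python with NumPy; the function name and the best-to-worst row ordering are assumptions made for this example:

    import numpy as np

    def reflect_worst(simplex, alpha=1.0):
        """Reflect the worst vertex (last row) through the centroid of the rest."""
        centroid = simplex[:-1].mean(axis=0)           # centroid of all but the worst point
        return centroid + alpha * (centroid - simplex[-1])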

Unlike modern optimization methods, the Nelder–Mead heuristic can converge to a non-stationary point, unless the problem satisfies stronger conditions than are necessary for modern methods. Modern improvements over the Nelder–Mead heuristic have been known since 1979.

Many variations exist depending on the actual nature of the problem being solved. A common variant uses a constant-size, small simplex that roughly follows the gradient direction (which gives steepest descent). Visualize a small triangle on an elevation map flip-flopping its way down a valley to a local bottom. This method is also known as the flexible polyhedron method. This, however, tends to perform poorly against the method described in this article because it makes small, unnecessary steps in areas of little interest.

One possible variation of the NM algorithm

(This approximates the procedure in the original Nelder–Mead article.)

[Figure: Nelder–Mead method applied to the Rosenbrock function.]

We are trying to minimize the function $f(\mathbf{x})$, where $\mathbf{x} \in \mathbb{R}^{n}$. Our current test points are $\mathbf{x}_1, \ldots, \mathbf{x}_{n+1}$.

  1. Order according to the values at the vertices:
    $f(\mathbf{x}_1) \leq f(\mathbf{x}_2) \leq \cdots \leq f(\mathbf{x}_{n+1}).$
    Check whether the method should stop. See Termination (sometimes called "convergence").
  2. Calculate $\mathbf{x}_o$, the centroid of all points except $\mathbf{x}_{n+1}$.
  3. Reflection
    Compute the reflected point $\mathbf{x}_r = \mathbf{x}_o + \alpha(\mathbf{x}_o - \mathbf{x}_{n+1})$ with $\alpha > 0$.
    If the reflected point is better than the second worst, but not better than the best, i.e. $f(\mathbf{x}_1) \leq f(\mathbf{x}_r) < f(\mathbf{x}_n)$,
    then obtain a new simplex by replacing the worst point $\mathbf{x}_{n+1}$ with the reflected point $\mathbf{x}_r$, and go to step 1.
  4. Expansion
    If the reflected point is the best point so far, $f(\mathbf{x}_r) < f(\mathbf{x}_1)$,
    then compute the expanded point $\mathbf{x}_e = \mathbf{x}_o + \gamma(\mathbf{x}_r - \mathbf{x}_o)$ with $\gamma > 1$.
    If the expanded point is better than the reflected point, $f(\mathbf{x}_e) < f(\mathbf{x}_r)$,
    then obtain a new simplex by replacing the worst point $\mathbf{x}_{n+1}$ with the expanded point $\mathbf{x}_e$ and go to step 1;
    else obtain a new simplex by replacing the worst point $\mathbf{x}_{n+1}$ with the reflected point $\mathbf{x}_r$ and go to step 1.
  5. Contraction
    Here it is certain that $f(\mathbf{x}_r) \geq f(\mathbf{x}_n)$. (Note that $\mathbf{x}_n$ is the second-worst, or "next-to-worst", point.)
    If $f(\mathbf{x}_r) < f(\mathbf{x}_{n+1})$,
    then compute the contracted point on the outside, $\mathbf{x}_c = \mathbf{x}_o + \rho(\mathbf{x}_r - \mathbf{x}_o)$ with $0 < \rho \leq 0.5$.
    If the contracted point is better than the reflected point, i.e. $f(\mathbf{x}_c) < f(\mathbf{x}_r)$,
    then obtain a new simplex by replacing the worst point $\mathbf{x}_{n+1}$ with the contracted point $\mathbf{x}_c$ and go to step 1;
    else go to step 6.
    If $f(\mathbf{x}_r) \geq f(\mathbf{x}_{n+1})$,
    then compute the contracted point on the inside, $\mathbf{x}_c = \mathbf{x}_o + \rho(\mathbf{x}_{n+1} - \mathbf{x}_o)$ with $0 < \rho \leq 0.5$.
    If the contracted point is better than the worst point, i.e. $f(\mathbf{x}_c) < f(\mathbf{x}_{n+1})$,
    then obtain a new simplex by replacing the worst point $\mathbf{x}_{n+1}$ with the contracted point $\mathbf{x}_c$ and go to step 1;
    else go to step 6.
  6. Shrink
    Replace all points except the best ($\mathbf{x}_1$) with
    $\mathbf{x}_i = \mathbf{x}_1 + \sigma(\mathbf{x}_i - \mathbf{x}_1)$ and go to step 1.

Note: $\alpha$, $\gamma$, $\rho$ and $\sigma$ are respectively the reflection, expansion, contraction and shrink coefficients. Standard values are $\alpha = 1$, $\gamma = 2$, $\rho = 1/2$ and $\sigma = 1/2$.
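Putting the steps together, the following is a minimal sketch of this variant in Python with NumPy, using the standard coefficient values above. The function name nelder_mead, the initial step size, the iteration budget, and the tolerance are illustrative choices for this sketch, not part of the original method:

    import numpy as np

    def nelder_mead(f, x_start, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5,
                    step=0.1, max_iter=500, tol=1e-8):
        """Sketch of the Nelder-Mead variant described above."""
        n = len(x_start)
        # Initial simplex: the given point plus a fixed step along each axis.
        simplex = [np.asarray(x_start, dtype=float)]
        for i in range(n):
            vertex = simplex[0].copy()
            vertex[i] += step
            simplex.append(vertex)
        simplex = np.array(simplex)

        for _ in range(max_iter):
            # 1. Order vertices by function value (best first, worst last).
            values = np.array([f(x) for x in simplex])
            order = np.argsort(values)
            simplex, values = simplex[order], values[order]

            # Termination: sample standard deviation of the function values.
            if np.std(values, ddof=1) < tol:
                break

            # 2. Centroid of all points except the worst.
            centroid = simplex[:-1].mean(axis=0)

            # 3. Reflection.
            x_r = centroid + alpha * (centroid - simplex[-1])
            f_r = f(x_r)
            if values[0] <= f_r < values[-2]:
                simplex[-1] = x_r
                continue

            # 4. Expansion.
            if f_r < values[0]:
                x_e = centroid + gamma * (x_r - centroid)
                simplex[-1] = x_e if f(x_e) < f_r else x_r
                continue

            # 5. Contraction (here f_r >= second-worst value).
            if f_r < values[-1]:
                x_c = centroid + rho * (x_r - centroid)          # outside contraction
                if f(x_c) < f_r:
                    simplex[-1] = x_c
                    continue
            else:
                x_c = centroid + rho * (simplex[-1] - centroid)  # inside contraction
                if f(x_c) < values[-1]:
                    simplex[-1] = x_c
                    continue

            # 6. Shrink toward the best point.
            simplex[1:] = simplex[0] + sigma * (simplex[1:] - simplex[0])

        # Return the best vertex found.
        values = np.array([f(x) for x in simplex])
        return simplex[np.argmin(values)]

For example, nelder_mead(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0]) should return a point close to (1, -2).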

For the reflection, since $\mathbf{x}_{n+1}$ is the vertex with the highest associated value among the vertices, we can expect to find a lower value at the reflection of $\mathbf{x}_{n+1}$ in the opposite face formed by all vertices $\mathbf{x}_i$ except $\mathbf{x}_{n+1}$.

For the expansion, if the reflection point $\mathbf{x}_r$ is the new minimum along the vertices, we can expect to find interesting values along the direction from $\mathbf{x}_o$ to $\mathbf{x}_r$.

Concerning the contraction, if $f(\mathbf{x}_r) > f(\mathbf{x}_n)$, we can expect that a better value will be inside the simplex formed by all the vertices $\mathbf{x}_i$.

Finally, the shrink handles the rare case that contracting away from the largest point increases $f$, something that cannot happen sufficiently close to a non-singular minimum. In that case we contract towards the lowest point in the expectation of finding a simpler landscape. However, Nash notes that finite-precision arithmetic can sometimes fail to actually shrink the simplex, and implemented a check that the size is actually reduced.
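One way such a safeguard might look is sketched below; this is a hypothetical check, not Nash's exact test, and it measures simplex size as the largest distance from the best vertex:

    import numpy as np

    def shrink_with_check(simplex, sigma=0.5):
        """Shrink toward the best vertex and report whether the simplex actually got smaller."""
        size_before = np.max(np.linalg.norm(simplex[1:] - simplex[0], axis=1))
        simplex[1:] = simplex[0] + sigma * (simplex[1:] - simplex[0])
        size_after = np.max(np.linalg.norm(simplex[1:] - simplex[0], axis=1))
        return simplex, size_after < size_before   # False can be treated as a termination signal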

Initial simplex

The initial simplex is important. Indeed, a too-small initial simplex can lead to a local search; consequently, the NM method can get stuck more easily. This simplex should therefore depend on the nature of the problem. However, the original article suggested a simplex where an initial point is given as $\mathbf{x}_1$, with the others generated with a fixed step along each dimension in turn. Thus the method is sensitive to the scaling of the variables that make up $\mathbf{x}$.
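A minimal sketch of this construction follows; the step size h is an illustrative choice, not a prescribed value:

    import numpy as np

    def initial_simplex(x1, h=0.1):
        """Build the simplex from x1 by stepping h along each coordinate axis in turn."""
        x1 = np.asarray(x1, dtype=float)
        vertices = [x1]
        for i in range(len(x1)):
            v = x1.copy()
            v[i] += h                  # fixed step along dimension i
            vertices.append(v)
        return np.array(vertices)      # shape (n + 1, n)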

Termination

Criteria are needed to break the iterative cycle. Nelder and Mead used the sample standard deviation of the function values of the current simplex. If this falls below some tolerance, then the cycle is stopped and the lowest point in the simplex is returned as a proposed optimum. Note that a very "flat" function may have almost equal function values over a large domain, so that the solution will be sensitive to the tolerance. Nash adds the test for shrinkage as another termination criterion. Note that programs terminate, while iterations may converge.
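A sketch of this stopping test, with an illustrative tolerance value:

    import numpy as np

    def should_stop(values, tol=1e-8):
        """Stop when the sample standard deviation of the simplex's function values
        falls below the tolerance."""
        return np.std(values, ddof=1) < tol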

References

  1. Nelder, John A.; Mead, R. (1965). "A simplex method for function minimization". Computer Journal. 7 (4): 308–313. doi:10.1093/comjnl/7.4.308.
  2. Spendley, W.; Hext, G. R.; Himsworth, F. R. (1962). "Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation". Technometrics. 4 (4): 441–461. doi:10.1080/00401706.1962.10490033.
  3. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 10.5. Downhill Simplex Method in Multidimensions". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
  4. Nash, J. C. (1979). Compact Numerical Methods: Linear Algebra and Function Minimisation. Bristol: Adam Hilger. ISBN 978-0-85274-330-0.
