
Golden-section search

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Technique for finding an extremum of a function
Diagram of a golden-section search. The initial triplet of x values is {x1, x2, x3}. If f(x4) = f4a, the triplet {x1, x2, x4} is chosen for the next iteration. If f(x4) = f4b, the triplet {x2, x4, x3} is chosen.

The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ, where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)).

Basic idea

The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum, three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated.
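
For instance, the bracketing condition can be stated as a small check (a sketch; the helper name is_bracket is purely illustrative):

def is_bracket(f, a, b, c):
    # Three points a < b < c bracket a minimum of f when the middle value
    # is no greater than the values at the two outer points.
    return a < b < c and f(b) <= f(a) and f(b) <= f(c)

# (x - 2)**2 has its minimum at x = 2, which lies inside (1, 5)
print(is_bracket(lambda x: (x - 2) ** 2, 1, 3, 5))  # True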

The diagram above illustrates a single step in the technique for finding a minimum. The functional values of f(x) are on the vertical axis, and the horizontal axis is the x parameter. The value of f(x) has already been evaluated at the three points x1, x2, and x3. Since f2 is smaller than either f1 or f3, it is clear that a minimum lies inside the interval from x1 to x3.

The next step in the minimization process is to "probe" the function by evaluating it at a new value of x, namely x4. It is most efficient to choose x4 somewhere inside the largest interval, i.e. between x2 and x3. From the diagram, it is clear that if the function yields f4a > f(x2), then a minimum lies between x1 and x4, and the new triplet of points will be x1, x2, and x4. However, if the function yields the value f4b < f(x2), then a minimum lies between x2 and x3, and the new triplet of points will be x2, x4, and x3. Thus, in either case, we can construct a new narrower search interval that is guaranteed to contain the function's minimum.
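
One step of this triplet update might be sketched as follows (an illustrative helper, not one of the article's implementations; the probe here is simply placed halfway into the larger gap, before the optimal golden-ratio placement is derived below):

def narrow_bracket(f, x1, x2, x3):
    # Assumes (x2, x3) is the larger gap; probe it, then keep the triplet
    # whose middle point has the smaller function value.
    x4 = x2 + 0.5 * (x3 - x2)   # placeholder probe position
    if f(x4) > f(x2):
        return (x1, x2, x4)     # minimum bracketed between x1 and x4
    else:
        return (x2, x4, x3)     # minimum bracketed between x2 and x3

print(narrow_bracket(lambda x: (x - 2) ** 2, 1, 1.8, 5))  # approximately (1, 1.8, 3.4)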

Probe point selection

From the diagram above, it is seen that the new search interval will be either between x1 and x4 with a length of a + c, or between x2 and x3 with a length of b. The golden-section search requires that these intervals be equal. If they are not, a run of "bad luck" could lead to the wider interval being used many times, thus slowing down the rate of convergence. To ensure that b = a + c, the algorithm should choose x4 = x1 + (x3 − x2).
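
A quick numeric check of this choice, using assumed values spaced roughly in the golden proportion:

x1, x2, x3 = 0.0, 0.382, 1.0   # a = x2 - x1 = 0.382, b = x3 - x2 = 0.618
x4 = x1 + (x3 - x2)            # = 0.618, so c = x4 - x2 = 0.236 and b = a + c
print(x4 - x1, x3 - x2)        # both candidate new intervals have length 0.618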

However, there still remains the question of where x2 should be placed in relation to x1 and x3. The golden-section search chooses the spacing between these points in such a way that these points have the same proportion of spacing as the subsequent triplet x1, x2, x4 or x2, x4, x3. By maintaining the same proportion of spacing throughout the algorithm, we avoid a situation in which x2 is very close to x1 or x3 and guarantee that the interval width shrinks by the same constant proportion in each step.

Mathematically, to ensure that the spacing after evaluating f(x4) is proportional to the spacing prior to that evaluation, if f(x4) is f4a and our new triplet of points is x1, x2, and x4, then we want

c / a = a / b.

However, if f(x4) is f4b and our new triplet of points is x2, x4, and x3, then we want

c / (b − c) = a / b.

Eliminating c from these two simultaneous equations yields

(b / a)² − (b / a) = 1,

or

b / a = φ,

where φ is the golden ratio:

φ = (1 + √5) / 2 = 1.618033988…

The appearance of the golden ratio in the proportional spacing of the evaluation points is how this search algorithm gets its name.
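
The relation is easy to check numerically; the constants 1/φ ≈ 0.618 and 1/φ² ≈ 0.382 used by the implementations below follow directly from it:

import math

phi = (1 + math.sqrt(5)) / 2
print(phi ** 2 - phi)        # approximately 1: phi solves (b/a)^2 - (b/a) = 1
print(1 / phi, 1 - 1 / phi)  # 0.618..., 0.381...: the interval fractions used below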

Termination condition

Any number of termination conditions may be applied, depending upon the application. The interval ΔX = X4 − X1 is a measure of the absolute error in the estimation of the minimum X and may be used to terminate the algorithm. The value of ΔX is reduced by a factor of r = φ − 1 for each iteration, so the number of iterations to reach an absolute error of ΔX is about ln(ΔX/ΔX0) / ln(r), where ΔX0 is the initial value of ΔX.
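
For example, the iteration count implied by this formula can be computed directly (a sketch assuming an initial bracket of width 4 and a target width of 1e-5):

import math

r = (math.sqrt(5) - 1) / 2       # r = phi - 1, the shrink factor per iteration
dx0, dx = 4.0, 1e-5              # initial and target bracket widths
n = math.ceil(math.log(dx / dx0) / math.log(r))
print(n)                         # about 27 iterations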

Because smooth functions are flat (their first derivative is close to zero) near a minimum, attention must be paid not to expect too great an accuracy in locating the minimum. The termination condition provided in the book Numerical Recipes in C is based on testing the gaps among x1, x2, x3 and x4, terminating when within the relative accuracy bounds

|x3 − x1| < τ (|x2| + |x4|),

where τ is a tolerance parameter of the algorithm, and |x| is the absolute value of x. The check is based on the bracket size relative to its central value because, for a typical (approximately quadratic) minimum, the error in f(x) is proportional to the square of the error in x, so x cannot be located more precisely than about the square root of the precision available in f(x). For that same reason, the Numerical Recipes text recommends τ = √ε, where ε is the required absolute precision of f(x).
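
A minimal sketch of this relative termination test (the function name and the use of machine epsilon for ε are assumptions, not taken verbatim from Numerical Recipes):

import sys

def bracket_converged(x1, x2, x3, x4, tau=None):
    # x1 and x3 are the outer points of the bracket; x2 and x4 are the interior
    # probes. Terminate when the bracket width is small relative to them.
    if tau is None:
        tau = sys.float_info.epsilon ** 0.5   # square root of double-precision epsilon
    return abs(x3 - x1) < tau * (abs(x2) + abs(x4))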

Algorithm

Note! The examples here describe an algorithm for finding the minimum of a function. To find a maximum, the comparison operators need to be reversed.

Iterative algorithm

Diagram of the golden section search for a minimum. The initial interval enclosed by X1 and X4 is divided into three intervals, and f is evaluated at each of the four Xi. The two intervals containing the minimum of f(Xi) are then selected, and a third interval and functional value are calculated, and the process is repeated until termination conditions are met. The three interval widths are always in the ratio c:cr:c where r = φ − 1 = 0.618... and c = 1 − r = 0.382..., φ being the golden ratio. This choice of interval ratios is the only one that allows the ratios to be maintained during an iteration.
  1. Specify the function to be minimized, f(x), the interval to be searched as {X1, X4}, and their functional values F1 and F4.
  2. Calculate an interior point and its functional value F2. The two interval lengths are in the ratio c : r or r : c where r = φ − 1; and c = 1 − r, with φ being the golden ratio.
  3. Using the triplet, determine if convergence criteria are fulfilled. If they are, estimate the X at the minimum from that triplet and return.
  4. From the triplet, calculate the other interior point and its functional value. The three intervals will be in the ratio c : cr : c.
  5. The three points for the next iteration will be the one where F is a minimum, and the two points closest to it in X.
  6. Go to step 3.
"""
Python program for golden section search.  This implementation
does not reuse function evaluations and assumes the minimum is c
or d (not on the edges at a or b)
"""
import math
invphi = (math.sqrt(5) - 1) / 2  # 1 / phi
def gss(f, a, b, tolerance=1e-5):
    """
    Golden-section search
    to find the minimum of f on [a, b]
    * f: a strictly unimodal function on [a, b]
    Example:
    >>> def f(x): return (x - 2) ** 2
    >>> x = gss(f, 1, 5)
    >>> print(f"{x:.5f}")
    2.00000
    """
    while b - a > tolerance:
        # c and d are interior probe points placed symmetrically, so the
        # bracket [a, b] shrinks by a factor of 1/phi on every iteration
        c = b - (b - a) * invphi
        d = a + (b - a) * invphi
        if f(c) < f(d):
            b = d
        else:  # to find the maximum, reverse this comparison (f(c) > f(d))
            a = c
    return (b + a) / 2
// a and c define range to search
// func(x) returns value of function at x to be minimized
function goldenSection(a, c, func) {
  // Interior point a golden-ratio fraction of the way from x1 to x2.
  function split(x1, x2) { return x1 + 0.6180339887498949*(x2-x1); }
  var b = split(a, c);    // interior point of the current bracket [a, c]
  var bv = func(b);       // best function value found so far
  while (a != c) {        // loop until the bracket collapses in floating point
    var x = split(a, b);  // new probe, keeping the interval widths in golden proportion
    var xv = func(x);
    if (xv < bv) {        // probe is better: keep [a, b], with x as the interior point
      bv = xv;
      c = b;
      b = x;
    }
    else {                // probe is no better: keep [x, c]; swapping the ends keeps split() symmetric
      a = c;
      c = x;
    }
  }
  return b;
}
function test(x) { return -Math.sin(x); }  // minimizing -sin(x) maximizes sin(x)
console.log(goldenSection(0, 3, test)); // prints approximately PI/2
"""
Python program for golden section search.  This implementation
does not reuse function evaluations and assumes the minimum is c
or d (not on the edges at a or b)
"""
import math
invphi = (math.sqrt(5) - 1) / 2  # 1 / phi
invphi2 = (3 - math.sqrt(5)) / 2  # 1 / phi^2
def gss(f, a, b, tolerance=1e-5):
    """
    Golden-section search.
    Given a function f with a single local minimum in
    the interval [a, b], gss returns a subset interval
    [c, d] that contains the minimum with d - c <= tolerance.
    Example:
    >>> def f(x): return (x - 2) ** 2
    >>> print(*gss(f, a=1, b=5, tolerance=1e-5))
    1.9999959837979107 2.0000050911830893
    """
    a, b = min(a, b), max(a, b)
    h = b - a
    if h <= tolerance:
        return (a, b)
    # Required steps to achieve tolerance
    n = int(math.ceil(math.log(tolerance / h) / math.log(invphi)))
    c, d = a + invphi2 * h, a + invphi * h
    yc, yd = f(c), f(d)
    for _ in range(n - 1):
        h *= invphi
        if yc < yd:
            b, d = d, c
            yd = yc
            c = a + invphi2 * h
            yc = f(c)
        else:  # to find the maximum, reverse this comparison (yc > yd)
            a, c = c, d
            yc = yd
            d = a + invphi * h
            yd = f(d)
    return (a, d) if yc < yd else (c, b)

Recursive algorithm

public class GoldenSectionSearch {
    public static final double invphi = (Math.sqrt(5.0) - 1) / 2.0;
    public static final double invphi2 = (3 - Math.sqrt(5.0)) / 2.0;
    public interface Function {
        double of(double x);
    }
    // Returns a subinterval of [a, b] containing the minimum of f
    public static double[] gss(Function f, double a, double b, double tol) {
        return gss(f, a, b, tol, b - a, true, 0, 0, true, 0, 0);
    }
    private static double[] gss(Function f, double a, double b, double tol,
                                double h, boolean noC, double c, double fc,
                                boolean noD, double d, double fd) {
        if (Math.abs(h) <= tol) {
            return new double[] { a, b };
        }
        }
        if (noC) {
            c = a + invphi2 * h;
            fc = f.of(c);
        }
        if (noD) {
            d = a + invphi * h;
            fd = f.of(d);
        }
        if (fc < fd) {  // to find the maximum, reverse this comparison (fc > fd)
            return gss(f, a, d, tol, h * invphi, true, 0, 0, false, c, fc);
        } else {
            return gss(f, c, b, tol, h * invphi, false, d, fd, true, 0, 0);
        }
    }
    public static void main(String[] args) {
        Function f = (x) -> Math.pow(x - 2, 2);
        double a = 1;
        double b = 5;
        double tol = 1e-5;
        double[] ans = gss(f, a, b, tol);
        System.out.println("[" + ans[0] + "," + ans[1] + "]");
        // prints a bracketing interval of width <= tol around the minimum at x = 2
    }
}
import math
invphi = (math.sqrt(5) - 1) / 2  # 1 / phi
invphi2 = (3 - math.sqrt(5)) / 2  # 1 / phi^2
def gssrec(f, a, b, tol=1e-5, h=None, c=None, d=None, fc=None, fd=None):
    """Golden-section search, recursive.
    Given a function f with a single local minimum in
    the interval , gss returns a subset interval
     that contains the minimum with d-c <= tol.
    Example:
    >>> f = lambda x: (x - 2) ** 2
    >>> a = 1
    >>> b = 5
    >>> tol = 1e-5
    >>> (c, d) = gssrec(f, a, b, tol)
    >>> print (c, d)
    1.9999959837979107 2.0000050911830893
    """
    (a, b) = (min(a, b), max(a, b))
    if h is None:
        h = b - a
    if h <= tol:
        return (a, b)
    if c is None:
        c = a + invphi2 * h
    if d is None:
        d = a + invphi * h
    if fc is None:
        fc = f(c)
    if fd is None:
        fd = f(d)
    if fc < fd:  # to find the maximum, reverse this comparison (fc > fd)
        return gssrec(f, a, d, tol, h * invphi, c=None, fc=None, d=c, fd=fc)
    else:
        return gssrec(f, c, b, tol, h * invphi, c=d, fc=fd, d=None, fd=None)

Related algorithms

Fibonacci search

Main article: Fibonacci search technique

A very similar algorithm can also be used to find the extremum (minimum or maximum) of a sequence of values that has a single local minimum or local maximum. In order to approximate the probe positions of golden section search while probing only integer sequence indices, the variant of the algorithm for this case typically maintains a bracketing of the solution in which the length of the bracketed interval is a Fibonacci number. For this reason, the sequence variant of golden section search is often called Fibonacci search.
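
As an illustration, a Fibonacci search over the indices of a unimodal sequence might look like the following sketch (a simplified, assumed implementation of the idea rather than Kiefer's original scheme; it also does not bother to reuse probe values):

def fib_min_index(seq):
    # Index of the minimum of a sequence with a single local minimum.
    n = len(seq)
    fib = [1, 1]
    while fib[-1] < n:               # smallest Fibonacci number covering the sequence
        fib.append(fib[-1] + fib[-2])
    def val(i):                      # pad the sequence up to a Fibonacci length
        return seq[i] if i < n else float("inf")
    lo, k = 0, len(fib) - 1          # bracket of fib[k] candidate indices starting at lo
    while fib[k] > 2:
        i = lo + fib[k - 2] - 1      # the two probes split the bracket in
        j = lo + fib[k - 1] - 1      # consecutive-Fibonacci proportions
        if val(i) < val(j):
            k -= 1                   # minimum lies left of j: keep the first fib[k-1] indices
        else:
            lo, k = i + 1, k - 1     # minimum lies right of i: keep the last fib[k-1] indices
    if fib[k] == 2 and val(lo + 1) < val(lo):
        return lo + 1
    return lo

print(fib_min_index([5, 3, 2, 4, 8, 9]))  # 2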

Fibonacci search was first devised by Kiefer (1953) as a minimax search for the maximum (minimum) of a unimodal function in an interval.

Bisection method

The bisection method is a similar algorithm for finding a zero of a function. Note that, for bracketing a zero, only two points are needed rather than three. The bracketing interval is halved in each step, rather than being reduced by the golden-ratio factor.
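
For comparison, a minimal bisection sketch (illustrative names and tolerance) for locating a sign change rather than a minimum:

def bisect_root(f, a, b, tol=1e-10):
    # Assumes f(a) and f(b) have opposite signs, so [a, b] brackets a root.
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # the sign change is in [a, m]
            b = m
        else:              # the sign change is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect_root(lambda x: x * x - 2, 0, 2))  # approximately 1.41421356 (sqrt 2)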
