Barrier function

This article is about the topic in optimization. For the topic in dynamical systems, see Barrier certificate. "Barrier method" redirects here; for the method of safer sex, see Barrier methods.

In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. A barrier function is also called an interior penalty function, as it is a penalty function that forces the solution to remain within the interior of the feasible region.

The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods.

Motivation

Consider the following constrained optimization problem:

minimize f(x)
subject to x ≤ b

where b is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as

minimize f(x) + c(x),
where c(x) = ∞ if x > b, and zero otherwise.

This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c, and therefore the objective function f(x) + c(x), is discontinuous, preventing the use of calculus to solve it.

A barrier function is a continuous approximation g to c that tends to infinity as x approaches b from below (that is, from within the feasible region). Using such a function, a new optimization problem is formulated, viz.

minimize f(x) + μg(x)

where μ > 0 is a free parameter. This problem is not equivalent to the original, but as μ approaches zero, it becomes an ever-better approximation.
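
As a concrete illustration, the sketch below solves the barrier subproblem for a decreasing sequence of μ and watches the minimizers approach the constrained solution. The choices f(x) = x², b = −1 and the logarithmic barrier −log(b − x) (introduced in the next section) are illustrative assumptions, and SciPy is used here only as a generic one-dimensional minimizer.

    # Sketch of the barrier idea under illustrative assumptions (not from the
    # article): f(x) = x**2, constraint x <= b with b = -1, logarithmic barrier.
    import numpy as np
    from scipy.optimize import minimize_scalar

    b = -1.0                       # constraint: x <= b
    f = lambda x: x ** 2           # objective to minimize
    g = lambda x: -np.log(b - x)   # barrier term, finite only in the interior x < b

    for mu in [1.0, 0.1, 0.01, 0.001]:
        # Minimize f(x) + mu*g(x) over an interval strictly inside the feasible region.
        res = minimize_scalar(lambda x: f(x) + mu * g(x),
                              bounds=(-10.0, b - 1e-9), method="bounded")
        print(f"mu = {mu:6.3f}   barrier minimizer x = {res.x:.6f}")

For μ = 1, 0.1, 0.01 and 0.001 the minimizers come out near −1.366, −1.048, −1.005 and −1.0005, converging to x = −1, the solution of the original problem (minimize x² subject to x ≤ −1).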

Logarithmic barrier function

For logarithmic barrier functions, g(x, b) is defined as −log(b − x) when x < b and +∞ otherwise (in one dimension; see below for a definition in higher dimensions). This essentially relies on the fact that log(t) tends to negative infinity as t tends to 0.

This introduces a gradient to the function being optimized which favors less extreme values of x (in this case values lower than b), while having relatively low impact on the function away from these extremes.
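
The short sketch below makes this concrete (the choice b = 0 is arbitrary, not taken from the article): the barrier's value and slope blow up near the boundary and are small far from it.

    # Numerical illustration of the one-dimensional logarithmic barrier
    # g(x) = -log(b - x); the choice b = 0 is arbitrary.
    import math

    b = 0.0
    for x in [-10.0, -1.0, -0.1, -0.01, -0.001]:   # moving toward the boundary x = b
        d = b - x                                   # distance to the boundary
        value = -math.log(d)                        # g(x)  = -log(b - x)
        slope = 1.0 / d                             # g'(x) = 1/(b - x)
        print(f"x = {x:8.3f}   g(x) = {value:8.3f}   g'(x) = {slope:10.3f}")

    # Far from the boundary the slope is tiny, so the barrier barely perturbs the
    # objective; as x nears b both the value and the slope diverge, pushing the
    # minimizer back into the interior.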

Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized.

Higher dimensions

Extending to higher dimensions is simple, provided each dimension is independent. For each variable xᵢ which should be limited to be strictly lower than bᵢ, add −log(bᵢ − xᵢ) to the barrier.
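
A minimal sketch of this componentwise construction (the particular vectors below are illustrative values, not from the article):

    # Componentwise logarithmic barrier for constraints x_i < b_i.
    import numpy as np

    def log_barrier(x, b):
        """Return sum_i -log(b_i - x_i), or +inf if any constraint is violated."""
        slack = b - x                       # must be strictly positive
        if np.any(slack <= 0):
            return np.inf
        return -np.sum(np.log(slack))

    b = np.array([1.0, 2.0, 3.0])
    print(log_barrier(np.array([0.0, 0.0, 0.0]), b))   # finite: well inside the region
    print(log_barrier(np.array([0.9, 1.9, 2.9]), b))   # larger: close to the boundary
    print(log_barrier(np.array([1.5, 0.0, 0.0]), b))   # inf: x_1 < b_1 is violated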

Formal definition

Minimize cᵀx subject to aᵢᵀx ≤ bᵢ, i = 1, …, m.

Assume the problem is strictly feasible, i.e., the set {x | Ax < b} is nonempty.

Define the logarithmic barrier

    g(x) = −∑ᵢ log(bᵢ − aᵢᵀx)   if Ax < b,
    g(x) = +∞                   otherwise,

where the sum runs over i = 1, …, m.
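
A sketch evaluating this barrier for an illustrative A and b follows; the gradient formula ∑ᵢ aᵢ/(bᵢ − aᵢᵀx), which interior-point methods work with, is an addition not stated above.

    # Logarithmic barrier for linear inequality constraints Ax < b (sketch).
    # The example data and the gradient computation are illustrative additions.
    import numpy as np

    def log_barrier(x, A, b):
        """Return g(x) = -sum_i log(b_i - a_i^T x) and its gradient,
        or (inf, None) outside the strictly feasible set."""
        slack = b - A @ x                   # s_i = b_i - a_i^T x, must be > 0
        if np.any(slack <= 0):
            return np.inf, None
        value = -np.sum(np.log(slack))
        grad = A.T @ (1.0 / slack)          # sum_i a_i / (b_i - a_i^T x)
        return value, grad

    # Constraints: x1 <= 1, x2 <= 1, -x1 - x2 <= 1 (a triangle).
    A = np.array([[ 1.0,  0.0],
                  [ 0.0,  1.0],
                  [-1.0, -1.0]])
    b = np.array([1.0, 1.0, 1.0])

    value, grad = log_barrier(np.array([0.0, 0.0]), A, b)
    print(value, grad)   # 0.0 [0. 0.]: the origin is the analytic center here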
