
Interval arithmetic


Figure: Tolerance function (turquoise) and interval-valued approximation (red)

Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds. Numerical methods based on interval arithmetic can guarantee reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic represents each value as a range of possibilities.

Mathematically, instead of working with an uncertain real-valued variable x, interval arithmetic works with an interval [a, b] that defines the range of values that x can have. In other words, any value of the variable x lies in the closed interval between a and b. A function f, when applied to x, produces an interval [c, d] which includes all the possible values for f(x) for all x ∈ [a, b].

Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems.

Introduction

The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset.

This treatment is typically limited to real intervals, so quantities in the form

[a, b] = { x ∈ ℝ | a ≤ x ≤ b },

where a = −∞ and b = +∞ are allowed. With one of a, b infinite, the interval is an unbounded interval; with both infinite, it is the extended real number line. Since a real number r can be interpreted as the degenerate interval [r, r], intervals and real numbers can be freely combined.

Example

Figure: Body mass index for a person 1.80 m tall in relation to body weight m (in kilograms)

Consider the calculation of a person's body mass index (BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned, and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person weighs exactly 80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, i.e. the interval [79.5, 80.5).

The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since BMI is continuous and strictly increasing as a function of weight, the true BMI must lie within the interval [24.537, 24.846]. Since the entire interval is less than 25, the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight.

The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range may include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion.

The BMI range in this example could be reported as [24.5, 24.9], since this interval is a superset of the calculated interval. It could not, however, be reported as [24.6, 24.8], as that interval does not contain all possible BMI values.

Multiple intervals

Figure: Body mass index for different weights in relation to height L (in meters)

Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [1.785, 1.795). Since the BMI increases monotonically with weight and decreases monotonically with height, the error interval can be calculated by substituting the lowest and highest values of each interval and then selecting the lowest and highest results as boundaries. The BMI must therefore lie in the interval

[79.5, 80.5) / [1.785, 1.795)² ⊆ [24.673, 25.266].

In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion.
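Because BMI is monotone in both variables, the bounds above can be reproduced by evaluating the formula at the appropriate endpoints. A short Python sketch, purely illustrative and without the outward rounding discussed later:

    # Endpoint evaluation of BMI = weight / height**2 over the measurement
    # intervals; valid because BMI increases with weight and decreases with height.
    w_lo, w_hi = 79.5, 80.5        # weight interval in kg
    h_lo, h_hi = 1.785, 1.795      # height interval in m

    bmi_lo = w_lo / h_hi ** 2      # lightest weight, tallest height
    bmi_hi = w_hi / h_lo ** 2      # heaviest weight, shortest height
    print(bmi_lo, bmi_hi)          # about 24.674 and 25.265; outward rounding
                                   # to three decimals gives [24.673, 25.266]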

Interval operators

A binary operation ⋆ on two intervals, such as addition or multiplication, is defined by

[x1, x2] ⋆ [y1, y2] = { x ⋆ y | x ∈ [x1, x2] and y ∈ [y1, y2] }.

In other words, it is the set of all possible values of x ⋆ y, where x and y are in their corresponding intervals. If ⋆ is monotone in each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is

[x1, x2] ⋆ [y1, y2] = [ min{x1 ⋆ y1, x1 ⋆ y2, x2 ⋆ y1, x2 ⋆ y2}, max{x1 ⋆ y1, x1 ⋆ y2, x2 ⋆ y1, x2 ⋆ y2} ],

provided that x ⋆ y is defined for all x ∈ [x1, x2] and y ∈ [y1, y2].

For practical applications, this can be simplified further (a short code sketch follows the list):

  • Addition: [x1, x2] + [y1, y2] = [x1 + y1, x2 + y2]
  • Subtraction: [x1, x2] − [y1, y2] = [x1 − y2, x2 − y1]
  • Multiplication: [x1, x2] · [y1, y2] = [min{x1y1, x1y2, x2y1, x2y2}, max{x1y1, x1y2, x2y1, x2y2}]
  • Division: [x1, x2] / [y1, y2] = [x1, x2] · (1/[y1, y2]), where
      1/[y1, y2] = [1/y2, 1/y1]                         if 0 ∉ [y1, y2]
      1/[y1, 0]  = [−∞, 1/y1]
      1/[0, y2]  = [1/y2, ∞]
      1/[y1, y2] = [−∞, 1/y1] ∪ [1/y2, ∞] ⊆ [−∞, ∞]     if 0 ∈ (y1, y2)
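A minimal Python sketch of these four rules (not a library API; it ignores the outward rounding of endpoints discussed later and only handles the division case where the denominator does not contain zero):

    class Interval:
        """Closed interval [lo, hi]; endpoints are ordinary floats."""

        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

        def __truediv__(self, other):
            if other.lo <= 0 <= other.hi:
                raise ValueError("denominator interval contains zero")
            return self * Interval(1 / other.hi, 1 / other.lo)

        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    print(Interval(1, 2) + Interval(5, 7))   # [6, 9]
    print(Interval(-1, 1) * Interval(2, 3))  # [-3, 3]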

The last case loses useful information about the exclusion of (1/y1, 1/y2). Thus, it is common to work with [−∞, 1/y1] and [1/y2, ∞] as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ∪_i [a_i, b_i]. The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite.

Multiplication of positive intervals

Interval multiplication often requires only two multiplications. If x1 and y1 are nonnegative,

[x1, x2] · [y1, y2] = [x1·y1, x2·y2],  if x1, y1 ≥ 0.

The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.

With the help of these definitions, it is already possible to calculate the range of simple functions, such as f(a, b, x) = a·x + b. For example, if a = [1, 2], b = [5, 7] and x = [2, 3]:

f(a, b, x) = ([1, 2] · [2, 3]) + [5, 7] = [1·2, 2·3] + [5, 7] = [2, 6] + [5, 7] = [7, 13].
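The same evaluation can be written directly with endpoint arithmetic. A short, purely illustrative Python sketch (the two-multiplication shortcut applies because a and x are nonnegative intervals):

    a_lo, a_hi = 1, 2
    b_lo, b_hi = 5, 7
    x_lo, x_hi = 2, 3

    prod_lo, prod_hi = a_lo * x_lo, a_hi * x_hi   # [1, 2] * [2, 3] = [2, 6]
    print(prod_lo + b_lo, prod_hi + b_hi)         # 7 13, i.e. f(a, b, x) = [7, 13]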

Notation

To shorten the notation of intervals, brackets can be used.

[x] ≡ [x1, x2] can be used to denote an interval. In such a compact notation, the general interval [x] should not be confused with the degenerate single-point interval [x, x]. For the set of all intervals, we can use

[ℝ] := { [x1, x2] | x1 ≤ x2 and x1, x2 ∈ ℝ ∪ {−∞, ∞} }

as an abbreviation. For a vector of intervals ([x]1, …, [x]n) ∈ [ℝ]^n we can use a bold font: [x].

Elementary functions

Figure: Values of a monotonic function

Interval functions beyond the four basic operators may also be defined.

For monotonic functions in one variable, the range of values is simple to compute. If f : ℝ → ℝ is monotonically increasing (resp. decreasing) on the interval [x1, x2], then for all y1, y2 ∈ [x1, x2] such that y1 < y2, f(y1) ≤ f(y2) (resp. f(y2) ≤ f(y1)).

The range corresponding to the interval [y1, y2] ⊆ [x1, x2] can therefore be calculated by applying the function to its endpoints:

f([y1, y2]) = [ min{f(y1), f(y2)}, max{f(y1), f(y2)} ].

From this, the following basic features for interval functions can easily be defined:

  • Exponential function: a^[x1, x2] = [a^x1, a^x2] for a > 1,
  • Logarithm: log_a [x1, x2] = [log_a x1, log_a x2] for positive intervals [x1, x2] and a > 1,
  • Odd powers: [x1, x2]^n = [x1^n, x2^n], for odd n ∈ ℕ.

For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example, x^n for x ∈ [−1, 1] should produce the interval [0, 1] when n = 2, 4, 6, …. But if [−1, 1]^n is computed by repeated interval multiplication of the form [−1, 1] · [−1, 1] ⋯ [−1, 1], then the result is [−1, 1], wider than necessary.

More generally, one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints x1, x2 of an interval, together with the so-called critical points within the interval, i.e. those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at (1/2 + n)π or nπ for n ∈ ℤ, respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is [−1, 1] if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values, namely −1, 0, and 1.
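A small Python sketch of two such elementary interval functions: the (monotone) exponential and the even power x², which needs the case split described above (illustrative only, with intervals as (lo, hi) tuples and no outward rounding):

    import math

    def interval_exp(x):
        lo, hi = x
        return (math.exp(lo), math.exp(hi))      # exp is monotonically increasing

    def interval_sqr(x):
        lo, hi = x
        if lo >= 0:
            return (lo * lo, hi * hi)            # increasing on [0, inf)
        if hi <= 0:
            return (hi * hi, lo * lo)            # decreasing on (-inf, 0]
        return (0.0, max(lo * lo, hi * hi))      # interval straddles the minimum at 0

    print(interval_sqr((-1.0, 1.0)))  # (0.0, 1.0): tighter than [-1, 1] * [-1, 1] = [-1, 1]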

Interval extensions of general functions

In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : ℝ^n → ℝ is a function from a real vector to a real number, then [f] : [ℝ]^n → [ℝ] is called an interval extension of f if

[f]([x]) ⊇ { f(y) | y ∈ [x] }.

This definition of the interval extension does not give a unique result. For example, both [f]([x1, x2]) = [e^x1, e^x2] and [g]([x1, x2]) = [−∞, ∞] are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, [f] should be chosen as it gives the tightest possible result.

Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions, and operators.

The Taylor interval extension (of degree k) of a k + 1 times differentiable function f is defined by

[f]([x]) := f(y) + Σ_{i=1}^{k} (1/i!) D^i f(y) · ([x] − y)^i + [r]([x], [x], y),

for some y ∈ [x], where D^i f(y) is the i-th order differential of f at the point y and [r] is an interval extension of the Taylor remainder

r(x, ξ, y) = (1/(k+1)!) D^{k+1} f(ξ) · (x − y)^{k+1}.

Mean value form

The vector ξ lies between x and y with x, y ∈ [x]; hence ξ ∈ [x] as well, so the remainder can be enclosed using [x]. Usually one chooses y to be the midpoint of the interval and uses the natural interval extension to assess the remainder.

The special case of the Taylor interval extension of degree k = 0 {\displaystyle k=0} is also referred to as the mean value form.
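As a concrete illustration, the following Python sketch evaluates the mean value form for f(x) = x² + x on a narrow interval, using a hand-coded interval extension of the derivative f'(x) = 2x + 1. It is illustrative only: intervals are (lo, hi) tuples, outward rounding is ignored, and the result is a valid enclosure but not necessarily the tightest one.

    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def mul(a, b):
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    def mean_value_form(x):
        m = 0.5 * (x[0] + x[1])                      # midpoint y of [x]
        fm = m * m + m                               # f(y), here taken as a point value
        dfx = add(mul((2.0, 2.0), x), (1.0, 1.0))    # [f']([x]) = 2*[x] + 1
        return add((fm, fm), mul(dfx, (x[0] - m, x[1] - m)))

    print(mean_value_form((0.9, 1.1)))   # about (1.68, 2.32), enclosing the true range [1.71, 2.31]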

Complex interval arithmetic

An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers to complex numbers. Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers. Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages.

The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates.

Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, but with the expense that we have to sacrifice other useful properties of ordinary arithmetic.

Interval methods


The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.

Rounded interval arithmetic

Figure: Outer bounds at different levels of rounding

To work effectively in a real-life implementation, intervals must be compatible with floating-point computing. The earlier operations were based on exact arithmetic, but fast numerical solution methods are generally not available for it. For example, the range of values of the function f(x, y) = x + y for x ∈ [0.1, 0.8] and y ∈ [0.06, 0.08] is [0.16, 0.88]. If the same calculation is done with single-digit precision, the result would normally be [0.2, 0.9]. But [0.2, 0.9] ⊉ [0.16, 0.88], so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of f([0.1, 0.8], [0.06, 0.08]) would be lost. Instead, the outward-rounded solution [0.1, 0.9] is used.

The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down).

The required outward rounding for interval arithmetic can thus be achieved by changing the rounding setting of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval [ε1, ε2] can be added.
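A simple way to approximate outward rounding in pure Python, without touching the processor's rounding mode, is to nudge each computed endpoint by one unit in the last place in the safe direction using math.nextafter (available since Python 3.9). This is a sketch of the idea rather than how production libraries do it:

    import math

    def add_outward(x, y):
        """Interval addition with endpoints pushed outward by one ulp."""
        lo = math.nextafter(x[0] + y[0], -math.inf)   # round the lower bound down
        hi = math.nextafter(x[1] + y[1], math.inf)    # round the upper bound up
        return (lo, hi)

    # A slightly widened, guaranteed enclosure of [0.16, 0.88] from the example above.
    print(add_outward((0.1, 0.8), (0.06, 0.08)))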

Dependency problem

Figure: Approximate estimate of the value range

The so-called "dependency" problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals.

Treating each occurrence of a variable independently

As an illustration, take the function f defined by f(x) = x² + x. The range of this function over the interval [−1, 1] is [−1/4, 2]. Its natural interval extension, however, is calculated as:

[−1, 1]² + [−1, 1] = [0, 1] + [−1, 1] = [−1, 2],

which is slightly larger; we have instead calculated the infimum and supremum of the function h(x, y) = x² + y over x, y ∈ [−1, 1]. There is a better expression of f in which the variable x appears only once, namely by rewriting f(x) = x² + x by completing the square:

f(x) = (x + 1/2)² − 1/4.

So the suitable interval calculation is

([−1, 1] + 1/2)² − 1/4 = [−1/2, 3/2]² − 1/4 = [0, 9/4] − 1/4 = [−1/4, 2]

and gives the correct values.

In general, it can be shown that the exact range of values can be achieved, if each variable appears only once and if f {\displaystyle f} is continuous inside the box. However, not every function can be rewritten this way.
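The effect is easy to reproduce. A short Python sketch comparing the naive and the rewritten form of f on [−1, 1] (illustrative only, intervals as (lo, hi) tuples, no outward rounding):

    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def sqr(a):                     # interval square, handling the even power correctly
        lo, hi = a
        if lo >= 0:
            return (lo * lo, hi * hi)
        if hi <= 0:
            return (hi * hi, lo * lo)
        return (0.0, max(lo * lo, hi * hi))

    x = (-1.0, 1.0)
    print(add(sqr(x), x))                                  # (-1.0, 2.0): x occurs twice
    print(add(sqr(add(x, (0.5, 0.5))), (-0.25, -0.25)))    # (-0.25, 2.0): (x + 1/2)**2 - 1/4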

Wrapping effect

The dependency problem, which causes overestimation of the value range, can go as far as covering a very large range, preventing more meaningful conclusions.

An additional increase in the range stems from enclosing solution sets that do not have the form of an interval vector. The solution set of the linear system

x = p, y = p, with p ∈ [−1, 1]

is precisely the line segment between the points (−1, −1) and (1, 1). Using interval methods, however, results in the unit square [−1, 1] × [−1, 1]. This is known as the wrapping effect.

Linear interval systems

A linear interval system consists of a matrix interval extension [A] ∈ [ℝ]^(n×m) and an interval vector [b] ∈ [ℝ]^n. We seek the smallest cuboid [x] ∈ [ℝ]^m containing all vectors x ∈ ℝ^m for which there is a pair (A, b) with A ∈ [A] and b ∈ [b] satisfying

A · x = b.

For square systems – in other words, for n = m – such an interval vector [x], which covers all possible solutions, can be found simply with the interval Gauss method. This replaces the numerical operations of Gaussian elimination in linear algebra by their interval versions. However, since this method uses the interval entities [A] and [b] repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss method provides only a first rough estimate: although it contains the entire solution set, it also has a large area outside it.

A rough solution [x] can often be improved by an interval version of the Gauss–Seidel method. The motivation for this is that the i-th row of the interval extension of the linear equation

( [a11]  ⋯  [a1n] )   ( x1 )   ( [b1] )
(   ⋮    ⋱    ⋮   ) · (  ⋮ ) = (  ⋮  )
( [an1]  ⋯  [ann] )   ( xn )   ( [bn] )

can be solved for the variable x_i if the division 1/[a_ii] is permitted. It therefore holds simultaneously that

x_i ∈ [x_i]   and   x_i ∈ ( [b_i] − Σ_{k≠i} [a_ik] · [x_k] ) / [a_ii].

So we can now replace [x_i] by

[x_i] ∩ ( [b_i] − Σ_{k≠i} [a_ik] · [x_k] ) / [a_ii],

and so improve the vector [x] element by element. Since the procedure is more efficient for a diagonally dominant matrix, instead of the system [A] · x = [b] one can often try multiplying it by an appropriate rational matrix M, leaving the resulting matrix equation

(M · [A]) · x = M · [b]

to solve. If one chooses, for example, M = A^(−1) for the central (midpoint) matrix A ∈ [A], then M · [A] is an outer extension of the identity matrix.
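The following Python sketch shows repeated interval Gauss–Seidel sweeps for a small, diagonally dominant system. The 2×2 interval matrix and right-hand side are made-up illustrative data, intervals are plain (lo, hi) tuples, and outward rounding is ignored:

    def add(a, b): return (a[0] + b[0], a[1] + b[1])
    def sub(a, b): return (a[0] - b[1], a[1] - b[0])

    def mul(a, b):
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    def div(a, b):
        assert not (b[0] <= 0 <= b[1]), "denominator interval must not contain zero"
        return mul(a, (1 / b[1], 1 / b[0]))

    def intersect(a, b):
        return (max(a[0], b[0]), min(a[1], b[1]))

    def gauss_seidel_sweep(A, b, x):
        """One sweep: solve row i for x_i and intersect with the current box."""
        x = list(x)
        for i in range(len(x)):
            s = b[i]
            for k in range(len(x)):
                if k != i:
                    s = sub(s, mul(A[i][k], x[k]))
            x[i] = intersect(x[i], div(s, A[i][i]))
        return x

    A = [[(3.9, 4.1), (-1.1, -0.9)],
         [(-1.1, -0.9), (3.9, 4.1)]]
    b = [(2.9, 3.1), (2.9, 3.1)]
    x = [(-10.0, 10.0), (-10.0, 10.0)]     # rough starting box
    for _ in range(10):
        x = gauss_seidel_sweep(A, b, x)
    print(x)   # about (0.906, 1.107) per component: an enclosure of all solutions around (1, 1)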

These methods work well only if the widths of the intervals occurring are sufficiently small. For wider intervals it can be useful to reduce an interval-linear system to a finite (albeit large) number of real-valued linear systems. If all the matrices A ∈ [A] are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.

This is only suitable for systems of smaller dimension, since with a fully occupied n × n matrix, 2^(n²) real matrices need to be inverted, with 2^n vectors for the right-hand sides. This approach was developed by Jiri Rohn and is still being developed.

Interval Newton method

Figure: Reduction of the search area in the interval Newton step for "thick" functions

An interval variant of Newton's method for finding the zeros of f in an interval vector [x] can be derived from the mean value extension. For an unknown vector z ∈ [x], expanding around a point y ∈ [x] gives

f(z) ∈ f(y) + [J_f]([x]) · (z − y).

For a zero z, that is f(z) = 0, this enclosure implies that z must satisfy

0 ∈ f(y) + [J_f]([x]) · (z − y).

This is equivalent to z ∈ y − [J_f]([x])^(−1) · f(y). An outer estimate of [J_f]([x])^(−1) · f(y) can be determined using linear methods.

In each step of the interval Newton method, an approximate starting value [x] ∈ [ℝ]^n is replaced by [x] ∩ (y − [J_f]([x])^(−1) · f(y)), and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result encloses all zeros in the initial range. Conversely, it proves that no zeros of f were in the initial range [x] if a Newton step produces the empty set.

The method converges on all zeros in the starting region. Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method.

As an example, consider the function f(x) = x² − 2, the starting range [x] = [−2, 2], and the point y = 0. We then have J_f(x) = 2x and the first Newton step gives

[−2, 2] ∩ ( 0 − (1 / (2 · [−2, 2])) · (0 − 2) ) = [−2, 2] ∩ ( [−∞, −0.5] ∪ [0.5, ∞] ) = [−2, −0.5] ∪ [0.5, 2].

Further Newton steps are applied separately to x ∈ [−2, −0.5] and x ∈ [0.5, 2]. These converge to arbitrarily small intervals around −√2 and +√2.

The interval Newton method can also be used with thick functions such as g(x) = x² − [2, 3], which would in any case have interval results. The result then produces intervals containing [−√3, −√2] ∪ [√2, √3].
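A Python sketch of the single Newton step above for f(x) = x² − 2, including the extended division that splits the result when the derivative enclosure contains zero (illustrative only, intervals as (lo, hi) tuples, no outward rounding):

    def intersect(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo <= hi else None

    def ext_div_scalar(c, d):
        """All values c/t for t in the interval d (t != 0), as a list of intervals."""
        lo, hi = d
        inf = float('inf')
        if not (lo <= 0 <= hi):                       # ordinary case: 0 not in d
            return [tuple(sorted([c / lo, c / hi]))]
        if c == 0:
            return [(0.0, 0.0)]
        pieces = []
        if lo < 0:                                    # contribution of the negative part of d
            pieces.append((c / lo, inf) if c < 0 else (-inf, c / lo))
        if hi > 0:                                    # contribution of the positive part of d
            pieces.append((-inf, c / hi) if c < 0 else (c / hi, inf))
        return pieces

    def newton_step(x, y, fy, dfx):
        """Parts of x consistent with z in y - f(y) / [f']([x])."""
        parts = []
        for qlo, qhi in ext_div_scalar(fy, dfx):
            cand = intersect(x, (y - qhi, y - qlo))   # y - [qlo, qhi] = [y - qhi, y - qlo]
            if cand is not None:
                parts.append(cand)
        return sorted(parts)

    x = (-2.0, 2.0)
    y = 0.0
    fy = y * y - 2.0                # f(y) = -2
    dfx = (2 * x[0], 2 * x[1])      # [f']([x]) = 2 * [-2, 2] = [-4, 4]
    print(newton_step(x, y, fy, dfx))   # [(-2.0, -0.5), (0.5, 2.0)]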

Bisection and covers

Figure: Rough estimate (turquoise) and improved estimates through "mincing" (red)

The various interval methods deliver conservative results as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.

If an interval vector [x] is covered by smaller boxes [x_1], …, [x_k], so that

[x] = ∪_{i=1}^{k} [x_i],

then the following is valid for the range of values:

f([x]) = ∪_{i=1}^{k} f([x_i]).

So, for the interval extensions described above, the following holds:

[f]([x]) ⊇ ∪_{i=1}^{k} [f]([x_i]).

Since [f]([x]) is often a genuine superset of the right-hand side, this usually leads to an improved estimate.

Such a cover can be generated by the bisection method, e.g. by splitting thick elements [x_i1, x_i2] of the interval vector [x] = ([x_11, x_12], …, [x_n1, x_n2]) at the center into the two intervals [x_i1, (x_i1 + x_i2)/2] and [(x_i1 + x_i2)/2, x_i2]. If the result is still not suitable, then further gradual subdivision is possible. A cover of 2^r intervals results from r divisions per vector element, substantially increasing the computation costs.

With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
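A short Python sketch of mincing for the dependency example f(x) = x² + x on [−1, 1]: splitting the interval into equal pieces and uniting the natural-extension results tightens the enclosure considerably (illustrative only, no outward rounding):

    def sqr(a):
        lo, hi = a
        if lo >= 0:
            return (lo * lo, hi * hi)
        if hi <= 0:
            return (hi * hi, lo * lo)
        return (0.0, max(lo * lo, hi * hi))

    def f_ext(a):                       # natural interval extension of x**2 + x
        s = sqr(a)
        return (s[0] + a[0], s[1] + a[1])

    def minced_range(lo, hi, n):
        width = (hi - lo) / n
        pieces = [f_ext((lo + i * width, lo + (i + 1) * width)) for i in range(n)]
        return (min(p[0] for p in pieces), max(p[1] for p in pieces))

    print(f_ext((-1.0, 1.0)))            # (-1.0, 2.0): one wide box
    print(minced_range(-1.0, 1.0, 100))  # about (-0.27, 2.0), close to the true range [-0.25, 2]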

Application

Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation, or stability analysis) to treat estimates with no exact numerical value.

Rounding error analysis

Interval arithmetic is used with error analysis to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The width of the interval gives the current extent of the rounding error directly:

Error = |b − a| for a given interval [a, b].

Interval analysis complements rather than replaces traditional methods for error reduction, such as pivoting.

Tolerance analysis

Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely.

If the behavior of such a system affected by tolerances satisfies, for example, f(x, p) = 0 for p ∈ [p] and unknown x, then the set of possible solutions

{ x | ∃ p ∈ [p] : f(x, p) = 0 }

can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.

Fuzzy interval arithmetic

Figure: Approximation of the normal distribution by a sequence of intervals

Interval arithmetic can also be used with membership functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements x ∈ [x] and x ∉ [x], intermediate values are also possible, to which real numbers μ ∈ [0, 1] are assigned; μ = 1 corresponds to definite membership, while μ = 0 corresponds to non-membership. A distribution function assigns the uncertainty, which can be understood as a further interval.

For fuzzy arithmetic, only a finite number of discrete membership stages μ_i ∈ [0, 1] are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals

[x^(1)] ⊃ [x^(2)] ⊃ ⋯ ⊃ [x^(k)].

The interval [x^(i)] corresponds exactly to the fluctuation range for the stage μ_i.

The appropriate distribution for a function f(x_1, …, x_n) concerning indistinct values x_1, …, x_n and the corresponding sequences

[x_1^(1)] ⊃ ⋯ ⊃ [x_1^(k)], …, [x_n^(1)] ⊃ ⋯ ⊃ [x_n^(k)]

can be approximated by the sequence

[y^(1)] ⊃ ⋯ ⊃ [y^(k)],

where

[y^(i)] = f([x_1^(i)], …, [x_n^(i)])

and can be calculated by interval methods. The value [y^(1)] corresponds to the result of an interval calculation.
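A Python sketch of this level-by-level propagation for a made-up fuzzy sum y = x1 + x2, with each fuzzy quantity represented by nested intervals for the membership stages 0, 0.5, and 1 (illustrative data only):

    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    # nested intervals, from the outermost stage (mu = 0) to the innermost (mu = 1)
    x1_levels = [(1.0, 3.0), (1.5, 2.5), (2.0, 2.0)]
    x2_levels = [(4.0, 8.0), (5.0, 7.0), (6.0, 6.0)]

    y_levels = [add(a, b) for a, b in zip(x1_levels, x2_levels)]
    print(y_levels)   # [(5.0, 11.0), (6.5, 9.5), (8.0, 8.0)]: again a nested sequence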

Computer-assisted proof

Warwick Tucker used interval arithmetic to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor. Thomas Hales used interval arithmetic in his proof of the Kepler conjecture.

History

Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.

Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer; intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958).

The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.

Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals, though Moore found the first non-trivial applications.

In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch and Götz Alefeld at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. Classical methods here often have the problem of finding only a local optimum without being able to tell whether better values exist; Helmut Ratschek and Jon George Rokne extended branch-and-bound methods, which until then had been applied only to integer values, to continuous values by using intervals.

In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations.

The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic.

In recent years, work has concentrated in particular on the estimation of preimages of parameterized functions and on robust control theory by the COPRIN working group of INRIA in Sophia Antipolis, France.

Implementations

There are many software packages that permit the development of numerical applications using interval arithmetic. These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.

Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran, and Pascal. The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. In 1976 there followed Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC supported many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to conform to the improved C++ standard.

Another C++-class library was created in 1993 at the Hamburg University of Technology called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user-friendly. It emphasized the efficient use of hardware, portability, and independence of a particular presentation of intervals.

The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language.

The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.

GAOL is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.

The Moore library is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the concepts feature of C++.

The Julia programming language has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real- and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package.

In addition, computer algebra systems such as Euler Mathematical Toolbox, FriCAS, Maple, Mathematica, Maxima, and MuPAD can handle intervals. The Matlab extension INTLAB builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface.

A library for the functional language OCaml was written in assembly language and C.

IEEE 1788 standard

A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. Two reference implementations are freely available. They have been developed by members of the standard's working group: the libieeep1788 library for C++ and the interval package for GNU Octave.

A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed up the production of implementations.

Conferences and workshops

Several international conferences and workshops on interval methods take place every year. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), and REC (International Workshop on Reliable Engineering Computing).


References

  1. Dreyer, Alexander (2003). Interval Analysis of Analog Circuits with Component Tolerances. Aachen, Germany: Shaker Verlag. p. 15. ISBN 3-8322-4555-3.
  2. Complex interval arithmetic and its applications, Miodrag S. Petković, Ljiljana D. Petković, Wiley-VCH, 1998, ISBN 978-3-527-40134-5
  3. Hend Dawood (2011). Theories of Interval Arithmetic: Mathematical Foundations and Applications. Saarbrücken: LAP LAMBERT Academic Publishing. ISBN 978-3-8465-0154-2.
  4. "Jiri Rohn, List of publications". Archived from the original on 2008-11-23. Retrieved 2008-05-26.
  5. Walster, G. William; Hansen, Eldon Robert (2004). Global Optimization using Interval Analysis (2nd ed.). New York, USA: Marcel Dekker. ISBN 0-8247-4059-9.
  6. Jaulin, Luc; Kieffer, Michel; Didrit, Olivier; Walter, Eric (2001). Applied Interval Analysis. Berlin: Springer. ISBN 1-85233-219-0.
  7. Application of Fuzzy Arithmetic to Quantifying the Effects of Uncertain Model Parameters, Michael Hanss, University of Stuttgart
  8. Tucker, Warwick (1999). The Lorenz attractor exists. Comptes Rendus de l'Académie des Sciences-Series I-Mathematics, 328(12), 1197-1202.
  9. Young, Rosalind Cicely (1931). The algebra of many-valued quantities. Mathematische Annalen, 104(1), 260-290. (NB. A doctoral candidate at the University of Cambridge.)
  10. Dwyer, Paul Sumner (1951). Linear computations. Oxford, England: Wiley. (University of Michigan)
  11. Sunaga, Teruo (1958). "Theory of interval algebra and its application to numerical analysis". RAAG Memoirs (2): 29–46.
  12. Moore, Ramon Edgar (1966). Interval Analysis. Englewood Cliff, New Jersey, USA: Prentice-Hall. ISBN 0-13-476853-1.
  13. Cloud, Michael J.; Moore, Ramon Edgar; Kearfott, R. Baker (2009). Introduction to Interval Analysis. Philadelphia: Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-669-6.
  14. Hansen, Eldon Robert (2001-08-13). "Publications Related to Early Interval Work of R. E. Moore". University of Louisiana at Lafayette Press. Retrieved 2015-06-29.
  15. Precursory papers on interval analysis by Mieczyslaw Warmus Archived 2008-04-18 at the Wayback Machine
  16. Kulisch, Ulrich W. (1989). Wissenschaftliches Rechnen mit Ergebnisverifikation. Eine Einführung (in German). Wiesbaden: Vieweg-Verlag. ISBN 3-528-08943-1.
  17. Kulisch, Ulrich W. (1969). "Grundzüge der Intervallrechnung". In Laugwitz, Detlef (ed.). Jahrbuch Überblicke Mathematik (in German). Vol. 2. Mannheim, Germany: Bibliographisches Institut. pp. 51–98.
  18. Alefeld, Götz; Herzberger, Jürgen (1974). Einführung in die Intervallrechnung. Reihe Informatik (in German). Vol. 12. Mannheim, Wien, Zürich: B.I.-Wissenschaftsverlag. ISBN 3-411-01466-0.
  19. Bounds for ordinary differential equations of Rudolf Lohner Archived 11 May 2018 at the Wayback Machine (in German)
  20. Bibliography of R. Baker Kearfott, University of Louisiana at Lafayette
  21. Introductory Film (mpeg) of the COPRIN teams of INRIA, Sophia Antipolis
  22. Software for Interval Computations Archived 2006-03-02 at the Wayback Machine collected by Vladik Kreinovich, University of Texas at El Paso.
  23. History of XSC-Languages Archived 2007-09-29 at the Wayback Machine
  24. A Proposal to add Interval Arithmetic to the C++ Standard Library
  25. Gaol is Not Just Another Interval Arithmetic Library
  26. Moore: Interval Arithmetic in Modern C++
  27. The Julia programming language
  28. ValidatedNumerics.jl
  29. Interval Arithmetic for Maxima: A Brief Summary by Richard J. Fateman.
  30. "Intlab INTerval LABoratory". Archived from the original on 2020-01-30. Retrieved 2012-11-07.
  31. b4m
  32. Alliot, Jean-Marc; Gotteland, Jean-Baptiste; Vanaret, Charlie; Durand, Nicolas; Gianazza, David (2012). Implementing an interval computation library for OCaml on x86/amd64 architectures. 17th ACM SIGPLAN International Conference on Functional Programming.
  33. IEEE Standard for Interval Arithmetic
  34. Nathalie Revol (2015). The (near-)future IEEE 1788 standard for interval arithmetic, slides // SWIM 2015: 8th Small Workshop in Interval Methods. Prague, 9–11 June 2015
  35. C++ implementation of the preliminary IEEE P1788 standard for interval arithmetic
  36. GNU Octave interval package
  37. "IEEE Std 1788.1-2017 - IEEE Standard for Interval Arithmetic (Simplified)". IEEE Standard. IEEE Standards Association. 2017. Archived from the original on 2018-02-07. Retrieved 2018-02-06.

