Alpha max plus beta min algorithm

High-speed approximation of the square root of the sum of two squares

Not to be confused with Minimax or Alpha–beta pruning.
Figure: the locus of points that give the same value in the algorithm, for different values of α and β.

The alpha max plus beta min algorithm is a high-speed approximation of the square root of the sum of two squares. The square root of the sum of two squares, also known as Pythagorean addition, is a useful function, because it finds the hypotenuse of a right triangle given the two side lengths, the norm of a 2-D vector, or the magnitude $|z| = \sqrt{a^2 + b^2}$ of a complex number z = a + bi given the real and imaginary parts.

The algorithm avoids squaring and square-root operations, instead using simple operations such as comparison, multiplication, and addition. Some choices of the α and β parameters allow the multiplication to be reduced to a simple shift of binary digits, which is particularly well suited to implementation in high-speed digital circuitry.

The approximation is expressed as

$|z| = \alpha\,\mathbf{Max} + \beta\,\mathbf{Min},$

where $\mathbf{Max}$ is the maximum of the absolute values of a and b, and $\mathbf{Min}$ is the minimum of the absolute values of a and b.
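As a concrete illustration, here is a minimal sketch in C; the function name and the passing of α and β as parameters are illustrative choices, not part of any standard API:

```c
#include <math.h> /* for fabs */

/* Approximate sqrt(a*a + b*b) as alpha*Max + beta*Min,
   where Max and Min are the larger and smaller of |a| and |b|. */
double alpha_max_beta_min(double a, double b, double alpha, double beta)
{
    double x = fabs(a);
    double y = fabs(b);
    double max = (x > y) ? x : y;
    double min = (x > y) ? y : x;
    return alpha * max + beta * min;
}
```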

For the closest approximation, the optimum values for $\alpha$ and $\beta$ are $\alpha_0 = \frac{2\cos\frac{\pi}{8}}{1 + \cos\frac{\pi}{8}} = 0.960433870103...$ and $\beta_0 = \frac{2\sin\frac{\pi}{8}}{1 + \cos\frac{\pi}{8}} = 0.397824734759...$, giving a maximum error of 3.96%.
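For example, using the sketch above with the optimum coefficients on a 3-4-5 right triangle:

```c
/* Hypotenuse of a 3-4-5 triangle with the optimum coefficients. */
double mag = alpha_max_beta_min(3.0, 4.0, 0.960433870103, 0.397824734759);
/* mag is about 5.035, versus the exact value 5.0 (error about +0.7%). */
```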

α       β       Largest error (%)   Mean error (%)
1/1     1/2     11.80               8.68
1/1     1/4     11.61               3.20
1/1     3/8     6.80                4.25
7/8     7/16    12.50               4.91
15/16   15/32   6.25                3.08
α₀      β₀      3.96                2.41
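For instance, the row α = 15/16, β = 15/32 needs no multiplier at all, since 15/16 · Max = Max − Max/16 and 15/32 · Min = Min/2 − Min/32. A sketch in C, assuming 32-bit integer inputs whose magnitudes stay in range:

```c
#include <stdlib.h> /* for abs */

/* Shift-only estimate with alpha = 15/16 and beta = 15/32
   (largest error 6.25% per the table above).
   Note: abs(INT_MIN) is undefined, so inputs are assumed in range. */
unsigned int magnitude_estimate(int a, int b)
{
    unsigned int x = (unsigned int)abs(a);
    unsigned int y = (unsigned int)abs(b);
    unsigned int max = (x > y) ? x : y;
    unsigned int min = (x > y) ? y : x;
    return (max - (max >> 4)) + ((min >> 1) - (min >> 5));
}
```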

Improvements

When $\alpha < 1$, $|z|$ becomes smaller than $\mathbf{Max}$ (which is geometrically impossible) near the axes, where $\mathbf{Min}$ is near 0. This can be remedied by replacing the result with $\mathbf{Max}$ whenever that is greater, essentially splitting the line into two different segments:

$|z| = \max(\mathbf{Max},\ \alpha\,\mathbf{Max} + \beta\,\mathbf{Min}).$

Depending on the hardware, this improvement can be almost free.
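In software the clamp is a single extra comparison; an illustrative sketch, reusing the structure of the earlier function:

```c
#include <math.h> /* for fabs */

/* Clamped variant: never return an estimate smaller than Max. */
double alpha_max_beta_min_clamped(double a, double b,
                                  double alpha, double beta)
{
    double x = fabs(a);
    double y = fabs(b);
    double max = (x > y) ? x : y;
    double min = (x > y) ? y : x;
    double est = alpha * max + beta * min;
    return (est > max) ? est : max;
}
```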

Using this improvement changes which parameter values are optimal, because the linear approximation no longer needs to fit closely over the entire interval. A lower $\alpha$ and higher $\beta$ can therefore increase precision further.

Increasing precision: when splitting the line in two like this, one can improve precision even more by replacing the first segment with a better estimate than $\mathbf{Max}$, adjusting $\alpha$ and $\beta$ accordingly:

$|z| = \max\big(|z_0|,\ |z_1|\big),$
$|z_0| = \alpha_0\,\mathbf{Max} + \beta_0\,\mathbf{Min},$
$|z_1| = \alpha_1\,\mathbf{Max} + \beta_1\,\mathbf{Min}.$
α₀        β₀      α₁                  β₁                  Largest error (%)
1         0       7/8                 17/32               −2.65
1         0       29/32               61/128              +2.4
1         0       0.898204193266868   0.485968200201465   ±2.12
1         1/8     7/8                 33/64               −1.7
1         5/32    27/32               71/128              1.22
127/128   3/16    27/32               71/128              −1.13

Beware, however, that a non-zero $\beta_0$ would require at least one extra addition and some bit shifts (or a multiplication), probably nearly doubling the cost and, depending on the hardware, possibly defeating the purpose of using an approximation in the first place.
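As an illustration, a sketch of the two-segment estimate using the last table row's coefficients (α₀ = 127/128, β₀ = 3/16, α₁ = 27/32, β₁ = 71/128, largest error about −1.13%); the function name is hypothetical:

```c
#include <math.h> /* for fabs */

/* Two-segment estimate: the larger of two linear approximations. */
double magnitude_two_segment(double a, double b)
{
    double x = fabs(a);
    double y = fabs(b);
    double max = (x > y) ? x : y;
    double min = (x > y) ? y : x;
    double z0 = (127.0 / 128.0) * max + (3.0 / 16.0) * min;
    double z1 = (27.0 / 32.0) * max + (71.0 / 128.0) * min;
    return (z0 > z1) ? z0 : z1;
}
```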

See also

  • Hypot, a precise function or algorithm that is also safe against overflow and underflow.
