Reduction operator


In computer science, the reduction operator is a type of operator that is commonly used in parallel programming to reduce the elements of an array into a single result. Reduction operators are associative and often (but not necessarily) commutative. The reduction of sets of elements is an integral part of programming models such as MapReduce, where a reduction operator is applied (mapped) to all elements before they are reduced. Other parallel algorithms use reduction operators as primary operations to solve more complex problems. Many reduction operators can be used for broadcasting to distribute data to all processors.

Theory

A reduction operator can help break down a task into various partial tasks by calculating partial results which can be used to obtain a final result. It allows certain serial operations to be performed in parallel and reduces the number of steps required for those operations. A reduction operator stores the result of each partial task in a private copy of the variable. These private copies are then merged into a shared copy at the end.
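
As a minimal illustration of this idea (a sketch using only the Python standard library; the function names are illustrative, not taken from any particular framework), the following code splits an array into chunks, lets a process pool reduce each chunk into a private partial result, and merges the partial results at the end.

from concurrent.futures import ProcessPoolExecutor
from functools import reduce
from operator import add

def partial_reduce(chunk):
    # each worker reduces its own chunk into a private partial result
    return reduce(add, chunk, 0)

def parallel_sum(data, workers=4):
    # split the input into one chunk per worker
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_reduce, chunks))
    # merge the private partial results into the final shared result
    return reduce(add, partials, 0)

if __name__ == "__main__":
    print(parallel_sum([2, 3, 5, 1, 7, 6, 8, 4]))  # 36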

An operator is a reduction operator if:

  • It can reduce an array to a single scalar value.
  • The final result should be obtainable from the results of the partial tasks that were created.

These two requirements are satisfied for commutative and associative operators that are applied to all array elements.

Some operators which satisfy these requirements are addition, multiplication, and some logical operators (and, or, etc.).

A reduction operator $\oplus$ can be applied in constant time on an input set

$V = \left\{ v_0 = \begin{pmatrix} e_0^0 \\ \vdots \\ e_0^{m-1} \end{pmatrix}, v_1 = \begin{pmatrix} e_1^0 \\ \vdots \\ e_1^{m-1} \end{pmatrix}, \dots, v_{p-1} = \begin{pmatrix} e_{p-1}^0 \\ \vdots \\ e_{p-1}^{m-1} \end{pmatrix} \right\}$

of $p$ vectors with $m$ elements each. The result $r$ of the operation is the element-wise combination

$r = \begin{pmatrix} e_0^0 \oplus e_1^0 \oplus \dots \oplus e_{p-1}^0 \\ \vdots \\ e_0^{m-1} \oplus e_1^{m-1} \oplus \dots \oplus e_{p-1}^{m-1} \end{pmatrix} = \begin{pmatrix} \bigoplus_{i=0}^{p-1} e_i^0 \\ \vdots \\ \bigoplus_{i=0}^{p-1} e_i^{m-1} \end{pmatrix}$

and has to be stored at a specified root processor at the end of the execution. If the result $r$ has to be available at every processor after the computation has finished, the operation is often called Allreduce. An optimal sequential linear-time algorithm for reduction applies the operator successively from front to back, always replacing two vectors with the result of the operation applied to their corresponding elements, thus creating an instance that has one vector less. It needs $(p-1)\cdot m$ steps until only $r$ is left. Sequential algorithms cannot perform better than linear time, but parallel algorithms leave some room for optimization.
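
The sequential baseline can be sketched in Python as follows (a minimal sketch; the function name is illustrative). It folds the vectors front to back, combining two vectors element-wise per step, so $(p-1)\cdot m$ applications of the operator are performed.

def sequential_reduce(vectors, op):
    # vectors: list of p lists with m elements each; op: associative binary operator
    result = list(vectors[0])
    for v in vectors[1:]:              # p - 1 merge steps
        for j in range(len(result)):   # m element-wise applications per step
            result[j] = op(result[j], v[j])
    return result

# example: p = 3 vectors of m = 2 elements, reduced with addition
print(sequential_reduce([[1, 2], [3, 4], [5, 6]], lambda a, b: a + b))  # [9, 12]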

Example

Suppose we have an array $[2, 3, 5, 1, 7, 6, 8, 4]$. The sum of this array can be computed serially by sequentially reducing the array into a single sum using the '+' operator. Starting the summation from the beginning of the array yields

$((((((2+3)+5)+1)+7)+6)+8)+4 = 36.$

Since '+' is both commutative and associative, it is a reduction operator. Therefore this reduction can be performed in parallel using several cores, where each core computes the sum of a subset of the array and the reduction operator merges the results. Using a binary tree reduction, 4 cores can compute $(2+3)$, $(5+1)$, $(7+6)$, and $(8+4)$. Then two cores can compute $(5+6)$ and $(13+12)$, and lastly a single core computes $(11+25)=36$. So a total of 4 cores can be used to compute the sum in $\log_2 8 = 3$ steps instead of the $7$ steps required for the serial version. This parallel binary tree technique computes $((2+3)+(5+1)) + ((7+6)+(8+4))$. Of course the result is the same, but only because of the associativity of the reduction operator. The commutativity of the reduction operator would be important if there were a master core distributing work to several processors, since the results could then arrive back at the master processor in any order. The property of commutativity guarantees that the result will be the same.
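
The pairwise tree reduction from this example can be sketched in Python (a serial simulation of the parallel schedule; in a real setting each pair in a round would be combined by a different core):

def tree_reduce(values, op):
    # repeatedly combine neighbouring pairs, halving the number of values per round
    while len(values) > 1:
        paired = [op(values[i], values[i + 1]) for i in range(0, len(values) - 1, 2)]
        if len(values) % 2 == 1:          # an odd leftover element is carried to the next round
            paired.append(values[-1])
        values = paired
    return values[0]

print(tree_reduce([2, 3, 5, 1, 7, 6, 8, 4], lambda a, b: a + b))  # 36, in 3 rounds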

IEEE 754-2019 defines 4 kinds of sum reductions and 3 kinds of scaled-product reductions. Because the operations are reduction operators, the standard specifies that "implementations may associate in any order or evaluate in any wider format."

Nonexample

Matrix multiplication is not a reduction operator since the operation is not commutative. If processes were allowed to return their matrix multiplication results to the master process in any order, the final result computed by the master would likely be incorrect. However, note that matrix multiplication is associative, and therefore the result would be correct as long as the proper ordering were enforced, as in the binary tree reduction technique.
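
A two-by-two example makes the failure of commutativity concrete (the helper below is only for illustration):

def matmul2(a, b):
    # 2x2 matrix product: rows of a times columns of b
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul2(A, B))  # [[2, 1], [4, 3]]
print(matmul2(B, A))  # [[3, 4], [1, 2]] -- a different result, so the order matters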

Algorithms

Binomial tree algorithms

Regarding parallel algorithms, there are two main models of parallel computation: the parallel random access machine (PRAM), an extension of the RAM with shared memory between processing units, and the bulk synchronous parallel (BSP) computer, which takes communication and synchronization into account. The two models have different implications for the time complexity, therefore two algorithms are shown.

PRAM-algorithm

This algorithm represents a widely used method for handling inputs where $p$ is a power of two. The reverse procedure is often used for broadcasting elements.

Visualization of the algorithm with p = 8, m = 1, and addition as the reduction operator.

for $k \gets 0$ to $\lceil \log_2 p \rceil - 1$ do
    for $i \gets 0$ to $p-1$ do in parallel
        if $p_i$ is active then
            if bit $k$ of $i$ is set then
                set $p_i$ to inactive
            else if $i + 2^k < p$
                $x_i \gets x_i \oplus^\star x_{i+2^k}$

The binary operator for vectors is defined element-wise such that

$\begin{pmatrix} e_i^0 \\ \vdots \\ e_i^{m-1} \end{pmatrix} \oplus^\star \begin{pmatrix} e_j^0 \\ \vdots \\ e_j^{m-1} \end{pmatrix} = \begin{pmatrix} e_i^0 \oplus e_j^0 \\ \vdots \\ e_i^{m-1} \oplus e_j^{m-1} \end{pmatrix}.$

The algorithm further assumes that in the beginning $x_i = v_i$ for all $i$, that $p$ is a power of two, and that the processing units are $p_0, p_1, \dots, p_{p-1}$. In every iteration, half of the processing units become inactive and do not contribute to further computations. The figure shows a visualization of the algorithm using addition as the operator. Vertical lines represent the processing units where the computation of the elements on that line takes place. The eight input elements are located at the bottom and every animation step corresponds to one parallel step in the execution of the algorithm. An active processor $p_i$ evaluates the given operator on the element $x_i$ it is currently holding and on $x_j$, where $j$ is the minimal index fulfilling $j > i$, so that $p_j$ becomes inactive in the current step. $x_i$ and $x_j$ are not necessarily elements of the input set $X$, as the fields are overwritten and reused for previously evaluated expressions. To coordinate the roles of the processing units in each step without causing additional communication between them, the algorithm exploits the fact that the processing units are indexed with numbers from $0$ to $p-1$. Each processor looks at its $k$-th least significant bit: if it is set, the processor becomes inactive; otherwise it applies the operator to its own element and the element of the processor whose index differs only in that the $k$-th bit is set. The underlying communication pattern of the algorithm is a binomial tree, hence the name of the algorithm.
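
A serial Python simulation of the binomial-tree schedule above (a sketch assuming, as in the pseudocode, that $p$ is a power of two and that the vectors are stored in a list x indexed by processor):

import math

def binomial_tree_reduce(x, op):
    # x: list of p vectors (p a power of two); op: associative binary operator on elements
    p, m = len(x), len(x[0])
    active = [True] * p
    for k in range(int(math.log2(p))):
        for i in range(p):
            if active[i]:
                if (i >> k) & 1:                 # bit k of i is set: hand off and stop
                    active[i] = False
                elif i + 2 ** k < p:             # combine with the partner that just stopped
                    x[i] = [op(x[i][j], x[i + 2 ** k][j]) for j in range(m)]
    return x[0]                                  # the result ends up at processor 0

print(binomial_tree_reduce([[2], [3], [5], [1], [7], [6], [8], [4]], lambda a, b: a + b))  # [36]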

Only $p_0$ holds the result in the end, therefore it is the root processor. For an Allreduce operation the result has to be distributed, which can be done by appending a broadcast from $p_0$. Furthermore, the number $p$ of processors is restricted to be a power of two. This restriction can be lifted by padding the number of processors to the next power of two. There are also algorithms that are more tailored for this use case.

Runtime analysis

The main loop is executed $\lceil \log_2 p \rceil$ times, and the time needed for the part done in parallel is in $\mathcal{O}(m)$, as a processing unit either combines two vectors or becomes inactive. Thus the parallel time for the PRAM is $T(p,m) = \mathcal{O}(\log(p) \cdot m)$. The strategy for handling read and write conflicts can be chosen as restrictively as exclusive read and exclusive write (EREW). The speedup of the algorithm is $S(p,m) \in \mathcal{O}\left(\frac{T_{\text{seq}}}{T(p,m)}\right) = \mathcal{O}\left(\frac{p}{\log(p)}\right)$ and therefore the efficiency is $E(p,m) \in \mathcal{O}\left(\frac{S(p,m)}{p}\right) = \mathcal{O}\left(\frac{1}{\log(p)}\right)$. The efficiency suffers because half of the active processing units become inactive after each step, so only $\frac{p}{2^i}$ units are active in step $i$.

Distributed memory algorithm

In contrast to the PRAM algorithm, in the distributed memory model memory is not shared between processing units, so data has to be exchanged explicitly via messages, as can be seen in the following algorithm.

for $k \gets 0$ to $\lceil \log_2 p \rceil - 1$ do
    for $i \gets 0$ to $p-1$ do in parallel
        if $p_i$ is active then
            if bit $k$ of $i$ is set then
                send $x_i$ to $p_{i-2^k}$
                set $p_i$ to inactive
            else if $i + 2^k < p$
                receive $x_{i+2^k}$
                $x_i \gets x_i \oplus^\star x_{i+2^k}$

The only difference between the distributed algorithm and the PRAM version is the inclusion of explicit communication primitives; the operating principle stays the same.
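
The same schedule with explicit message passing can be mimicked in Python using threads and queues as stand-in communication channels (a simulation only; a real implementation would use MPI sends and receives, and all names below are illustrative):

import math, queue, threading

def distributed_reduce(vectors, op):
    # one queue per (receiver, round) pair plays the role of a message channel
    p, m = len(vectors), len(vectors[0])
    rounds = int(math.log2(p))                   # assumes p is a power of two
    inbox = [[queue.Queue(maxsize=1) for _ in range(rounds)] for _ in range(p)]
    result = {}

    def unit(i):
        x = list(vectors[i])
        for k in range(rounds):
            if (i >> k) & 1:
                inbox[i - 2 ** k][k].put(x)      # send own vector to the partner
                return                           # and become inactive
            if i + 2 ** k < p:
                other = inbox[i][k].get()        # receive the partner's vector
                x = [op(x[j], other[j]) for j in range(m)]
        result[i] = x                            # only unit 0 reaches this point

    threads = [threading.Thread(target=unit, args=(i,)) for i in range(p)]
    for t in threads: t.start()
    for t in threads: t.join()
    return result[0]

print(distributed_reduce([[2], [3], [5], [1], [7], [6], [8], [4]], lambda a, b: a + b))  # [36]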

Runtime analysis

The communication between units leads to some overhead. A simple analysis of the algorithm uses the BSP model and incorporates the time $T_{\text{start}}$ needed to initiate communication and the time $T_{\text{byte}}$ needed to send a byte. The resulting runtime is $\Theta((T_{\text{start}} + n \cdot T_{\text{byte}}) \cdot \log(p))$, since in each iteration the $m$ elements of a vector, with a total size of $n$ bytes, are sent.
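
The cost formula can be written as a small helper (a sketch; parameter names follow the text and the constants in the example call are arbitrary):

import math

def binomial_reduce_cost(t_start, t_byte, n, p):
    # one message of n bytes per round, ceil(log2 p) rounds
    return (t_start + n * t_byte) * math.ceil(math.log2(p))

print(binomial_reduce_cost(t_start=1e-6, t_byte=1e-9, n=1024, p=64))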

Pipeline-algorithm

Visualization of the pipeline-algorithm with p = 5, m = 4 and addition as the reduction operator.

For distributed memory models, it can make sense to use pipelined communication. This is especially the case when $T_{\text{start}}$ is small in comparison to $T_{\text{byte}}$. Usually, linear pipelines split data or tasks into smaller pieces and process them in stages. In contrast to the binomial tree algorithms, the pipelined algorithm uses the fact that the vectors are separable, so the operator can be evaluated on single elements:

for $k \gets 0$ to $p+m-3$ do
    for $i \gets 0$ to $p-1$ do in parallel
        if $i \leq k < i+m \land i \neq p-1$
            send $x_i^{k-i}$ to $p_{i+1}$
        if $i-1 \leq k < i-1+m \land i \neq 0$
            receive $x_{i-1}^{k-i+1}$ from $p_{i-1}$
            $x_i^{k-i+1} \gets x_i^{k-i+1} \oplus x_{i-1}^{k-i+1}$

It is important to note that the send and receive operations have to be executed concurrently for the algorithm to work. The result vector is stored at $p_{p-1}$ at the end. The associated animation shows an execution of the algorithm on vectors of size four with five processing units. Two steps of the animation visualize one parallel execution step.
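
A serial Python simulation of the pipeline schedule (a sketch that executes each parallel step sequentially; the index arithmetic follows the pseudocode above):

def pipeline_reduce(vectors, op):
    # vectors: list of p vectors of length m; the result accumulates at the last unit
    x = [list(v) for v in vectors]
    p, m = len(x), len(x[0])
    for k in range(p + m - 2):                       # steps 0 .. p+m-3
        msgs = {}
        for i in range(p - 1):                       # senders (all units but the last)
            if i <= k < i + m:
                msgs[i + 1] = (k - i, x[i][k - i])   # element k-i travels to unit i+1
        for r, (j, val) in msgs.items():             # receivers combine in the same step
            x[r][j] = op(x[r][j], val)
    return x[p - 1]

# p = 5 units, vectors of length m = 2, reduced element-wise with addition
print(pipeline_reduce([[2, 3], [5, 1], [7, 6], [8, 4], [1, 1]], lambda a, b: a + b))  # [23, 15]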

Runtime analysis

The number of steps in the parallel execution is $p+m-2$: it takes $p-1$ steps until the last processing unit receives its first element and an additional $m-1$ steps until all elements are received. Therefore, the runtime in the BSP model is $T(n,p,m) = \left(T_{\text{start}} + \frac{n}{m} \cdot T_{\text{byte}}\right)(p+m-2)$, assuming that $n$ is the total byte size of a vector.

Although $m$ has a fixed value, it is possible to logically group elements of a vector together and reduce $m$. For example, a problem instance with vectors of size four can be handled by splitting the vectors into the first two and last two elements, which are always transmitted and computed together. In this case, double the volume is sent each step, but the number of steps is roughly halved. The parameter $m$ is halved, while the total byte size $n$ stays the same. The runtime $T(p)$ for this approach depends on the value of $m$, which can be optimized if $T_{\text{start}}$ and $T_{\text{byte}}$ are known. It is optimal for $m = \sqrt{\frac{n \cdot (p-2) \cdot T_{\text{byte}}}{T_{\text{start}}}}$, assuming that this results in a smaller $m$ that divides the original one.
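
A small helper makes the trade-off concrete (a sketch; parameter names follow the text and the example numbers are arbitrary):

import math

def pipeline_cost(t_start, t_byte, n, p, m):
    # cost of the pipelined reduction with m logical chunks of n/m bytes each
    return (t_start + (n / m) * t_byte) * (p + m - 2)

def optimal_chunks(t_start, t_byte, n, p):
    # the m that minimises the cost above, from the formula in the text
    return math.sqrt(n * (p - 2) * t_byte / t_start)

n, p, t_start, t_byte = 1 << 20, 16, 1e-5, 1e-9
m_opt = optimal_chunks(t_start, t_byte, n, p)
print(m_opt, pipeline_cost(t_start, t_byte, n, p, round(m_opt)))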

Applications

Reduction is one of the main collective operations implemented in the Message Passing Interface, where the performance of the algorithm used is important and is evaluated constantly for different use cases. Operators can be passed as parameters to MPI_Reduce and MPI_Allreduce; the difference is that the result is available at one (root) processing unit or at all of them.
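
With the mpi4py bindings (an assumption about the environment; the collective calls themselves are the standard MPI ones), a reduction and an all-reduction look as follows; run with, for example, mpiexec -n 4 python reduce_example.py.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

value = rank + 1                                 # each process contributes one value

total = comm.reduce(value, op=MPI.SUM, root=0)   # result available only at the root
everywhere = comm.allreduce(value, op=MPI.SUM)   # result available at every process

if rank == 0:
    print("reduce:", total)
print("allreduce on rank", rank, ":", everywhere)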

OpenMP offers a reduction clause for describing how the results from parallel operations are collected together.

MapReduce relies heavily on efficient reduction algorithms to process big data sets, even on huge clusters.
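
The map-then-reduce pattern itself can be expressed with the Python standard library (a toy, single-machine illustration, not a distributed MapReduce implementation):

from functools import reduce

words = ["map", "reduce", "map", "map", "reduce"]

# map phase: turn each element into a partial result (here, a count of 1 per word)
mapped = map(lambda w: {w: 1}, words)

# reduce phase: merge the partial dictionaries with an associative, commutative operator
def merge(a, b):
    out = dict(a)
    for key, count in b.items():
        out[key] = out.get(key, 0) + count
    return out

print(reduce(merge, mapped, {}))  # {'map': 3, 'reduce': 2}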

Some parallel sorting algorithms use reductions to be able to handle very big data sets.

References

  1. "Reduction Clause". www.dartmouth.edu. Dartmouth College. 23 March 2009. Retrieved 26 September 2016.
  2. Solihin, Yan (2016). Fundamentals of Parallel Multicore Architecture. CRC Press. p. 75. ISBN 978-1-4822-1118-4.
  3. Chandra, Rohit (2001). Parallel Programming in OpenMP. Morgan Kaufmann. pp. 59–77. ISBN 1558606718.
  4. Cole, Murray (2004). "Bringing skeletons out of the closet: a pragmatic manifesto for skeletal parallel programming" (PDF). Parallel Computing. 30 (3): 393. doi:10.1016/j.parco.2003.12.002. hdl:20.500.11820/8eb79d42-de83-4cfb-9faa-30d9ac3b3839.
  5. IEEE Computer Society (22 July 2019). "9.4 Reduction operations". IEEE Standard for Floating-Point Arithmetic. IEEE STD 754-2019. IEEE. pp. 1–84. doi:10.1109/IEEESTD.2019.8766229. ISBN 978-1-5044-5924-2. IEEE Std 754-2019.
  6. Bar-Noy, Amotz; Kipnis, Shlomo (1994). "Broadcasting multiple messages in simultaneous send/receive systems". Discrete Applied Mathematics. 55 (2): 95–105. doi:10.1016/0166-218x(94)90001-9.
  7. Santos, Eunice E. (2002). "Optimal and Efficient Algorithms for Summing and Prefix Summing on Parallel Machines". Journal of Parallel and Distributed Computing. 62 (4): 517–543. doi:10.1006/jpdc.2000.1698.
  8. Slater, P.; Cockayne, E.; Hedetniemi, S. (1981-11-01). "Information Dissemination in Trees". SIAM Journal on Computing. 10 (4): 692–701. doi:10.1137/0210052. ISSN 0097-5397.
  9. Rabenseifner, Rolf; Träff, Jesper Larsson (2004-09-19). "More Efficient Reduction Algorithms for Non-Power-of-Two Number of Processors in Message-Passing Parallel Systems". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. Vol. 3241. Springer, Berlin, Heidelberg. pp. 36–46. doi:10.1007/978-3-540-30218-6_13. ISBN 9783540231639.
  10. Bar-Noy, A.; Kipnis, S. (1994-09-01). "Designing broadcasting algorithms in the postal model for message-passing systems". Mathematical Systems Theory. 27 (5): 431–452. CiteSeerX 10.1.1.54.2543. doi:10.1007/BF01184933. ISSN 0025-5661. S2CID 42798826.
  11. Pješivac-Grbović, Jelena; Angskun, Thara; Bosilca, George; Fagg, Graham E.; Gabriel, Edgar; Dongarra, Jack J. (2007-06-01). "Performance analysis of MPI collective operations". Cluster Computing. 10 (2): 127–143. CiteSeerX 10.1.1.80.3867. doi:10.1007/s10586-007-0012-0. ISSN 1386-7857. S2CID 2142998.
  12. "10.9. Reduction — OpenMP Application Programming Interface Examples". passlab.github.io.
  13. Lämmel, Ralf (2008). "Google's MapReduce programming model — Revisited". Science of Computer Programming. 70 (1): 1–30. doi:10.1016/j.scico.2007.07.001.
  14. Senger, Hermes; Gil-Costa, Veronica; Arantes, Luciana; Marcondes, Cesar A. C.; Marín, Mauricio; Sato, Liria M.; da Silva, Fabrício A.B. (2016-06-10). "BSP cost and scalability analysis for MapReduce operations". Concurrency and Computation: Practice and Experience. 28 (8): 2503–2527. doi:10.1002/cpe.3628. hdl:10533/147670. ISSN 1532-0634. S2CID 33645927.
  15. Axtmann, Michael; Bingmann, Timo; Sanders, Peter; Schulz, Christian (2014-10-24). "Practical Massively Parallel Sorting". arXiv:1410.6754.