Revision as of 00:56, 7 April 2005
In Google's MapReduce programming model, parallel computations over large data sets are implemented by specifying a Map function that maps input key-value pairs to intermediate key-value pairs, and a subsequent Reduce function that consolidates all intermediate key-value pairs sharing the same key into a single key-value pair.
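The model can be illustrated with word counting, the canonical example from Dean and Ghemawat's paper. The sketch below is a minimal single-process illustration of the programming model, not Google's distributed implementation; the function names (map_fn, reduce_fn, map_reduce) are illustrative, not part of any actual API.

```python
from collections import defaultdict

def map_fn(key, value):
    # key: a document name; value: the document's contents.
    # Emit one (word, 1) intermediate pair per word occurrence.
    return [(word, 1) for word in value.split()]

def reduce_fn(key, values):
    # key: a word; values: all counts emitted under that word.
    # Consolidate all pairs sharing the same key into a single pair.
    return (key, sum(values))

def map_reduce(documents, map_fn, reduce_fn):
    # Map phase: apply map_fn to every input pair, grouping the
    # intermediate pairs by key.
    intermediate = defaultdict(list)
    for key, value in documents.items():
        for out_key, out_value in map_fn(key, value):
            intermediate[out_key].append(out_value)
    # Reduce phase: one reduce_fn call per distinct intermediate key.
    return dict(reduce_fn(k, vs) for k, vs in intermediate.items())

counts = map_reduce({"doc1": "to be or not to be"}, map_fn, reduce_fn)
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

In the distributed setting described in the paper, the map calls run in parallel across input splits and the reduce calls run in parallel across key partitions; this sketch performs both phases sequentially.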
References
- Dean, Jeffrey & Ghemawat, Sanjay (2004). "MapReduce: Simplified Data Processing on Large Clusters". Retrieved April 6, 2005.