In Google's MapReduce programming model, parallel computations over large data sets are expressed with two functions: a Map function that transforms each input key-value pair into a set of intermediate key-value pairs, and a Reduce function that merges all intermediate values sharing the same key into a single output key-value pair.
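A minimal sketch of the model is word counting, written here as plain sequential Python rather than Google's distributed implementation; the function and variable names are illustrative only:

from collections import defaultdict

def map_words(document_name, text):
    # Map step: emit an intermediate (word, 1) pair for every word in the input document.
    for word in text.split():
        yield word.lower(), 1

def reduce_counts(word, counts):
    # Reduce step: consolidate all values sharing the same key into a single pair.
    return word, sum(counts)

def map_reduce(inputs, mapper, reducer):
    # Sequential sketch: apply the mapper to every input pair,
    # group intermediate values by key, then reduce each group.
    grouped = defaultdict(list)
    for key, value in inputs:
        for out_key, out_value in mapper(key, value):
            grouped[out_key].append(out_value)
    return [reducer(key, values) for key, values in grouped.items()]

if __name__ == "__main__":
    documents = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]
    print(map_reduce(documents, map_words, reduce_counts))

In the real system the map and reduce invocations run in parallel across a cluster, with the grouping-by-key step handled by the framework.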
References
- Dean, Jeffrey; Ghemawat, Sanjay (2004). "MapReduce: Simplified Data Processing on Large Clusters". Retrieved April 6, 2005.
This article is a stub. You can help Misplaced Pages by expanding it.