This is an old revision of this page, as edited by Alexwg (talk | contribs) at 00:56, 7 April 2005. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
In Google's MapReduce programming model, parallel computations over large data sets are implemented by specifying a Map function that maps key-value pairs to new key-value pairs and a subsequent Reduce function that consolidates all mapped key-value pairs sharing the same keys into single key-value pairs.
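As a rough sketch of the model, the word-counting example from Dean and Ghemawat's paper can be expressed in Python. The function names and the single-machine driver loop below are illustrative assumptions, not Google's actual API; the real system distributes the Map and Reduce phases across a cluster.

```python
from collections import defaultdict

def map_fn(doc_name, text):
    """Map: emit an intermediate (word, 1) pair for every word."""
    for word in text.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """Reduce: consolidate all values sharing one key into a single pair."""
    return (word, sum(counts))

def map_reduce(documents):
    """Hypothetical single-machine driver for the two phases."""
    # Map phase: apply map_fn to every input key-value pair and
    # group the intermediate values by key.
    intermediate = defaultdict(list)
    for name, text in documents.items():
        for key, value in map_fn(name, text):
            intermediate[key].append(value)
    # Reduce phase: one reduce_fn call per distinct intermediate key.
    return dict(reduce_fn(k, v) for k, v in intermediate.items())

docs = {"d1": "the quick brown fox", "d2": "the lazy dog"}
print(map_reduce(docs))  # 'the' occurs twice, every other word once
```

Because each Map call and each Reduce call is independent, a runtime can execute them in parallel across many machines, which is the source of the model's scalability.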
References
- Dean, Jeffrey & Ghemawat, Sanjay (2004). "MapReduce: Simplified Data Processing on Large Clusters". Retrieved April 6, 2005.