Parallelism with Map/Reduce
In this article, we will explore the Map/Reduce approach to turning a sequential algorithm into a parallel one.
Overview of Map/Reduce
Since the "reduce" operation needs to accumulate results for the whole job, as well as having a communication overhead in sending and collecting data, the Map/Reduce model is more suitable for long running, batch-oriented jobs.
In the Map/Reduce model, "parallelism" is achieved via a "split/sort/merge/join" process and is described as follows.
- A Map/Reduce job starts from a predefined set of input data (usually sitting in some directory of a distributed file system). A master daemon, which acts as the central coordinator, is started and reads the job configuration.
- According to the job configuration, the master daemon starts multiple mapper daemons as well as reducer daemons on different machines, and then starts the input reader to read data from a DFS directory. The input reader chunks the data it reads and sends each chunk to a "randomly" chosen mapper. This is the "split" phase, and it begins the parallelism.
- After receiving its data chunks, each mapper daemon runs the "user-supplied map function" and produces a collection of (key, value) pairs. Each item in this collection is sorted according to its key and then sent to the corresponding reducer daemon. This is the "sort" phase.
- All items with the same key arrive at the same reducer daemon, which collects every item for that key and invokes the "user-supplied reduce function" to produce a single entry, (key, aggregatedValue), as the result. This is the "merge" phase.
- The output of each reducer daemon is collected by the output writer, which is effectively the "join" phase and ends the parallelism (a sketch of all four phases follows this list).
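To make the phases concrete, here is a minimal, single-process sketch in Java, using word counting as the canonical example. This is an illustration of the model only, not the Hadoop API: the class and method names are hypothetical, and in a real deployment the map and reduce steps run as daemons spread across machines.

```java
import java.util.*;

// A toy walk-through of the split/sort/merge/join phases in one process.
public class MapReduceSketch {

    // "User-supplied map function": turns one input chunk into (key, value) pairs.
    static List<Map.Entry<String, Integer>> map(String chunk) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : chunk.split("\\s+")) {
            pairs.add(Map.entry(word, 1));
        }
        return pairs;
    }

    // "User-supplied reduce function": aggregates all values seen for one key.
    static int reduce(String key, List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // Split phase: the input reader chunks the input data.
        List<String> chunks = List.of("the quick brown fox", "the lazy dog", "the fox");

        // Map + sort phase: run the map function on every chunk, then group the
        // emitted pairs by key (the TreeMap stands in for the sorted shuffle that
        // routes all items with the same key to the same reducer daemon).
        SortedMap<String, List<Integer>> grouped = new TreeMap<>();
        for (String chunk : chunks) {
            for (Map.Entry<String, Integer> pair : map(chunk)) {
                grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                       .add(pair.getValue());
            }
        }

        // Merge phase: each key's values collapse into one (key, aggregatedValue).
        Map<String, Integer> result = new LinkedHashMap<>();
        grouped.forEach((key, values) -> result.put(key, reduce(key, values)));

        // Join phase: the output writer collects and emits the final entries.
        result.forEach((word, count) -> System.out.println(word + " = " + count));
    }
}
```

Running the sketch prints one line per key: brown = 1, dog = 1, fox = 2, lazy = 1, quick = 1, the = 3. Note that only the map and reduce functions are job-specific; the split, sort, merge, and join machinery is generic, which is exactly what lets a framework like Hadoop parallelize those steps across machines.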