
MapReduce Design Patterns


This article covers some MapReduce design patterns and uses real-world scenarios to help you determine when to use each one.


This article is featured in the new DZone Guide to Big Data Processing, Volume III.

This article discusses four primary MapReduce design patterns:

1. Input-Map-Reduce-Output
2. Input-Map-Output
3. Input-Multiple Maps-Reduce-Output
4. Input-Map-Combiner-Reduce-Output

The following real-world scenarios will help you understand when to use each design pattern.

Input-Map-Reduce-Output

[Figure: Input-Map-Reduce-Output data flow]

If we want to perform an aggregation operation, this pattern is used:

[Figure: sample employee records used for the aggregation example]

To count the total salary by gender, we need to make the key Gender and the value Salary. The output for the Map function is:

[Figure: Map output — (Gender, Salary) key-value pairs]

Intermediate splitting gives the input for the Reduce function:

[Figure: intermediate split — salaries grouped by gender]

And the Reduce function output is:

[Figure: Reduce output — total salary per gender]
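The three stages above can be sketched in plain Python. This is an illustrative simulation, not Hadoop API code, and the employee records are hypothetical (they are not the values from the article's figures):

```python
from collections import defaultdict

# Hypothetical employee records: (Name, Gender, Salary).
records = [
    ("Alice", "Female", 90000),
    ("Bob", "Male", 80000),
    ("Carol", "Female", 75000),
    ("Dave", "Male", 60000),
]

# Map: emit (Gender, Salary) for each record.
mapped = [(gender, salary) for _, gender, salary in records]

# Shuffle/sort (intermediate splitting): group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: sum the salaries for each gender.
totals = {gender: sum(salaries) for gender, salaries in groups.items()}
print(totals)  # {'Female': 165000, 'Male': 140000}
```

In a real Hadoop job, the Map and Reduce steps run as separate distributed tasks and the framework performs the shuffle; the logic per stage is the same.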

Input-Map-Output

[Figure: Input-Map-Output data flow]

The Reduce function is mostly used for aggregation and calculation. However, if we only want to change the format of the data, then the Input-Map-Output pattern is used:

[Figure: record format before and after the map-only transformation]
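A map-only job of this kind can be sketched as below. The input rows and field layout are hypothetical; the point is that each record is transformed independently, so no Reduce stage is needed:

```python
# Hypothetical raw CSV rows: id,name,gender,salary.
rows = ["1,alice,FEMALE,90000", "2,bob,MALE,80000"]

def map_fn(line):
    """Map-only transformation: change the record format, nothing else."""
    emp_id, name, gender, salary = line.split(",")
    return {
        "id": int(emp_id),
        "name": name.title(),          # "alice" -> "Alice"
        "gender": gender.capitalize(), # "FEMALE" -> "Female"
        "salary": int(salary),
    }

output = [map_fn(line) for line in rows]
```

In Hadoop, this corresponds to setting the number of reduce tasks to zero, so the mapper output is written directly to the job output.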

Input-Multiple Maps-Reduce-Output

In the Input-Multiple Maps-Reduce-Output design pattern, our input is taken from two files, each of which has a different schema. (Note that if two or more files have the same schema, then there is no need for two mappers. We can simply write the same logic in one mapper class and provide multiple input files.)

[Figure: Input-Multiple Maps-Reduce-Output data flow]

This pattern is also used in Reduce-Side Join:

[Figure: Reduce-Side Join data flow]
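A minimal sketch of a reduce-side join, assuming two hypothetical inputs with different schemas joined on an employee id. Each mapper tags its records with their source, and the reducer combines the tagged records that share a key:

```python
from collections import defaultdict

# Hypothetical inputs with different schemas.
employees = ["1,Alice,Female", "2,Bob,Male"]  # id,name,gender
salaries = ["1,90000", "2,80000"]             # id,salary

def employee_mapper(line):
    emp_id, name, gender = line.split(",")
    return emp_id, ("EMP", name, gender)  # tag the record's source

def salary_mapper(line):
    emp_id, salary = line.split(",")
    return emp_id, ("SAL", int(salary))

# Shuffle: records from both mappers are grouped by the join key.
groups = defaultdict(list)
for line in employees:
    key, value = employee_mapper(line)
    groups[key].append(value)
for line in salaries:
    key, value = salary_mapper(line)
    groups[key].append(value)

# Reduce: join the tagged records for each key.
joined = {}
for emp_id, values in groups.items():
    emp = next(v for v in values if v[0] == "EMP")
    sal = next(v for v in values if v[0] == "SAL")
    joined[emp_id] = {"name": emp[1], "gender": emp[2], "salary": sal[1]}
```

In Hadoop, the two mappers would be wired to their input files with the `MultipleInputs` class, and the framework's shuffle would do the grouping.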

Input-Map-Combiner-Reduce-Output

[Figure: Input-Map-Combiner-Reduce-Output data flow]


A Combiner, also known as a semi-reducer, is an optional class that accepts the output of the Map class and passes its own output key-value pairs on to the Reducer class. The purpose of the Combiner is to reduce the workload of the Reducer.

In a typical MapReduce program, roughly 20% of the work is done in the Map stage, also known as the data preparation stage. This stage runs fully in parallel.

The remaining 80% of the work is done in the Reduce stage, known as the calculation stage. This stage offers less parallelism, so it is slower than the Map phase. To reduce computation time, some of the Reduce phase's work can be done earlier, in a Combiner phase.
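The effect of a combiner can be sketched as local pre-aggregation on each map split before the shuffle. The (gender, salary) pairs and split boundaries below are hypothetical:

```python
from collections import defaultdict

# Hypothetical (gender, salary) pairs produced by two map splits.
split_a = [("Female", 90000), ("Male", 80000), ("Female", 75000)]
split_b = [("Male", 60000), ("Female", 50000)]

def combine(pairs):
    """Combiner: pre-sum each key locally so fewer pairs are shuffled."""
    partial = defaultdict(int)
    for key, value in pairs:
        partial[key] += value
    return list(partial.items())

# 4 combined pairs are shuffled instead of the 5 raw pairs.
combined = combine(split_a) + combine(split_b)

# Reduce: final sum per key — identical to reducing the raw pairs.
totals = defaultdict(int)
for key, value in combined:
    totals[key] += value
```

This only works because summation is associative and commutative; a combiner must produce the same final result whether it runs zero, one, or many times.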

Scenario

There are five departments, and we have to calculate the total salary by department, then by gender. However, there are additional rules for calculating those totals. After calculating the total for each department by gender:

If the total department salary is greater than 200K, add 25K to the total.

Otherwise, if the total department salary is greater than 100K, add 10K to the total.

[Figure: Input-Map-Combiner-Reduce-Output applied to the department salary scenario]
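The scenario's logic can be sketched as follows. The records and salary figures are hypothetical, and the two rules are interpreted as mutually exclusive (the larger bonus wins):

```python
from collections import defaultdict

# Hypothetical (department, gender, salary) records.
records = [
    ("IT", "Male", 120000), ("IT", "Female", 110000),
    ("HR", "Female", 60000), ("HR", "Male", 50000),
]

# Map keys on (department, gender); Combiner/Reduce sum per key.
totals = defaultdict(int)
for dept, gender, salary in records:
    totals[(dept, gender)] += salary

def apply_rules(total):
    """Assumed interpretation: apply at most one bonus, largest first."""
    if total > 200_000:
        return total + 25_000
    if total > 100_000:
        return total + 10_000
    return total

adjusted = {key: apply_rules(total) for key, total in totals.items()}
```

The per-key summation can safely run in a combiner, but the bonus rules must run only in the reducer: they are applied once, on the final total, so they are not safe to re-execute on partial sums.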



