
MapReduce Succinctly: An Instructional Implementation From Scratch of MapReduce Using Scala and Akka



We recently had the opportunity to present a talk on the MapReduce programming paradigm for the Raleigh-Durham chapter of Girl Develop It at Syncfusion’s office in Morrisville, NC.

The primary goal was to walk through a tutorial implementation of MapReduce to inspire participants in their own efforts.

This blog presents the main segments of the talk. If you would like to watch a video of the entire session, you can find the recording on our YouTube channel.

The Need for a Cluster to Store and Process Data

Data is becoming more plentiful and, in order to process and store such data in a scalable manner, we need to leverage the resources of a cluster of machines instead of relying on a single machine, however powerful that single machine may be.

Working with a cluster of machines does pose its own set of problems. The MapReduce programming paradigm was used by Google to successfully process data across a cluster of machines over a decade ago. They documented their experience in several papers.

These papers were later used as a guide to implement MapReduce in Apache Hadoop.

Our goal is to take a simple scenario and, by considering how to implement a processing solution in a scalable manner, arrive at the MapReduce programming model. The hope is that such a walk-through will lead to a deeper understanding of the internals of MapReduce.

Scala Syntax Introduction

The sample code provided is written in Scala. The essential syntax can be found on GitHub: github.com/danjebaraj/hadoopmr/blob/master/scalaquick/src/quickstart.sc.

The Scala worksheet linked above contains all the Scala syntax we need to get started. Note that the objective of the talk and the associated code is not to write idiomatic Scala. In fact, we avoid using map and reduce on collections and instead stick to using loops to build toward the MapReduce programming model.

Running the sample Scala worksheet requires you to install the Scala IDE from scala-ide.org. The Scala IDE is also useful when working with the other samples, but any other IDE, such as Eclipse or IntelliJ, will also work well.
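For readers who want a taste of the language before opening the worksheet, here is a minimal sketch of the Scala basics the samples rely on: immutable values, tuples, and plain loops. The names and values here are illustrative, not taken from the worksheet.

```scala
// Immutable values; types are usually inferred
val beverage: String = "tea"
val cups = 3 // inferred as Int

// Tuples pair related values without defining a class
val record = ("Alice", "tea", 3)
val name = record._1 // access fields by position

// A simple for loop with destructuring and a filter
val data = List(("tea", 3), ("coffee", 2), ("tea", 1))
var total = 0
for ((bev, c) <- data if bev == "tea") total += c
// total is now 4 (3 + 1)
```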

The Challenge

We have sample data in the following table.

Person’s name | Favorite beverage | Number of cups the person drinks per day
The goal is to take the data in this table and compute the average number of cups of each beverage consumed per day across the data set.

This is a simple problem along the same lines as the commonly used WordCount example. Our motivation for using this data was that it readily mapped to gathering and computing the same information from people in the audience.

The Implementation

Our first attempt at computing summaries uses a simple, single-threaded solution. These are the steps we follow:

  1. We loop through and transform the data, removing data that we don’t need. In this instance, we don’t need the name of each person; we are simply interested in beverages and cups consumed.
  2. The resulting data set is a list of tuples with each tuple containing a beverage name and cups consumed.
  3. We group the data by beverage.
  4. The grouped data is then summarized per group to produce results.
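The four steps above can be sketched as follows. In keeping with the talk, this sketch uses explicit loops rather than idiomatic collection methods; the data values are made up for illustration, not taken from the actual sample.

```scala
import scala.collection.mutable

// Sample rows: (name, beverage, cups per day) -- illustrative values
val rows = List(("Alice", "tea", 3), ("Bob", "coffee", 2),
                ("Carol", "tea", 1), ("Dave", "coffee", 4))

// 1-2. Transform: drop the name, keeping (beverage, cups) tuples
var pairs = List.empty[(String, Int)]
for ((_, bev, cups) <- rows) pairs = pairs :+ ((bev, cups))

// 3. Group the tuples by beverage
val groups = mutable.Map.empty[String, List[Int]]
for ((bev, cups) <- pairs)
  groups(bev) = groups.getOrElse(bev, Nil) :+ cups

// 4. Summarize each group: average cups consumed per day
val averages = mutable.Map.empty[String, Double]
for ((bev, counts) <- groups)
  averages(bev) = counts.sum.toDouble / counts.size
// averages: tea -> 2.0, coffee -> 3.0 for this sample data
```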

The complete code for these steps is hosted on GitHub: github.com/danjebaraj/hadoopmr/blob/master/cupcount/src/main/scala/Driver.scala.

Can We Make the Code Parallel?

In order to make the code parallel, we must consider the issue of shared state. In general, shared state is trouble and should be avoided where possible. It cannot, of course, be totally avoided, but if we are deliberate in the way we design our program we can push most of the complexity down into libraries that handle most of the heavy lifting. The key idea is to make the program concurrent by design. For more details on concurrency, refer to this article on The Go Blog.

To illustrate issues with shared state, we have a simple sample available at github.com/danjebaraj/hadoopmr/blob/master/sharedstate/src/main/scala/Driver.scala. You can safely skip this sample if you are already familiar with the issues surrounding shared state.
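As a quick sketch of the classic hazard (this is not the linked sample, just the general pattern): two threads incrementing an unsynchronized shared counter can lose updates, because each increment is a read-modify-write rather than a single atomic step.

```scala
// Two threads increment a shared, unsynchronized counter.
// Each "counter += 1" is a read, an add, and a write -- not atomic.
var counter = 0
def increment(times: Int): Unit =
  for (_ <- 1 to times) counter += 1 // race: updates can interleave

val t1 = new Thread(() => increment(100000))
val t2 = new Thread(() => increment(100000))
t1.start(); t2.start()
t1.join(); t2.join()

// counter may end up less than 200000 because interleaved
// read-modify-write cycles can silently discard increments
println(counter)
```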

We use the Akka library to break up the program into concurrent units. Akka works through a message processing paradigm. The core idea is that a program can be broken into independent units of work that communicate by sending each other messages with no state shared between them. In Akka terms, these concurrent units are referred to as actors. Actors can communicate through messages but do not typically share any state.

Akka makes it easy to create a pool of actors to service requests by using more than one instance of an actor. We use the RoundRobinPool for this purpose.

We explain Akka fundamentals using a sample that breaks up a large list of numbers into several partitions that are then summed by different units of work. The result is then summarized for ultimate presentation to the user.

Code for this section is available on GitHub: github.com/danjebaraj/hadoopmr/blob/master/akkaquick/src/main/scala/Driver.scala.
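The linked Akka sample sends each partition to an actor and collects the partial sums. As a library-free sketch of the same data flow, here is the partition-and-sum pattern expressed with scala.concurrent Futures; the futures stand in for the actor plumbing, so this illustrates the idea, not the Akka API.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val numbers = (1 to 1000).toList

// Break the large list into independent partitions (units of work)
val partitions = numbers.grouped(100).toList

// Each partition is summed concurrently -- in the talk,
// each partition would be a message sent to a pooled actor
val partialSums: List[Future[Int]] =
  partitions.map(part => Future { part.sum })

// Summarize the partial results into the final answer
val total = Await.result(Future.sequence(partialSums), 10.seconds).sum
// total == 500500, the sum of 1 to 1000
```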

A Parallel Implementation of Cup Count Using Akka

We can easily make the first operation, the transformation of data by dropping names, parallel. We essentially break the data into blocks and send each block to be worked on independently by Akka actors. Each Akka actor, named FileProcessor in this instance, will process the blocks that it is sent and then report back to the sender with transformed information. Since the data is stored across files, we simply treat each file as a block for convenience.

The processed blocks are then received by an actor named Aggregator. The Aggregator groups and summarizes the data just as a single-threaded program would:
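The shape of this first parallel attempt can be sketched as follows. Futures stand in for the FileProcessor actors and the data values are illustrative; the point is that the transform runs per block in parallel while grouping and summarizing remain single-threaded.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Each "file" is one block of (name, beverage, cups) rows
val blocks = List(
  List(("Alice", "tea", 3), ("Bob", "coffee", 2)),
  List(("Carol", "tea", 1), ("Dave", "coffee", 4))
)

// Parallel phase: each block is transformed independently,
// dropping the name (the role of the FileProcessor actors)
val transformed = Future.sequence(
  blocks.map(b => Future(b.map { case (_, bev, c) => (bev, c) }))
).map(_.flatten)

// Aggregator phase: grouping and summarizing on one thread
val pairs = Await.result(transformed, 10.seconds)
val averages = pairs.groupBy(_._1)
  .map { case (bev, xs) => bev -> (xs.map(_._2).sum.toDouble / xs.size) }
```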


Can We Do Better?

You will notice that the grouping and summarization operations happen within a single thread in our first attempt at a parallel model. We can easily make the summary calculation itself parallel since aggregating all the data for, say, tea does not have anything to do with the data for coffee. This is what we do in our second take. We group the data as before and then pass it on to a group of actors for the calculation of summaries. Here, the summarizer actor simply computes summaries for each group and reports them back:
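The second take can be sketched like this, again with Futures standing in for the summarizer actors and illustrative data: grouping happens first, and then each group's summary is computed concurrently, which is safe because the groups are independent.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val pairs = List(("tea", 3), ("coffee", 2), ("tea", 1), ("coffee", 4))

// Group as before -- tea's data is independent of coffee's
val groups: Map[String, List[Int]] =
  pairs.groupBy(_._1).map { case (bev, xs) => bev -> xs.map(_._2) }

// Each group's summary is computed concurrently
// (the role of the summarizer actors in the talk)
val summaries = Future.sequence(
  groups.toList.map { case (bev, cups) =>
    Future(bev -> (cups.sum.toDouble / cups.size))
  }
)
val averages = Await.result(summaries, 10.seconds).toMap
```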


The Final Iteration: MapReduce

At this point, we have a working implementation of MapReduce.



The Map operation transforms the data from the available form into the desired form. In this instance, we perform a one-to-one mapping, creating exactly one output record per input record. This, however, is not a requirement. We can just as easily generate more than one record per input record. The only requirement is that there is a key and a value per record.
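A map function of this shape might be sketched as follows (the names here are illustrative): it receives one input record and emits a list of (key, value) pairs, so one-to-one, one-to-many, and one-to-none mappings all fit the same signature.

```scala
// Map: transform each input record into zero or more (key, value) pairs.
// Here one input row yields exactly one (beverage, cups) record,
// but returning a List means nothing prevents emitting several.
def mapRecord(row: (String, String, Int)): List[(String, Int)] = {
  val (_, beverage, cups) = row // the name is dropped
  List(beverage -> cups)
}

val mapped = List(("Alice", "tea", 3), ("Bob", "coffee", 2)).flatMap(mapRecord)
// mapped == List(("tea", 3), ("coffee", 2))
```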

Map operation (user-provided code). The MapReduce framework handles the partitioning of data into blocks.

The code then groups by key. This internally involves sorting. The grouped data is then provided to different aggregators, or reducers as they are known under MapReduce.
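On a single machine, the shuffle step can be sketched as a sort by key followed by grouping, so that each reducer receives all the values for its key together. This is only a stand-in for the framework's distributed shuffle, with made-up data.

```scala
val mapped = List(("tea", 3), ("coffee", 2), ("tea", 1), ("coffee", 4))

// Shuffle: sort the records by key so equal keys are adjacent...
val sorted = mapped.sortBy(_._1)

// ...then collect each key's values into one group for its reducer
val grouped: Map[String, List[Int]] =
  sorted.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2) }
// grouped("tea") == List(3, 1); grouped("coffee") == List(2, 4)
```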

Shuffle: developer.yahoo.com/hadoop/tutorial/module4.html#dataflow.

The reducer code receives all the data per key (group) and aggregates it into the desired value.

Reduce (user-provided code).
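A reducer of this shape might look like the following sketch (names are illustrative): it receives one key together with all of that key's values and emits a single result, here the average.

```scala
// Reduce: all values for one key arrive together;
// emit one (key, result) pair -- here, the average cups per day
def reduceGroup(key: String, values: List[Int]): (String, Double) =
  key -> (values.sum.toDouble / values.size)

val result = reduceGroup("tea", List(3, 1))
// result == ("tea", 2.0)
```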

The final code can be found on GitHub:

Other Important Elements

Minimizing Network Traffic

Apache Hadoop’s implementation of storage (the Hadoop Distributed File System, HDFS) is designed to minimize network traffic. Chunks of data are replicated on multiple machines. When a MapReduce program is initialized for execution, the code that performs the Map operation is run on machines where the data is stored, to the extent possible. When a local run is not possible, data locality is still considered during scheduling: the Map operation will be scheduled on a machine connected to the same rack as a machine that holds a copy of the data before a machine on another rack is considered. This allows the system to minimize the movement of data across the network, thereby sidestepping a major bottleneck.

Handling Node Failure

In the real world, especially on a large cluster, machines can fail quite often. The Hadoop implementation of MapReduce does a lot of work beneath the surface to recover from failure by rescheduling jobs on other nodes where the same data is available. Additionally, if a node takes a long time to complete a job (slow nodes are known as stragglers), the framework can speculatively schedule execution on another node and cancel the original job if the newer job completes earlier.

Complete Sample Code and Slides

Sample code: The complete set of samples is available at github.com/danjebaraj/hadoopmr. You need to install a recent JDK distribution and SBT to get started.

Slides: The slides used with the presentation can be viewed at www.slideshare.net/DanielJebaraj/mapreduce-succinctly. The talk itself was originally titled "Hadoop Succinctly," but it focused more on MapReduce than Apache Hadoop.

Run the samples and take a look at the code to build a solid working understanding of how MapReduce works under the hood.

Interested in Apache Hadoop?

If you are interested in working with Apache Hadoop on Windows, be sure to check out the Syncfusion Big Data Platform designed for Windows at www.syncfusion.com/products/big-data.


Original article by Daniel Jebaraj


Published at DZone with permission of Daniel Jebaraj. See the original article here.
