
Word Count Programs: Spark vs. MapReduce


Why use Spark for big data processing? Compare these two word count program examples.


Apache Spark is a powerful data processing tool in the distributed computing arena.

Compared to Hadoop MapReduce, it is up to 10x faster on disk and up to 100x faster in memory.

The only complexity with Apache Spark is that it is built on Scala, so there is a learning curve involved. Take my word for it, though: Scala is easy to pick up, and if you come from a Java background, you will really like it. :)

Today I will compare simple word count examples implemented with both MapReduce and Spark.

Word Count Example (MapReduce)

     package org.myorg;

     import java.io.IOException;
     import java.util.*;

     import org.apache.hadoop.fs.Path;
     import org.apache.hadoop.conf.*;
     import org.apache.hadoop.io.*;
     import org.apache.hadoop.mapred.*;
     import org.apache.hadoop.util.*;

     public class WordCount {

        // Mapper: tokenizes each input line and emits (word, 1) for every token
        public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
          private final static IntWritable one = new IntWritable(1);
          private Text word = new Text();

          public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
              word.set(tokenizer.nextToken());
              output.collect(word, one);
            }
          }
        }

        // Reducer: sums the counts emitted for each word
        public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
          public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
              sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
          }
        }

        public static void main(String[] args) throws Exception {
          JobConf conf = new JobConf(WordCount.class);
          conf.setJobName("wordcount");

          conf.setOutputKeyClass(Text.class);
          conf.setOutputValueClass(IntWritable.class);

          conf.setMapperClass(Map.class);
          conf.setCombinerClass(Reduce.class);
          conf.setReducerClass(Reduce.class);

          conf.setInputFormat(TextInputFormat.class);
          conf.setOutputFormat(TextOutputFormat.class);

          FileInputFormat.setInputPaths(conf, new Path(args[0]));
          FileOutputFormat.setOutputPath(conf, new Path(args[1]));

          JobClient.runJob(conf);
        }
     }
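
To run this, the class is typically compiled against the Hadoop libraries, packaged into a jar, and launched with the hadoop jar command, passing the input and output HDFS paths as the two arguments that main expects.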

Word Count Example (Spark)

 // "spark" here is the SparkContext (in the spark-shell it is available as sc)
 val file = spark.textFile("hdfs://...")

 // split each line into words, map each word to (word, 1), and sum the counts per word
 val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

 counts.saveAsTextFile("hdfs://...")
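
The snippet above is what you would type into the spark-shell. For reference, here is a minimal sketch of the same logic packaged as a standalone Scala application; the object name, app name, and the use of args for the input/output paths are placeholders of mine, so adapt them to your setup.

     import org.apache.spark.{SparkConf, SparkContext}

     object SparkWordCount {
       def main(args: Array[String]): Unit = {
         // Placeholder app name; the master URL is normally supplied by spark-submit
         val conf = new SparkConf().setAppName("SparkWordCount")
         val sc = new SparkContext(conf)

         // The same three-step pipeline: split each line into words,
         // pair each word with 1, and sum the counts per word
         val counts = sc.textFile(args(0))
           .flatMap(line => line.split(" "))
           .map(word => (word, 1))
           .reduceByKey(_ + _)

         counts.saveAsTextFile(args(1))
         sc.stop()
       }
     }

Such an application would typically be built into a jar (for example with sbt) and submitted to the cluster with spark-submit.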

Notice that it took about 58 lines to implement the WordCount program using the MapReduce paradigm, while the same word count needs just 3 lines of code in Spark.

Spark, then, is a really powerful data processing tool from both a latency and a code-readability point of view.

I hope this article helps clarify why you should consider Spark for big data processing.


Topics:
big data, spark, mapreduce, scala, java

