
Word Count Programs: Spark vs. MapReduce

Why use Spark for big data processing? Compare these two word count program examples.

By Shiv Shet · Mar. 23, 16 · Big Data Zone · Tutorial


Apache Spark is a powerful data processing engine in the distributed computing arena.

Compared to Hadoop MapReduce, Spark is often benchmarked at up to 10x faster for disk-based workloads and up to 100x faster for in-memory workloads.

The main hurdle with Apache Spark is that it is written in Scala, so there is a learning curve involved. But take my word for it: the API is easy to pick up, and if you are a developer from a Java background, you will really like it. :)

Today I will compare simple word count examples implemented with both MapReduce and Spark.

Word Count Example (MapReduce)

    package org.myorg;

    import java.io.IOException;
    import java.util.*;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.conf.*;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapred.*;
    import org.apache.hadoop.util.*;

    public class WordCount {

      // Mapper: emits a (word, 1) pair for every token in the input line
      public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
          String line = value.toString();
          StringTokenizer tokenizer = new StringTokenizer(line);
          while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
          }
        }
      }

      // Reducer: sums the 1s emitted for each word
      public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();
          }
          output.collect(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class); // combiner pre-aggregates map output locally
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
      }
    }
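To make the two phases above concrete, the same map → shuffle → reduce flow can be simulated in plain Java without any Hadoop dependency: the mapper emits (word, 1) pairs, the framework's shuffle groups them by key, and the reducer sums each group. This is a minimal sketch for illustration (the class name `WordCountSim` and the sample input lines are hypothetical, not part of the Hadoop API):

```java
import java.util.*;

public class WordCountSim {
    // Simulates map -> shuffle/group -> reduce for word count, entirely in memory
    static Map<String, Integer> wordCount(List<String> lines) {
        // Map phase: emit a (word, 1) pair for every token of every line
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            StringTokenizer tok = new StringTokenizer(line);
            while (tok.hasMoreTokens()) {
                pairs.add(new AbstractMap.SimpleEntry<>(tok.nextToken(), 1));
            }
        }

        // Shuffle phase: the framework groups all emitted values by key
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }

        // Reduce phase: sum the grouped values for each word
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> g : grouped.entrySet()) {
            int sum = 0;
            for (int v : g.getValue()) sum += v;
            counts.put(g.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(Arrays.asList("to be or not", "to be")));
        // prints {be=2, not=1, or=1, to=2}
    }
}
```

In a real cluster, of course, the map and reduce phases run on different machines and the "grouping" step is a distributed shuffle over the network, which is where most of the framework's complexity (and I/O cost) lives.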

Word Count Example (Spark)

 // sc is an existing SparkContext
 val file = sc.textFile("hdfs://...")

 val counts = file.flatMap(line => line.split(" "))
                  .map(word => (word, 1))
                  .reduceByKey(_ + _)

 counts.saveAsTextFile("hdfs://...")
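For Java developers, it may help to see that Spark's flatMap → map → reduceByKey chain has a direct local analogue in Java 8 streams. This is not Spark's API, just a plain-Java sketch of the same pipeline shape (class name and sample input are hypothetical, and the in-memory list stands in for the HDFS file):

```java
import java.util.*;
import java.util.stream.*;

public class SparkStyleWordCount {
    // flatMap(split) -> map(word -> (word, 1)) -> reduceByKey(_ + _), run locally
    static Map<String, Integer> counts(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split(" ")))          // flatMap
                .collect(Collectors.toMap(w -> w, w -> 1, Integer::sum)); // map + reduceByKey
    }

    public static void main(String[] args) {
        System.out.println(counts(Arrays.asList("to be or not", "to be")));
    }
}
```

The crucial difference is that Spark evaluates this pipeline lazily across a cluster of machines, while the stream version runs eagerly on one JVM; the code shape, however, is nearly identical.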

Notice that it took nearly 60 lines to implement the word count program using the MapReduce paradigm, while the same logic takes only a few lines of Spark code.

So Spark is a really powerful data processing tool from both a latency and a readability point of view.

I hope this article helps clarify why you should consider Spark for big data processing.
