
Simple Word Count Program in Spark 2.0


Big Data is getting bigger in 2017, so get started with Spark 2.0 now. This blog will give you a head start with an example of a word count program.


In this blog, we will write a very basic word count program in Spark 2.0 using IntelliJ and sbt, so let's get started. If you are not familiar with Spark 2.0, you can learn about it here.

Start up IntelliJ and create a new project. First, add the Spark 2.0 dependencies to your build.sbt:

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.1"
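
Taken together, a minimal build.sbt could look like the following sketch. The project name and version here are placeholders; the Scala version matters because Spark 2.0.x artifacts are published for Scala 2.11, and the %% operator appends it to the artifact name:

name := "spark-word-count"
version := "0.1"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.1",
  "org.apache.spark" %% "spark-sql" % "2.0.1"
)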

Let the dependencies be resolved. Now add a text file named wordCount.txt to your resources folder; this is the file we will apply our word count logic to. Then add an object named Word_Count_Example to your main source file.
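
For illustration, wordCount.txt can contain any plain text; a small sample such as the following is enough to see the counting in action:

hello spark hello scala
spark makes big data simple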

Now perform the following steps:

  • Create a SparkSession using the org.apache.spark.sql.SparkSession builder API and specify your master and app name.
  • Using the sparkSession.read.text method, read the file wordCount.txt into a Dataset[String]. If you don't know what a Dataset looks like, you can learn about it from this link.
  • Split this Dataset of strings on whitespace and build a map containing the number of occurrences of each word in the data set.
  • Create an implicit class PrettyPrintMap for printing the result to the console.

Below is the complete code:
import org.apache.spark.sql.{Dataset, SparkSession}


object Word_Count_Example extends App {

  // Create a SparkSession, specifying the master and the application name
  val sparkSession = SparkSession.builder
    .master("local")
    .appName("Word_Count_Example")
    .getOrCreate()

  // Resolve the current working directory so the resource file can be located with a relative path
  def getCurrentDirectory = new java.io.File(".").getCanonicalPath


  // Needed for the implicit Encoders used by .as[String] and flatMap
  import sparkSession.implicits._

  try {
    // Read each line of wordCount.txt into a Dataset[String]
    val data: Dataset[String] = sparkSession.read
      .text(getCurrentDirectory + "/src/main/resources/wordCount.txt")
      .as[String]

    // Split every line on whitespace, collect the words, and count the occurrences of each word
    val wordsMap = data.flatMap(value => value.split("\\s+"))
      .collect().toList.groupBy(identity).mapValues(_.size)

    println(wordsMap.prettyPrint)
  } catch {
    case exception: Exception => exception.printStackTrace()
  }

  // Implicit class that adds prettyPrint to any Map, producing a readable multi-line string
  implicit class PrettyPrintMap[K, V](val map: Map[K, V]) {
    def prettyPrint: PrettyPrintMap[K, V] = this

    override def toString: String = {
      val valuesString = toStringLines.mkString("\n")

      "Map (\n" + valuesString + "\n)"
    }
    def toStringLines = {
      map
        .flatMap{ case (k, v) => keyValueToString(k, v)}
        .map(indentLine)
    }

    def keyValueToString(key: K, value: V): Iterable[String] = {
      value match {
        case v: Map[_, _] => Iterable(key + " -> Map (") ++ v.prettyPrint.toStringLines ++ Iterable(")")
        case x => Iterable(key + " -> " + x.toString)
      }
    }

    def indentLine(line: String): String = {
      "\t" + line
    }

  }

}
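
Running the object (for example with sbt run, or directly from IntelliJ) against the sample file above should print something like the following; the ordering of entries may differ, since a Scala Map does not guarantee ordering:

Map (
	hello -> 2
	spark -> 2
	scala -> 1
	makes -> 1
	big -> 1
	data -> 1
	simple -> 1
)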

You can also clone the complete code from GitHub.


Topics:
big data, spark, java, intellij

Published at DZone with permission of Anubhav Tarar, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
