
Simple Word Count Program in Spark 2.0

Big Data is getting bigger in 2017, so get started with Spark 2.0 now. This blog will give you a head start with an example of a word count program.

By Anubhav Tarar · Jan. 11, 17 · Tutorial


In this blog post, we will write a very basic word count program in Spark 2.0 using IntelliJ IDEA and sbt, so let's get started. If you are not familiar with Spark 2.0, you can learn about it here.

Start up IntelliJ IDEA and create a new sbt project. First, add the dependencies for Spark 2.0 to your build.sbt:

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
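
For reference, a complete build.sbt might look like the following. The project name and the exact Scala version here are illustrative assumptions; the Spark 2.0 artifacts above are built for Scala 2.11, so any 2.11.x release should work.

name := "Word_Count_Example"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"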

Let the dependencies resolve. Now add a text file named wordCount.txt to your resources folder; this is the file we will apply our word count logic to. Then add an object named Word_Count_Example to your main source file.
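
For illustration, assume wordCount.txt contains a few lines like the following (any text will do, as long as the file name matches the path used in the code below):

hello spark
hello big data
spark is simple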

Now perform the following steps:

  • Create a SparkSession from the org.apache.spark.sql.SparkSession API and specify your master and app name.
  • Using the sparkSession.read.text method, read the file wordCount.txt; the return value of this method is a Dataset. If you don't know what a Dataset looks like, you can learn about it from this link.
  • Split this Dataset of type String on whitespace and build a map that contains the occurrence count of each word in the data set.
  • Create an implicit class, PrettyPrintMap, for printing the result to the console.

Below is the complete code:
import org.apache.spark.sql.{Dataset, SparkSession}

object Word_Count_Example extends App {

  // Create a SparkSession with a local master and an application name.
  val sparkSession = SparkSession.builder
    .master("local")
    .appName("Word_Count_Example")
    .getOrCreate()

  def getCurrentDirectory = new java.io.File(".").getCanonicalPath

  // Required for the .as[String] conversion below.
  import sparkSession.implicits._

  try {
    // Read the text file as a Dataset[String], one element per line.
    val data: Dataset[String] = sparkSession.read
      .text(getCurrentDirectory + "/src/main/resources/wordCount.txt")
      .as[String]
    // Split each line on whitespace, then count the occurrences of each word.
    val wordsMap = data.flatMap(value => value.split("\\s+"))
      .collect().toList.groupBy(identity).mapValues(_.size)
    println(wordsMap.prettyPrint)
  } catch {
    case exception: Exception => exception.printStackTrace()
  } finally {
    sparkSession.stop()
  }

  // Implicit class that pretty-prints a Map, one "key -> value" pair per line.
  implicit class PrettyPrintMap[K, V](val map: Map[K, V]) {
    def prettyPrint: PrettyPrintMap[K, V] = this

    override def toString: String = {
      val valuesString = toStringLines.mkString("\n")
      "Map (\n" + valuesString + "\n)"
    }

    def toStringLines = {
      map
        .flatMap { case (k, v) => keyValueToString(k, v) }
        .map(indentLine)
    }

    def keyValueToString(key: K, value: V): Iterable[String] = {
      value match {
        // Nested maps are pretty-printed recursively.
        case v: Map[_, _] => Iterable(key + " -> Map (") ++ v.prettyPrint.toStringLines ++ Iterable(")")
        case x => Iterable(key + " -> " + x.toString)
      }
    }

    def indentLine(line: String): String = "\t" + line
  }

}
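
Run the program with sbt run from the project root. With the sample wordCount.txt shown earlier, the output would look something like this (the entries come from a groupBy, so their order is not guaranteed):

Map (
	hello -> 2
	spark -> 2
	big -> 1
	data -> 1
	is -> 1
	simple -> 1
)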

You can also clone the complete code from GitHub.


Published at DZone with permission of Anubhav Tarar, DZone MVB.
