Streaming with Apache Spark 2.0


Let's learn how to do streaming in Apache Spark 2.0 — Scala with examples included.


Hello geeks! We discussed Apache Spark 2.0 with Hive in an earlier blog. Now I am going to describe how we can use Spark to stream data. First, we need to understand the new Spark Streaming architecture.

Spark 2.0 simplified the Streaming API and lets us access stream data in the form of DataFrames and Datasets. With this new architecture, we can process our streamed data according to our business logic using DataFrames. That is the simple concept behind the architecture.

So we have two approaches to using Spark Streaming programmatically:

  • by using a predefined receiver, and
  • by creating a custom receiver

First, we will stream our data using a predefined receiver.

Add the following dependencies:

  • "org.apache.spark" %% "spark-core" % "2.0.0",
  • "org.apache.spark" %% "spark-sql" % "2.0.0",
  • "org.apache.spark" %% "spark-hive" % "2.0.0",
  • "org.apache.spark" %% "spark-streaming" % "2.0.0"
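In `build.sbt`, the dependencies above look like this (the `scalaVersion` shown is an assumption — use the Scala 2.11 build that matches your Spark 2.0.0 artifacts):

```scala
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"      % "2.0.0",
  "org.apache.spark" %% "spark-sql"       % "2.0.0",
  "org.apache.spark" %% "spark-hive"      % "2.0.0",
  "org.apache.spark" %% "spark-streaming" % "2.0.0"
)
```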

As we know, the entry point of Spark in the current version is SparkSession. So:

val sparkSession = SparkSession.builder.master("local").appName("demo").getOrCreate()

Now you need a stream reader:

val dataFrame : DataFrame = sparkSession.readStream.load("your/path")

Now that we have the stream data as a DataFrame, we can apply any business logic to it.
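Putting the pieces together, here is a minimal end-to-end sketch: read a directory of text files as a stream, apply a word count as stand-in business logic, and print the running counts to the console. The path, the `text` format, and the word-count logic are my own placeholders, not from the original article; note that `load()` without an explicit `format()` defaults to parquet.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.OutputMode

object StreamingDemo extends App {

  // Entry point, as above.
  val sparkSession = SparkSession.builder
    .master("local[2]")
    .appName("demo")
    .getOrCreate()

  import sparkSession.implicits._

  // Read a directory of text files as a stream; each new file becomes new rows.
  val lines = sparkSession.readStream
    .format("text")        // without format(), load() defaults to parquet
    .load("your/path")

  // Business logic on the streaming DataFrame: a simple word count.
  val wordCounts = lines.as[String]
    .flatMap(_.split(" "))
    .groupBy("value")
    .count()

  // Start the query, writing the complete counts to the console on each trigger.
  val query = wordCounts.writeStream
    .outputMode(OutputMode.Complete())
    .format("console")
    .start()

  query.awaitTermination()
}
```

The `Complete` output mode is required here because the aggregation keeps running totals; for a plain projection or filter with no aggregation, `Append` mode would be the choice.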


Find the demo code here.
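As for the second approach listed above — a custom receiver — that belongs to the older DStream API rather than the new DataFrame-based one. Here is a minimal sketch, modeled on a socket source; the class name, host, port, and batch interval are all my own illustrative choices:

```scala
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.receiver.Receiver

// A custom receiver that reads lines from a TCP socket.
class LineReceiver(host: String, port: Int)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    // Start a thread that connects and pushes received lines into Spark.
    new Thread("Line Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = {
    // Nothing to do: the receiving thread exits once isStopped is true.
  }

  private def receive(): Unit = {
    try {
      val socket = new Socket(host, port)
      val reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream, StandardCharsets.UTF_8))
      var line = reader.readLine()
      while (!isStopped && line != null) {
        store(line)              // hand the record over to Spark
        line = reader.readLine()
      }
      reader.close()
      socket.close()
      restart("Trying to reconnect")
    } catch {
      case t: Throwable => restart("Error receiving data", t)
    }
  }
}

object CustomReceiverDemo extends App {
  val conf = new SparkConf().setMaster("local[2]").setAppName("custom-receiver-demo")
  val ssc  = new StreamingContext(conf, Seconds(5))

  // Wire the custom receiver into a DStream and print each batch.
  val stream = ssc.receiverStream(new LineReceiver("localhost", 9999))
  stream.print()

  ssc.start()
  ssc.awaitTermination()
}
```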


Published at DZone with permission of Rahul Kumar, DZone MVB. See the original article here.

