Hello geeks! We discussed Apache Spark 2.0 with Hive in an earlier blog. Now I am going to describe how we can use Spark to stream data. First, we need to understand the new Spark Streaming architecture.
Spark 2.0 simplified the Streaming API and lets us access stream data in the form of DataFrames and Datasets. Hence, with the new architecture, we can process our streamed data according to our business logic using DataFrames. This is the simple concept behind the architecture above.
So here we have two approaches to use Spark Streaming programmatically:
- by using a predefined receiver, and
- by creating a custom receiver
First, we will stream our data using a predefined receiver.
Add the following dependencies:
- "org.apache.spark" %% "spark-core" % "2.0.0",
- "org.apache.spark" %% "spark-sql" % "2.0.0",
- "org.apache.spark" %% "spark-hive" % "2.0.0",
- "org.apache.spark" %% "spark-streaming" % "2.0.0"
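Together, these go into your build.sbt roughly like this (a sketch, using the 2.0.0 versions listed above):

```scala
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"      % "2.0.0",
  "org.apache.spark" %% "spark-sql"       % "2.0.0",
  "org.apache.spark" %% "spark-hive"      % "2.0.0",
  "org.apache.spark" %% "spark-streaming" % "2.0.0"
)
```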
Now, as we know, the entry point of Spark in the current version is SparkSession. So,
val sparkSession = SparkSession.builder.master("local").appName("demo").getOrCreate()
Now you need a stream receiver:

val dataFrame : DataFrame = sparkSession.readStream.load("your/path")
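Note that for a file-based source, Spark usually needs an explicit format and schema before it will start the stream. A minimal sketch, assuming CSV files land in a hypothetical directory `/tmp/stream-input` with two columns, `name` and `age`:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object ReadStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local").appName("demo").getOrCreate()

    // The file source cannot infer a schema on a stream, so declare it up front.
    val schema = StructType(Seq(
      StructField("name", StringType),
      StructField("age", IntegerType)
    ))

    val dataFrame = spark.readStream
      .format("csv")
      .schema(schema)
      .load("/tmp/stream-input") // hypothetical input directory

    // isStreaming confirms we got a streaming DataFrame, not a batch one.
    println(dataFrame.isStreaming)
  }
}
```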
Now that we have the stream data, we can apply any business logic to the DataFrame.
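For example, here is a sketch of filtering the stream and writing the result to the console sink. The column name `age`, the schema, and the input path are assumptions for illustration; the query keeps running until you stop it:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object ProcessStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local").appName("demo").getOrCreate()
    import spark.implicits._

    val schema = StructType(Seq(
      StructField("name", StringType),
      StructField("age", IntegerType)
    ))

    val people = spark.readStream
      .format("csv")
      .schema(schema)
      .load("/tmp/stream-input") // hypothetical input directory

    // Business logic: ordinary DataFrame operations work on the stream.
    val adults = people.filter($"age" >= 18)

    // Print each micro-batch to the console; awaitTermination blocks until the query is stopped.
    val query = adults.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

The `append` output mode fits here because the query has no aggregation; only newly arrived rows are emitted for each micro-batch.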
Find the demo code here.