Basic Example for Spark Structured Streaming and Kafka Integration

With the newest Kafka consumer API, there are notable differences in usage. Learn how to integrate Spark Structured Streaming and Kafka using this new API.

By Ayush Tiwari · Sep. 26, 17 · Tutorial

The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 Direct Stream approach. It provides simple parallelism, 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the new Kafka consumer API instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change.

In this blog, I am going to walk through a basic example of Spark Structured Streaming and Kafka integration.

Here, I am using:

  • Apache Spark 2.2.0
  • Apache Kafka 0.11.0.1
  • Scala 2.11.8

Create the build.sbt

Let's create an sbt project and add the following dependencies in build.sbt:

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.11" % "2.2.0",
  "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.2.0",
  "org.apache.kafka" % "kafka-clients" % "0.11.0.1"
)

Create the SparkSession

Now, we import the necessary classes and create a local SparkSession, the starting point of all functionality in Spark:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder
  .appName("Spark-Kafka-Integration")
  .master("local")
  .getOrCreate()

Define the Schema

We define the schema for the data we are going to read from the CSV files.

import org.apache.spark.sql.types._

val mySchema = StructType(Array(
  StructField("id", IntegerType),
  StructField("name", StringType),
  StructField("year", IntegerType),
  StructField("rating", DoubleType),
  StructField("duration", IntegerType)
))

A sample of my CSV file can be found here and the dataset description is given here.
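For reference, a CSV file matching mySchema uses the column order id, name, year, rating, duration. The rows below are purely illustrative placeholders, not taken from the actual dataset:

```
1,MovieA,1999,4.5,120
2,MovieB,2005,3.8,95
3,MovieC,2012,4.1,140
```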

Create the Streaming DataFrame

Now, we create a streaming DataFrame using the schema defined in mySchema. Any CSV file dropped into the source directory is automatically picked up and reflected in the streaming DataFrame.

val streamingDataFrame = spark.readStream.schema(mySchema).csv("path of your directory like home/Desktop/dir/")

Publish the Stream to Kafka

streamingDataFrame.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("topic", "topicName")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("checkpointLocation", "path to your local dir")
  .start()

Create a topic called topicName in Kafka; the stream is published to that topic. Here, 9092 is the port on which the local Kafka broker is running. The checkpointLocation option tells Spark where to persist the stream's offsets and progress.
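If the topic does not already exist, it can be created with Kafka's command-line tools. The script path and ZooKeeper address below are assumptions based on a default local Kafka 0.11 installation:

```
bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 \
  --partitions 1 \
  --topic topicName
```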

Subscribe to the Stream From Kafka

import spark.implicits._
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicName")
  .load()

At this point, we subscribe to the stream from Kafka using the same topic name that we used above.

Convert the Stream According to mySchema and the Timestamp

import java.sql.Timestamp
import org.apache.spark.sql.functions.from_json

val df1 = df.selectExpr("CAST(value AS STRING)", "CAST(timestamp AS TIMESTAMP)").as[(String, Timestamp)]
  .select(from_json($"value", mySchema).as("data"), $"timestamp")
  .select("data.*", "timestamp")

Here, we cast the Kafka message value to a string, parse it as JSON with from_json according to mySchema, and flatten the parsed fields back into the columns we defined. We also keep the timestamp column.

Print the DataFrame on Console

Here, we just print our data to the console.

df1.writeStream
    .format("console")
    .option("truncate","false")
    .start()
    .awaitTermination()

For more details, you can refer to this documentation.


Published at DZone with permission of Ayush Tiwari, DZone MVB. See the original article here.

