Join Semantics in Kafka Streams

Kafka Streams is a client library used for building applications and microservices. Learn about the three major types of joins that it offers.

By Himani Arora · Oct. 18, 17 · Tutorial

Before getting started, let's get a brief introduction to some core topics:

  • Apache Kafka is a distributed streaming platform that enables you to publish and subscribe to a stream of records, also letting you process this stream of records as it occurs.
  • Kafka Streams is a client library used for building applications and microservices, where the input and output data are stored in Kafka clusters.
  • The KStream<K, V> interface is an abstraction of a record stream of key-value pairs. It is defined from one or more Kafka topics that are consumed message by message, or as the result of a KStream transformation.
  • The KTable<K, V> interface is an abstraction of a changelog stream from a primary-keyed table. Each record in this stream is an update on the primary-keyed table, with the record key as the primary key. Like a KStream, it is defined from one or more Kafka topics that are consumed message by message, or as the result of a KTable transformation. A minimal construction sketch for both abstractions follows this list.
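
For context, here is a minimal sketch of how the two abstractions are typically created from topics with the Streams DSL. The builder class, topic names, and the assumption of String default serdes are illustrative only (newer Kafka Streams versions use StreamsBuilder; older versions use KStreamBuilder):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();

// KStream: every record in the "user-clicks" topic is an independent event
KStream<String, String> clicks = builder.stream("user-clicks");

// KTable: records in "user-profiles" with the same key are interpreted as updates to that key
KTable<String, String> profiles = builder.table("user-profiles");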

Joins in Kafka Streams

Kafka Streams offers three types of joins:

  1. KStream-KStream join
  2. KTable-KTable join
  3. KStream-KTable join

KStream-KStream Join

This is a sliding window join, meaning that all pairs of records that are close to each other in time are joined, where "close" means that the difference between their timestamps is at most the size of the window.

These joins are always windowed joins because otherwise, the size of the internal state store used to perform the join would grow indefinitely.

In the following example, we perform an inner join between two streams. The resulting joined stream will be of type KStream<String, String>, since both inputs are keyed by String and the ValueJoiner returns a String.

Since KStream-KStream joins are always windowed, we must provide a join window. This can be specified using:

JoinWindows.of(TimeUnit.MINUTES.toMillis(5))

// Assuming a KStream<String, Long> left and a KStream<String, Double> right,
// as indicated by the serdes below:
KStream<String, String> joined = left.join(right,
    (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
    JoinWindows.of(TimeUnit.MINUTES.toMillis(5)), /* join window */
    Serdes.String(), /* key */
    Serdes.Long(),   /* left value */
    Serdes.Double()  /* right value */
  );
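
To make the window semantics concrete, here is an illustrative sequence of inputs and the output the join above would produce. The key, values, and timestamps are invented for the example; with JoinWindows.of(TimeUnit.MINUTES.toMillis(5)), two records join only if their timestamps differ by at most five minutes:

// left  (KStream<String, Long>)   receives ("alice", 1L)  at 10:00
// right (KStream<String, Double>) receives ("alice", 2.5) at 10:03
//   -> timestamps differ by 3 minutes: joined emits ("alice", "left=1, right=2.5")
// right receives ("alice", 7.5) at 10:07
//   -> 7 minutes after the left record: no join output for this pair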

KTable-KTable Join

KTable-KTable joins are designed to be consistent with their counterparts in relational databases. They are always non-windowed joins.

The changelog streams of the KTables are materialized into local state stores that represent the latest snapshot of their tables. The join result is a new KTable representing the changelog stream of the join operation.

In the following example, we will perform an inner join between two KTables. The result will be an updating KTable representing the current result of the join.

// Assuming both inputs are KTable<String, String> (illustrative; any value types
// accepted by the ValueJoiner would work):
KTable<String, String> joined = left.join(right,
    (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
  );

The join here is key-based, that is, leftRecord.key == rightRecord.key, and is triggered automatically every time a new input record arrives on either side.
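
As an illustration of these update semantics (keys and values invented, both inputs assumed to be KTable<String, String>), an inner KTable-KTable join reacts to a change on either side by emitting an updated result for the affected key:

// left  receives ("alice", "A")  -> no output yet: "alice" has no value in right
// right receives ("alice", "B")  -> joined now contains ("alice", "left=A, right=B")
// left  receives ("alice", "C")  -> joined is updated to ("alice", "left=C, right=B")
// left  receives ("alice", null) -> tombstone: "alice" is removed from the join result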

KStream-KTable Join

KStream-KTable joins are asymmetric, non-windowed joins. They allow you to perform a table lookup against a KTable every time a new record is received from the KStream.

The KTable lookup is always done on the current state of the KTable; thus, out-of-order records can yield a non-deterministic result. The result of a KStream-KTable join is a KStream.

In the following example, we will perform an inner join of a KStream with a KTable, effectively doing a table lookup.

// Assuming a KStream<String, Long> left (matching the serdes below) and a KTable right
// whose values can be handled by the ValueJoiner:
KStream<String, String> joined = left.join(right,
    (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
    Serdes.String(), /* key */
    Serdes.Long()    /* left value */
  );
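
The asymmetry means that only records arriving on the stream side produce join output; an update to the KTable only changes the state used for subsequent lookups. An illustrative sequence (keys and values invented):

// right (KTable)  receives ("alice", 2.5) -> no output: table updates never trigger the join
// left  (KStream) receives ("alice", 1L)  -> lookup succeeds: joined emits ("alice", "left=1, right=2.5")
// left  (KStream) receives ("bob", 3L)    -> no "bob" entry in the table: no output for an inner join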

References

  • Official Kafka Streams documentation

  • Confluent’s Kafka Streams documentation

Published at DZone with permission of Himani Arora, DZone MVB.
