How to Create — and Configure — Apache Kafka Consumers

Elevate your open-source Apache Kafka deployment by better understanding how Kafka consumers work. Use this guide to create and configure your consumers.

By Anil Inamdar · Feb. 06, 2024 · Tutorial

Apache Kafka’s real-time data processing relies on Kafka consumers that read messages within its infrastructure. Producers publish messages to Kafka topics, and consumers, often organized into consumer groups, subscribe to those topics to receive messages in real time. Each consumer tracks its position in a topic partition using an offset. To set up a consumer, developers create it with the appropriate group ID, starting offset, and connection details, then implement a loop in which the consumer processes arriving messages.
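
As a compressed sketch of that lifecycle, the example below creates a consumer with a group ID, subscribes to a topic, polls in a loop, and commits its offsets manually so the group's position survives a restart. The broker address, group ID, and topic name are placeholders chosen for illustration, and the configuration is done programmatically for brevity; the full properties-file-based example appears in the next section.

Java
 
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class LifecycleSketch {

    public static void main(String[] args) {
        // Placeholder broker address and group ID, configured programmatically for brevity
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-processors");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit offsets explicitly below

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("orders")); // placeholder topic name

            // Bounded loop for illustration; production consumers typically loop until shut down
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
                }
                consumer.commitSync(); // record the group's position so a restart resumes from here
            }
        }
    }
}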

Understanding how consumers work matters for any organization running Kafka in its 100% open-source, enterprise-ready form. Here's what to know.

Example: Creating a Kafka Consumer

The process of creating and configuring Kafka consumers follows consistent principles across programming languages, with a few language-specific nuances. This example illustrates the fundamental steps for creating a Kafka consumer in Java.

Start by crafting a properties file. While programmatic approaches are feasible, it's advisable to use a properties file. In the code below, substitute the MYKAFKAIPADDRESS placeholders with the actual IP addresses of your Kafka brokers:

Properties
 
bootstrap.servers=MYKAFKAIPADDRESS1:9092,MYKAFKAIPADDRESS2:9092,MYKAFKAIPADDRESS3:9092
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
group.id=my-group
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="[USER NAME]" \
  password="[USER PASSWORD]";

The next step is creating the consumer. This example code prepares the main program entry point, as well as the necessary message processing loop:

Java
 
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.io.FileReader;
import java.io.IOException;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class Consumer {

    public static void main(String[] args) {
        // Load broker addresses, deserializers, group ID, and security settings
        Properties kafkaProps = new Properties();
        try (FileReader fileReader = new FileReader("consumer.properties")) {
            kafkaProps.load(fileReader);
        } catch (IOException e) {
            e.printStackTrace();
        }

        // Create the consumer, subscribe to the "test" topic, and poll for new records
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(kafkaProps)) {
            consumer.subscribe(Collections.singleton("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(String.format(
                        "topic = %s, partition = %s, offset = %d, key = %s, value = %s",
                        record.topic(), record.partition(), record.offset(), record.key(), record.value()));
                }
            }
        }
    }
}
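
The loop above runs until the process is killed. One common refinement, sketched here as an illustration rather than as part of the original example, is to install a shutdown hook that calls the consumer's wakeup() method; the blocked poll() call then throws a WakeupException, letting the loop exit and close the consumer cleanly:

Java
 
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

import java.io.FileReader;
import java.io.IOException;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ShutdownAwareConsumer {

    public static void main(String[] args) {
        // Reuse the same consumer.properties file as in the example above
        Properties kafkaProps = new Properties();
        try (FileReader fileReader = new FileReader("consumer.properties")) {
            kafkaProps.load(fileReader);
        } catch (IOException e) {
            e.printStackTrace();
        }

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(kafkaProps);
        Thread mainThread = Thread.currentThread();

        // On SIGTERM/Ctrl+C, interrupt the blocked poll() and wait for the main loop to finish
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));

        try {
            consumer.subscribe(Collections.singleton("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(record -> System.out.println(record.value()));
            }
        } catch (WakeupException e) {
            // Expected when wakeup() is called during shutdown; nothing to do
        } finally {
            consumer.close(); // leaves the group cleanly and commits offsets if auto-commit is enabled
        }
    }
}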

Commonly Used Kafka Consumer Configuration Options

With the foundational setup complete, developers have a range of powerful options to fine-tune Kafka consumers according to their preferences. The following summaries highlight commonly used options, while your Kafka driver documentation provides a comprehensive list of configurations.

Some of the most popular Kafka consumer configuration options are listed below; a short properties sketch pulling several of them together follows the list:

  • client.id – Identifies the client (consumer) to brokers in the Kafka cluster, to make clear which consumer made which requests. As a best practice, consumers in a consumer group should each have the same client ID, enabling client quota enforcement for that group.
  • session.timeout.ms – This value controls how long a broker will wait for a consumer’s heartbeat before declaring it dead. The default is 10 seconds in older Kafka versions and 45 seconds in Kafka 3.0 and later. 
  • heartbeat.interval.ms – This value controls how often a consumer sends a heartbeat to let the broker know it’s alive and functioning. The default value is 3 seconds. 
  • Note that session.timeout.ms and heartbeat.interval.ms work together to let the broker know a consumer’s status. As a best practice, the consumer should send several heartbeats per timeout interval; for example, a heartbeat every 3 seconds paired with a 10-second timeout gives the broker several chances to hear from a consumer before the timeout expires.
  • max.poll.interval.ms – This sets the maximum delay allowed between calls to the poll method. If a consumer does not poll again within this window, it is considered failed and its partitions are reassigned to other consumers in the group. The default value is 300 seconds (5 minutes).
  • enable.auto.commit – Tells the consumer to automatically commit periodic offsets (at an interval determined by auto.commit.interval.ms). This configuration is enabled by default. 
  • fetch.min.bytes – This sets the minimum data amount that a consumer fetches from a broker. A consumer will wait for more data if the available data doesn’t fulfill the set amount. This approach minimizes back and forth between consumers and brokers, boosting throughput at the cost of latency.  
  • fetch.max.wait.ms – Used together with fetch.min.bytes, this sets the maximum time the broker will wait before responding to a fetch request, even if fetch.min.bytes of data is not yet available.
  • max.partition.fetch.bytes – This sets the maximum bytes a consumer will fetch per partition, effectively placing an upper limit on memory requirements for fetching messages. It’s important to be mindful of max.poll.interval.ms when choosing this value: an overly large maximum can leave more data to process than the consumer can handle within the poll interval.
  • auto.offset.reset – This tells the consumer what to do if it reads a partition with no last read offset available. This can be set to “latest” or “earliest”, telling the consumer to start either with the latest available message, or the earliest available offset. 
  • partition.assignment.strategy – This tells the group leader consumer how to assign partitions to consumers in its consumer group.
  • max.poll.records – This sets the maximum records a consumer will fetch in a poll call, offering control over throughput. 
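
As a consolidated illustration, the sketch below shows several of these options in a consumer properties file. The client ID, group ID, and specific values are assumptions chosen for illustration, not recommendations; appropriate settings depend on the workload.

Properties
 
# Identify this consumer and its group to the brokers (placeholder names)
client.id=orders-consumer
group.id=orders-processors

# Liveness: several heartbeats per session timeout
session.timeout.ms=45000
heartbeat.interval.ms=3000
max.poll.interval.ms=300000

# Offsets: commit automatically every 5 seconds; start from the earliest offset if none exists
enable.auto.commit=true
auto.commit.interval.ms=5000
auto.offset.reset=earliest

# Throughput vs. latency: wait for at least 1 KB of data, but never longer than 500 ms
fetch.min.bytes=1024
fetch.max.wait.ms=500
max.partition.fetch.bytes=1048576
max.poll.records=500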

Unlock Apache Kafka’s Potential

As a reliable, high-performance, hyperscale real-time data streaming solution, open-source Apache Kafka offers tremendous opportunities. By understanding how to create and configure Kafka consumers, developers can more closely control the solution’s behavior and get more out of their Kafka deployment.


Opinions expressed by DZone contributors are their own.
