Apache Kafka: Basic Setup and Usage With Command-Line Interface

In this article, we will learn basic Kafka commands and how to run a Kafka broker.

By Chandra Shekhar Pandey · Aug. 20, 19 · Tutorial


In this article, we will learn basic Kafka commands. With these commands, we will be able to run a Kafka broker, produce and consume messages, and inspect topic and offset details.

Note that this is a standalone setup, intended to give an overview of basic setup and functionality using the command-line interface.

So let us quickly go through these commands:

1. Download Kafka first. At the time of writing this article, Kafka version 2.3.0 is the latest. It can be downloaded from the Apache Kafka website.

2. Extract the downloaded artifact with the following command. After extracting, we will get a folder named kafka_2.11-2.3.0.

tar xvf kafka_2.11-2.3.0.tgz

3. Change directory to kafka_2.11-2.3.0/bin.

4. Start the ZooKeeper server first. A running ZooKeeper instance is required before we can start the Kafka broker.

./zookeeper-server-start.sh ../config/zookeeper.properties

5. Once the Zookeeper server is started, start Kafka Broker with the following command:

./kafka-server-start.sh ../config/server.properties

6. Now create a topic called 'csptest' with two partitions.

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic csptest

7. Now start two consumers on topic csptest. The same command can be run in two different terminals. With two consumers in the same group, each will be assigned one of the two partitions. Note that --group is set to topic_group for both consumers.

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

8. Now start a producer/publisher with the following command. Then produce 5 messages.

./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest

>msg-1

>msg-2

>msg-3

>msg-4

>msg-5

9. In the terminals of the two consumers, we will find the messages split between them: the producer distributed the keyless messages across the two partitions in round-robin fashion, and each consumer reads from one partition.

$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-2

msg-4



$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-1

msg-3

msg-5
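The round-robin split above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not Kafka's actual partitioner; the function name and structure are my own:

```python
# Minimal sketch: keyless messages are spread across partitions in
# round-robin order, so two consumers (one per partition) each see
# an alternating subset of the messages.
from itertools import cycle

def assign_partitions(messages, num_partitions):
    """Pair each message with the next partition in round-robin order."""
    rr = cycle(range(num_partitions))
    return [(msg, next(rr)) for msg in messages]

msgs = ["msg-1", "msg-2", "msg-3", "msg-4", "msg-5"]
for msg, part in assign_partitions(msgs, 2):
    print(f"{msg} -> partition {part}")
```

Which partition receives which message can differ from the output shown above; what matters is that the five messages alternate between the two partitions.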

10. Now let's get the details of the topic, such as partition count, leader, and replicas. These details are most useful in a clustered environment.

$ ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic csptest

Topic: csptest  PartitionCount: 2  ReplicationFactor: 1  Configs:
    Topic: csptest  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: csptest  Partition: 1  Leader: 0  Replicas: 0  Isr: 0

11. We can get consumer details and offset details for each partition with the following command:

$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group topic_group --describe



GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID

topic_group csptest 0 3 3 0 consumer-1-379adec4-08e7-4a13-8e26-91c4fe10a3a8 /127.0.0.1 consumer-1

topic_group csptest 1 2 2 0 consumer-1-85381523-5103-4bd0-a523-4ca09f41a6a7 /127.0.0.1 consumer-1
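The LAG column is simply the difference between the log-end offset and the group's committed offset for each partition. A hypothetical helper illustrating the calculation (the function name and dict layout are my own, not part of Kafka):

```python
# For each partition: lag = LOG-END-OFFSET - CURRENT-OFFSET.
# A lag of 0 means the consumer group is fully caught up.
def consumer_lag(log_end_offsets, committed_offsets):
    """Return per-partition lag for a consumer group."""
    return {p: log_end_offsets[p] - committed_offsets[p]
            for p in log_end_offsets}

# Offsets taken from the --describe output above:
# both partitions are fully caught up, so lag is 0 everywhere.
print(consumer_lag({0: 3, 1: 2}, {0: 3, 1: 2}))
```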

12. We can list all topics with the following command.

$ ./kafka-topics.sh --list --zookeeper localhost:2181

__consumer_offsets

csptest

my-topic

13. The topic __consumer_offsets is created by default and stores consumer offset information in the broker. With the following command, we can browse this topic.

$ ./kafka-console-consumer.sh --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --bootstrap-server localhost:9092 --topic __consumer_offsets

[topic_group,csptest,1]::OffsetAndMetadata(offset=2, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971652, expireTimestamp=None)

[topic_group,csptest,0]::OffsetAndMetadata(offset=3, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971655, expireTimestamp=None)
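If we need these records programmatically, the formatter output can be parsed with a small script. This is a sketch only: the regular expression and field names below are assumptions based on the output shown above, not an official Kafka API.

```python
# Hypothetical parser for the OffsetsMessageFormatter output. The key part
# [group,topic,partition] and the committed offset are extracted with a
# regular expression; field names mirror the printed record.
import re

OFFSET_RE = re.compile(
    r"\[([^,]+),([^,]+),(\d+)\]::OffsetAndMetadata\(offset=(\d+)"
)

def parse_offset_record(line):
    """Return the group, topic, partition, and offset of one record, or None."""
    m = OFFSET_RE.match(line)
    if m is None:
        return None
    group, topic, partition, offset = m.groups()
    return {"group": group, "topic": topic,
            "partition": int(partition), "offset": int(offset)}

record = parse_offset_record(
    "[topic_group,csptest,1]::OffsetAndMetadata(offset=2, "
    "leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971652, "
    "expireTimestamp=None)"
)
print(record)
```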

That's it. I hope you found this interesting and helpful.

Opinions expressed by DZone contributors are their own.