DZone
Apache Kafka: Basic Setup and Usage With Command-Line Interface

In this article, we will learn basic Kafka commands and how to run a Kafka broker.

By Chandra Shekhar Pandey · Aug. 20, 2019 · Tutorial · 38.41K Views


In this article, we are going to learn basic Kafka commands. With these commands, we will be able to run a Kafka broker, produce and consume messages, and inspect topic and offset details.

Note that this is a standalone setup, intended to give an overview of the basic setup and functionality using the command-line interface.

So let us quickly go through these commands:

1. Download Kafka first. At the time of writing this article, Kafka version 2.3.0 is the latest. It can be downloaded from the Apache Kafka downloads page.
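As a sketch, the release archive can also be fetched directly from the Apache archive; the URL below assumes the standard archive layout and the 2.3.0 release built against Scala 2.11, matching the version used in this article.

```shell
# Build the download URL for the Kafka release used in this article
# (assumes the Apache archive layout; adjust versions as needed).
KAFKA_VERSION=2.3.0
SCALA_VERSION=2.11
URL="https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz"
echo "$URL"
# wget "$URL"   # uncomment to actually download the archive
```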

2. Extract the downloaded artifact with the following command. After extracting, we will get a folder named kafka_2.11-2.3.0.

tar xvf kafka_2.11-2.3.0.tgz

3. Change directory to kafka_2.11-2.3.0/bin.

4. Start the ZooKeeper server first. A ZooKeeper instance must be running before we start the Kafka broker.

./zookeeper-server-start.sh ../config/zookeeper.properties

5. Once the ZooKeeper server is started, start the Kafka broker with the following command:

./kafka-server-start.sh ../config/server.properties

6. Now create a topic called 'csptest' with two partitions.

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic csptest

7. Now start two consumers on topic csptest. The same command can be run in two different terminals. With two consumers in the same group, each will be assigned one of the two partitions. Note that --group is set to topic_group for both consumers.

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

8. Now start a producer/publisher with the following command. Then produce 5 messages.

./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest

>msg-1

>msg-2

>msg-3

>msg-4

>msg-5
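The five messages above were typed interactively at the producer prompt. As a minimal sketch, the same input can also be piped to the console producer non-interactively (the producer line is commented out because it assumes a broker running on localhost:9092):

```shell
# Generate msg-1 .. msg-5, one message per line.
MESSAGES=$(for i in 1 2 3 4 5; do echo "msg-$i"; done)
echo "$MESSAGES"
# Pipe them to the console producer (requires a running broker):
# echo "$MESSAGES" | ./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest
```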

9. We will find that the messages are consumed in a round-robin fashion across the two consumer terminals: key-less messages from the console producer are spread across both partitions, and each consumer reads from its assigned partition.

$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-2

msg-4



$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-1

msg-3

msg-5

10. Now let's get the details of the topic, such as partition count, leader, and replicas. These details are more helpful in a clustered environment.

$ ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic csptest

Topic: csptest    PartitionCount: 2    ReplicationFactor: 1    Configs:
    Topic: csptest    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    Topic: csptest    Partition: 1    Leader: 0    Replicas: 0    Isr: 0

11. We can get consumer details and offset details for each partition with the following command:

$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group topic_group --describe



GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID

topic_group csptest 0 3 3 0 consumer-1-379adec4-08e7-4a13-8e26-91c4fe10a3a8 /127.0.0.1 consumer-1

topic_group csptest 1 2 2 0 consumer-1-85381523-5103-4bd0-a523-4ca09f41a6a7 /127.0.0.1 consumer-1
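In this output, LAG is simply LOG-END-OFFSET minus CURRENT-OFFSET for each partition; a lag of 0 means the group is fully caught up. A minimal sketch using the numbers from partition 0 above:

```shell
# Lag for partition 0, using the values from the describe output above.
CURRENT_OFFSET=3
LOG_END_OFFSET=3
LAG=$((LOG_END_OFFSET - CURRENT_OFFSET))
echo "lag=$LAG"   # 0 means the consumer group has read every message
```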

12. We can list all topics with the following command:

$ ./kafka-topics.sh --list --zookeeper localhost:2181

__consumer_offsets

csptest

my-topic

13. The topic __consumer_offsets, which is created by default in the Kafka broker, stores consumer offset information. With the following command, we can browse this topic.

$ ./kafka-console-consumer.sh --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --bootstrap-server localhost:9092 --topic __consumer_offsets

[topic_group,csptest,1]::OffsetAndMetadata(offset=2, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971652, expireTimestamp=None)

[topic_group,csptest,0]::OffsetAndMetadata(offset=3, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971655, expireTimestamp=None)

That's it! I hope you found this interesting and helpful.


Opinions expressed by DZone contributors are their own.
