
Apache Kafka: Basic Setup and Usage With Command-Line Interface


In this article, we are going to learn the basic Kafka commands. With these commands, we will gain a basic understanding of how to run a Kafka broker, produce and consume messages, and inspect topic and offset details.

Note that this is a standalone, single-broker setup, intended to give an overview of the basic setup and functionality using the command-line interface.

So let us quickly go through these commands:

1. Download Kafka first. At the time of writing this article, Kafka version 2.3.0 is the latest. It can be downloaded from the Apache Kafka downloads page.

2. Extract the downloaded artifact with the following command. After extracting, we get a folder named kafka_2.11-2.3.0.

tar xvf kafka_2.11-2.3.0.tgz

3. Change directory to kafka_2.11-2.3.0/bin.

4. Start the ZooKeeper server first. A running ZooKeeper instance is required before starting the Kafka broker.

./zookeeper-server-start.sh ../config/zookeeper.properties

5. Once the ZooKeeper server is started, start the Kafka broker with the following command:

./kafka-server-start.sh ../config/server.properties

6. Now create a topic called csptest with two partitions and a replication factor of 1 (we have only a single broker).

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic csptest

7. Now start two listeners (consumers) on topic csptest. The same command can be used in two different terminals. With two listeners, we will be able to consume from both partitions. Note that --group is set to topic_group for both listeners, so they join the same consumer group and each is assigned one of the two partitions.

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

8. Now start a producer/publisher with the following command, then produce five messages:

./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest

>msg-1

>msg-2

>msg-3

>msg-4

>msg-5

9. We will find that the messages are consumed round-robin across the two listeners' terminals, since keyless messages are distributed across the two partitions alternately.

$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-2

msg-4



$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group

msg-1

msg-3

msg-5
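The alternating pattern above can be sketched with a tiny shell loop. This is an illustration only, not Kafka's actual partitioner code: with no message key, the 2.3 console producer spreads messages across the partitions in round-robin order, roughly partition = message_index % partition_count.

```shell
# Illustration only: keyless messages alternate between the two partitions.
partitions=2
for i in 1 2 3 4 5; do
  echo "msg-$i -> partition $(( (i - 1) % partitions ))"
done
# prints: msg-1 -> partition 0, msg-2 -> partition 1, msg-3 -> partition 0, ...
```

This matches the output above: msg-1, msg-3, and msg-5 land on one partition (one listener), msg-2 and msg-4 on the other.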

10. Now let's get the details of the topic, such as partition count, leader, and replicas. These details are more useful in a clustered environment.

$ ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic csptest

Topic:csptest	PartitionCount:2	ReplicationFactor:1	Configs:

	Topic: csptest	Partition: 0	Leader: 0	Replicas: 0	Isr: 0

	Topic: csptest	Partition: 1	Leader: 0	Replicas: 0	Isr: 0

11. We can get consumer details and offset details for each partition with the following command:

$ ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group topic_group --describe



GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID

topic_group csptest 0 3 3 0 consumer-1-379adec4-08e7-4a13-8e26-91c4fe10a3a8 /127.0.0.1 consumer-1

topic_group csptest 1 2 2 0 consumer-1-85381523-5103-4bd0-a523-4ca09f41a6a7 /127.0.0.1 consumer-1
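The LAG column (LOG-END-OFFSET minus CURRENT-OFFSET) is what we usually monitor. As a sketch, the total lag for the group can be summed with awk; here we feed in the sample output shown above, but in practice you would pipe the kafka-consumer-groups.sh --describe output into the same awk program:

```shell
# Sum the LAG column (6th field) of the --describe output, skipping the
# header row; a non-zero total means the consumers are falling behind.
awk 'NR > 1 { lag += $6 } END { print "total lag: " lag }' <<'EOF'
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
topic_group csptest 0 3 3 0 consumer-1 /127.0.0.1 consumer-1
topic_group csptest 1 2 2 0 consumer-1 /127.0.0.1 consumer-1
EOF
# prints: total lag: 0
```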

12. We can list all topics with the following command:

$ ./kafka-topics.sh --list --zookeeper localhost:2181

__consumer_offsets

csptest

my-topic

13. The topic __consumer_offsets is an internal topic, created by default, in which the Kafka broker stores the committed offsets of consumer groups. With the following command, we can browse this topic.

$ ./kafka-console-consumer.sh --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --bootstrap-server localhost:9092 --topic __consumer_offsets

[topic_group,csptest,1]::OffsetAndMetadata(offset=2, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971652, expireTimestamp=None)

[topic_group,csptest,0]::OffsetAndMetadata(offset=3, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971655, expireTimestamp=None)
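Each line pairs a [group,topic,partition] key with the committed offset for that partition. As a small sketch, one such line (copied from the output above) can be parsed in the shell:

```shell
# Sketch: extract the key and the committed offset from one line of the
# OffsetsMessageFormatter output shown above.
line='[topic_group,csptest,1]::OffsetAndMetadata(offset=2, leaderEpoch=Optional[0], metadata=, commitTimestamp=1566047971652, expireTimestamp=None)'
key=${line%%]::*}]   # strip everything after the key, re-append the bracket
offset=$(printf '%s\n' "$line" | sed -n 's/.*offset=\([0-9]*\).*/\1/p')
echo "$key committed offset: $offset"
# prints: [topic_group,csptest,1] committed offset: 2
```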

That's it. I hope you found this interesting and helpful.

