How to Set Up Kafka
In this article, I am going to explain how to install Kafka on Ubuntu. We will also look at Kafka broker, socket server, and log flush properties.
Kafka is one of the most popular publish-subscribe messaging systems, written in Java and Scala. It was originally developed by LinkedIn and later open-sourced. Kafka is known for handling heavy loads, i.e. high-volume I/O. You can find out more about Kafka here.
In this article, I am going to explain how to install Kafka on Ubuntu. To install Kafka, Java must be installed on your system, and ZooKeeper must be set up as well. ZooKeeper performs many tasks for Kafka, but in short, it manages the Kafka cluster state.
Download ZooKeeper from here.
Unzip the file. Inside the conf directory, rename the file zoo_sample.cfg as zoo.cfg.
The zoo.cfg file keeps the configuration for ZooKeeper, i.e. the port on which the ZooKeeper instance will listen, the data directory, etc.
The default listen port is 2181. You can change this port by changing the clientPort property.
The default data directory is /tmp/zookeeper. Change this, as you will not want ZooKeeper's data to be deleted by some periodic cleanup of /tmp. Create a folder named data in the ZooKeeper directory and point the dataDir property to it.
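For reference, a minimal zoo.cfg might look like the following (the data path is an example; point it at the folder you just created):
# Port on which the ZooKeeper instance listens for client connections
clientPort=2181
# Directory where ZooKeeper stores its snapshots
dataDir=/opt/zookeeper/data
# Basic time unit (in milliseconds) used for ZooKeeper heartbeats
tickTime=2000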
Go to the bin directory.
Start ZooKeeper by executing the command ./zkServer.sh start.
Stop ZooKeeper with the command ./zkServer.sh stop.
Download the latest stable version of Kafka from here.
Unzip this file. The Kafka instance (Broker) configurations are kept in the config directory.
Go to the config directory. Open the file server.properties.
Uncomment the listeners property, i.e. listeners=PLAINTEXT://:9092. The Kafka broker will listen on port 9092.
Change log.dirs to /kafka_home_directory/kafka-logs.
Set the zookeeper.connect property as per your needs. The Kafka broker will connect to this ZooKeeper instance.
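Putting the last few steps together, the relevant part of server.properties might look like this (the log directory and ZooKeeper address are examples; adjust them to your setup):
# The address the socket server listens on
listeners=PLAINTEXT://:9092
# Directory where the broker stores its log segments
log.dirs=/kafka_home_directory/kafka-logs
# The ZooKeeper instance this broker connects to
zookeeper.connect=localhost:2181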
Go to the Kafka home directory and execute the command bin/kafka-server-start.sh config/server.properties.
Stop the Kafka broker with the command bin/kafka-server-stop.sh.
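To sanity-check the setup, you can create a topic and pass a message through it with the console clients that ship with Kafka. The commands below assume the defaults used above, i.e. ZooKeeper on localhost:2181 and the broker on localhost:9092:
# Create a topic named "test" with one partition and one replica
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# Publish messages typed on stdin to the topic
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# In a second terminal, read the topic from the beginning
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning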
Kafka Broker Properties
For beginners, the default configurations of the Kafka broker are good enough, but for a production-level setup, one must understand each configuration. I am going to explain some of these configurations.
broker.id: The ID of the broker instance in a cluster.
zookeeper.connect: The ZooKeeper address; multiple addresses can be listed, comma-separated, for a ZooKeeper cluster. An example appears in the sketch below.
zookeeper.connection.timeout.ms: The maximum time the broker will wait for a connection to ZooKeeper to be established before shutting down.
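As a sketch, these properties might be set as follows for the first broker in a cluster backed by a three-node ZooKeeper ensemble (the hostnames are placeholders):
# Unique ID of this broker within the cluster
broker.id=0
# Comma-separated list of ZooKeeper ensemble members
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
# Give up if no ZooKeeper connection is established within this window
zookeeper.connection.timeout.ms=6000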
Socket Server Properties
socket.send.buffer.bytes: The send buffer used by the socket server.
socket.receive.buffer.bytes: The receive buffer used by the socket server for network requests.
socket.request.max.bytes: The maximum request size the server will allow. This prevents the server from running out of memory. The stock defaults for these three properties are shown below.
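For reference, these are the values shipped in the stock server.properties:
# Send buffer (SO_SNDBUF) used by the socket server: 100 KB
socket.send.buffer.bytes=102400
# Receive buffer (SO_RCVBUF) used by the socket server: 100 KB
socket.receive.buffer.bytes=102400
# Maximum request size the server will accept: 100 MB
socket.request.max.bytes=104857600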
Each message arriving at the Kafka broker is written into a segment file. The catch here is that this data is not written to disk directly; it is buffered first. The two properties below define when data will be flushed to disk (a sample setting follows the list). A very large flush interval may lead to latency spikes when the flush finally happens, while a very small flush interval may lead to excessive seeks.
log.flush.interval.messages: The number of messages accumulated in a log before the messages are flushed to disk.
log.flush.interval.ms: The maximum time a message can sit in a log before it is flushed to disk.
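For example, to flush after every 10,000 messages or after one second, whichever comes first (these match the example values commented out in the stock server.properties):
# Flush once this many messages have accumulated
log.flush.interval.messages=10000
# Flush if a message has been sitting unflushed for this long (ms)
log.flush.interval.ms=1000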
As discussed above, messages are written into segment files. The following policies define when these files will be removed; a sample configuration follows the list.
log.retention.hours: The minimum age of a segment file before it is eligible for deletion.
log.retention.bytes: A size-based retention policy for logs. Segments are pruned from the log as long as the remaining segments don't drop below log.retention.bytes.
log.segment.bytes: The maximum size of a segment file; once it is reached, a new segment is created.
log.retention.check.interval.ms: The interval at which log segments are checked to see whether they can be deleted according to the retention policies. If both retention policies are set, segments are deleted as soon as either criterion is met.
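A retention configuration built on the stock defaults might look like this (log.retention.bytes is an example value; it is not set by default):
# Delete segment files older than 7 days
log.retention.hours=168
# Size-based retention: prune old segments once a log grows past ~1 GB (example value)
log.retention.bytes=1073741824
# Roll a new segment file once the current one reaches 1 GB
log.segment.bytes=1073741824
# Check every 5 minutes for segments eligible for deletion
log.retention.check.interval.ms=300000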