
Kafka Monitoring With Burrow

In this post, we take a look at how to monitor Kafka clusters using Burrow (an open source Kafka monitoring tool) on Ubuntu.

By Gaurav Garg · May 30, 2018 · Tutorial

In my previous articles, I discussed how to set up a Kafka cluster and a Zookeeper cluster. In this article, we will see how to monitor a Kafka cluster with the help of Burrow on Ubuntu.

Introduction

According to Burrow's GitHub page, Burrow is a Kafka monitoring tool that keeps track of consumer lag. It does not provide a user interface for monitoring; instead, it exposes several HTTP endpoints that return information about Kafka clusters and consumer groups. Burrow also has a notifier subsystem that can alert you (via email or an HTTP endpoint) when a consumer group meets certain criteria.
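
Once Burrow is running (installation and configuration are covered below), even a health check is a plain HTTP request, for example against Burrow's healthcheck endpoint:

curl http://localhost:8080/burrow/admin   # responds with GOOD when Burrow is healthy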

Burrow has a modular design that separates its work into multiple subsystems, described below.

  • Clusters: This component periodically updates the topic list and the last committed offset for each partition.

  • Consumers: This component fetches information about consumer groups, such as their committed offsets and consumer lag.

  • Storage: This component stores all of this information in one place.

  • Evaluator: Fetches information from storage and evaluates the status of a consumer group (for example, whether it is consuming at too slow a rate) using consumer lag evaluation rules.

  • Notifier: Requests the status of consumer groups and, when a group meets the configured criteria, sends a notification via email, an HTTP request, etc.

  • HTTP server: Provides HTTP endpoints for fetching information about clusters and consumers.

Burrow Installation

Burrow is written in the Go language, so you need to install Go on your machine; follow https://golang.org/doc/install to do so. You also need to install the Go dependency management tool dep, which will fetch the dependencies Burrow requires.

  • Execute the command go get github.com/linkedin/Burrow. This creates a go folder in your home directory and fetches the Burrow source code into it.

  • Change into the go folder in your home directory, then into src/github.com/linkedin/Burrow.

  • Execute the command dep ensure to fetch the dependencies Burrow requires.
  • Execute the command go install. This creates a Burrow executable in the bin directory inside the go folder.
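
Putting the steps together (assuming the default GOPATH of $HOME/go):

go get github.com/linkedin/Burrow        # fetch the Burrow source into $HOME/go/src
cd ~/go/src/github.com/linkedin/Burrow
dep ensure                               # fetch the dependencies Burrow requires
go install                               # build the executable into $HOME/go/bin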

Burrow Configuration

Burrow accepts configuration in multiple formats, including TOML, JSON, and YAML. I will show the configuration in TOML format. The Burrow configuration is divided into multiple sections; we will discuss each in turn.

General: This heading specifies the location of the PID file, as well as an optional file to receive stdout/stderr output.

[general]
pidfile="burrow.pid"  # file in which the PID of the running Burrow process is stored
stdout-logfile="burrow.out"  # file path to which stdout and stderr are redirected
access-control-allow-origin="*"  # value for the Access-Control-Allow-Origin HTTP response header

Logging: This heading specifies the logging configuration. If no filename is provided, all logs are written to stdout.

[logging]
filename="logs/burrow.log"  # path and file name to write logs to
level="info"  # log level
maxsize=100  # maximum size of a single log file, in MB
maxbackups=30  # maximum number of rotated log files to keep
maxage=10  # maximum number of days to keep a log file
use-localtime=false  # use local time (true) or UTC (false) in log timestamps
use-compression=true  # if true, compress rotated log files

Zookeeper: This heading specifies the Zookeeper ensemble used to store metadata for modules and to provide synchronization between multiple copies of Burrow.

[zookeeper]
servers=[ "zkhost01.example.com:2181", "zkhost02.example.com:2181", "zkhost03.example.com:2181" ]
timeout=6  # Zookeeper session expiration timeout, in seconds
root-path="/mypath/burrow"  # full path to the Zookeeper node that Burrow is allowed to write into

Client Profile: Profiles group client settings so that the same configuration can be reused under one name. The client-profile heading is followed by a subheading (the profile name) that is referenced in other parts of the configuration. A profile groups together a Kafka client version, a TLS profile, and a SASL profile.

[client-profile.myclient]  # the name of this client profile is myclient
kafka-version="1.1.0"  # Kafka broker version
client-id="burrow-myclient"  # string passed to Kafka as the client ID

HTTPServer: This heading configures an HTTP server in Burrow. It must have a unique subheading (the listener name) associated with it.

[httpserver.mylistener]
address=":8080"  # address and port on which to listen for HTTP requests

Storage: This heading configures a storage subsystem in Burrow. It must have a unique subheading associated with it.

[storage.mystorage]
class-name="inmemory"  # store offsets in memory
intervals=10  # number of offsets to store for each partition
expire-group=604800  # number of seconds after which a group is purged if it has not committed an offset

Clusters: This heading configures a single Kafka cluster from which to fetch the topic list and offset information. It must be defined with a subheading that is referenced in other parts of the configuration; in [cluster.myclustername] below, myclustername is the subheading.

[cluster.myclustername]
class-name="kafka"
servers=[ "localhost:9091", "localhost:9092", "localhost:9093" ]
client-profile="myclient"  # profile name defined under the client-profile subheading above
topic-refresh=10  # how often to refresh the topic list, in seconds
offset-refresh=10  # how often to fetch the latest broker offsets, in seconds

Consumers: This heading configures where consumer offset information is fetched from. It must have a unique subheading associated with it.

[consumer.myconsumers]
class-name="kafka"  # read offsets committed to Kafka's internal offsets topic
cluster="myclustername"  # subheading name defined in the cluster configuration above
servers=[ "localhost:9091", "localhost:9092", "localhost:9093" ]
client-profile="myclient"
offsets-topic="__consumer_offsets"  # internal Kafka topic that stores committed offsets
start-latest=true  # start reading from the most recent offset in the offsets topic
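
The sections above are the ones this article uses. The notifier subsystem described earlier is configured in the same style; the following is only a sketch of an HTTP notifier, assuming Burrow's http notifier class and the template files shipped in its config directory (check the Burrow wiki for the exact keys your version supports):

[notifier.mynotifier]
class-name="http"  # send notifications as HTTP requests
url-open="http://alerts.example.com/burrow"  # hypothetical endpoint that receives alerts
interval=60  # seconds between repeat notifications for a bad group
threshold=2  # minimum group status that triggers a notification (WARN and above)
send-close=true  # also notify when the group returns to a good state
template-open="config/default-http-post.tmpl"  # request body template shipped with Burrow
template-close="config/default-http-delete.tmpl"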

We are done with the configuration. Now execute the command:

./bin/Burrow --config-dir /path-in-which-config-is-present  

Your configuration file must be named burrow.toml. You can now send HTTP requests on port 8080 to fetch information about your Kafka clusters; the list of available HTTP endpoints is documented on Burrow's GitHub wiki. Burrow-dashboard provides a front end to visualize cluster state; it fetches that state by sending HTTP requests to the Burrow server.
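
For example, with the mylistener HTTP server configured above and a consumer group named my-group (a hypothetical name), the v3 REST API exposed by current Burrow releases can be queried like this:

curl http://localhost:8080/v3/kafka   # list the configured clusters
curl http://localhost:8080/v3/kafka/myclustername/consumer   # list consumer groups in the cluster
curl http://localhost:8080/v3/kafka/myclustername/consumer/my-group/status   # group status, including lag evaluation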
