Monitoring and Managing Apache Kafka Clusters
Monitoring and managing Kafka has been a difficult, command-line-oriented process for too long.
With the new open source Streams Messaging Manager, you now have deep visibility into active Kafka flows across many producers, consumers, brokers, and topics.
Key Analysis: The true value of this tool is the ability to follow any message as it transits multiple systems, clouds, and hops. It is very easy to lose track of a consumer or producer when you have dozens or more topics and thousands of messages a second; add in hybrid cloud plus connected systems and frameworks like Spark, Flink, NiFi, SAM, and Hadoop, and the problem only compounds.
After using Kafka for a number of years, I have loved its simplicity and bulletproof nature. The issue I have always had is navigating a series of logs, CLI tools, and custom consumer logging to figure out when messages were late, lost, or duplicated. It has also been tricky to make sure partitions are set up correctly and topics have enough replicas.
A few things that I found useful for working with distributed applications using Apache Kafka are:
Knowing the number of producers, number of brokers, number of topics, and number of consumers for my entire Apache Kafka cluster.
What is the throughput of my brokers? Do I need more brokers, more RAM, more disk, or more CPU?
Are any partitions skewed?
Are messages not being consumed for some reason? Let's find out.
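Before a GUI like SMM, checks like the ones above usually meant pulling raw numbers out of CLI tools and doing the math yourself. The sketch below shows that math for two of the questions, partition skew and unconsumed messages, assuming you have already fetched per-partition message counts and offsets (for example with kafka-consumer-groups.sh or a Kafka client). The function names and the 50% skew threshold are my own illustrative choices, not part of any Kafka API.

```python
def partition_skew(counts, threshold=0.5):
    """Flag partitions whose message count deviates from the mean
    by more than `threshold` (50% by default)."""
    mean = sum(counts.values()) / len(counts)
    return {p: c for p, c in counts.items()
            if mean and abs(c - mean) / mean > threshold}

def consumer_lag(log_end_offsets, committed_offsets):
    """Lag per partition: log-end offset minus last committed offset.
    A lag that keeps growing means messages are not being consumed."""
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}

# Example: partition 2 holds far fewer messages than its siblings,
# and the consumer on partition 0 has fallen 20 messages behind.
counts = {0: 1000, 1: 980, 2: 120}
print(partition_skew(counts))                             # -> {2: 120}
print(consumer_lag({0: 500, 1: 500}, {0: 480, 1: 500}))   # -> {0: 20, 1: 0}
```

The same arithmetic is what SMM surfaces visually across every topic and consumer group at once, which is exactly what makes it hard to do by hand at scale.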
SMM has a fluid, UX-driven, functional interface that lets us analyze at a glance and dive deep into any area of interest. As you can see in the screen capture below, it's very easy to look at your producers, brokers, topics, and consumer groups across various metrics, including data in, data out, messages in, and active/passive consumer groups. To make this data even more valuable than just having a killer GUI, there is a full REST API to integrate with your DevOps tools.
As you can see, the REST API is very complete and well-documented with Swagger for ease of creating clients. You could ingest this data with most DevOps tools, Python, or even Apache NiFi.
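A minimal sketch of pulling SMM metrics from a script, using only the Python standard library. The host, port, endpoint path, and the `messagesIn` field name are all assumptions for illustration; the real resource names and payload shapes are documented in the Swagger UI that ships with SMM.

```python
import json
from urllib.request import urlopen

# Hypothetical SMM host and port -- substitute your own deployment.
SMM_BASE = "http://smm-host:9991"

def fetch_topic_metrics(base=SMM_BASE):
    """Fetch topic metrics as JSON. The endpoint path is illustrative;
    consult SMM's Swagger docs for the actual resource names."""
    with urlopen(f"{base}/api/v1/admin/metrics/topics", timeout=10) as resp:
        return json.load(resp)

def total_messages_in(topics):
    """Sum an assumed 'messagesIn' counter across a list of topic dicts,
    e.g. [{"name": "t1", "messagesIn": 5}, ...]."""
    return sum(t.get("messagesIn", 0) for t in topics)
```

From here the numbers can be forwarded to whatever your DevOps tooling expects, or, as the article notes, ingested directly with a tool like Apache NiFi instead of hand-rolled code.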
What is also very nice about this new tool is that it builds on recent Apache Kafka releases, including 1.1.1 (part of the new HDF 3.2). You can now trace ingest from Apache NiFi to Apache Kafka to Apache Hive with the help of Apache Atlas, Apache Ranger, and SMM. The clean integration of these tools, and the ability with DataPlane to manage beyond a single cloud, cluster, or tool, is game-changing. I can now find my messages on Amazon, Google Cloud, Azure, and on-premises with one platform, one tool, one company, all in open source.
For additional details, a whitepaper is available for download in PDF format.
GitHub Repo — APP
GitHub Repo — SERVER
Official Release Blog