
Log Aggregation Capabilities and Performance: Part I

Here is an in-depth comparison of log aggregation using a partitioned, disk-based message queue like Kafka versus an in-memory data store like Redis, performance metrics included.




Today, there is no question that we generate more logs than we ever have before. However, because of the sheer amount of data that must constantly be analyzed to resolve various issues, the process is becoming less and less straightforward.

Essentially, log management helps to integrate all logs for analysis. An important preliminary phase is log aggregation, which is the act of collecting event logs from different systems and data sources. It includes the flow and tools necessary to gather all data into a single, secure data repository. The log repository is then analyzed to generate and present the metrics and insights the operations team needs.

Today, the most popular tools for log aggregation are Kafka and Redis. Both tools provide data streaming and aggregation functionality in their own respective ways. In this post, we are going to compare the two in terms of their capabilities and their performance in tests.

Capabilities

About Kafka

Kafka is a distributed, partitioned, and replicated commit log service that provides messaging functionality with a unique design. We can use this functionality for the log aggregation process.

The basic messaging terms that Kafka uses are:

  • Topic: These are the categories in which messages are published.
  • Producer: This is the process that publishes messages into Kafka’s topics.
  • Consumer: This process subscribes to topics and processes the messages. Consumers are part of a consumer group which is composed of many consumer instances for scalability and fault tolerance.
  • Broker: Each server in a Kafka cluster is called a broker.

The logs fetched from different sources can be fed into the various Kafka topics through several producer processes and then consumed by consumers.
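To make this concrete, here is a minimal producer sketch using the Java producer API. The broker address, topic name, and log line are placeholder assumptions for illustration, not details from the original article:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogProducer {
        public static void main(String[] args) {
            // Broker address and serializers for plain string log lines.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Each log line becomes one message; the key (here, the source
                // host name) determines which partition the message lands in.
                producer.send(new ProducerRecord<>("app-logs", "web-01",
                        "2016-05-12T10:15:00Z ERROR payment service timed out"));
            }
        }
    }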

Kafka provides various ways to push data into the topics:

  • From the command-line client: Kafka has a command-line client that takes input from a particular file or standard input and pushes it as messages into the Kafka cluster.
  • Using Kafka Connect: Kafka provides a tool that uses connectors to implement custom logic for importing/exporting data to and from the cluster.
  • By writing custom integration code: The final way is to write code that integrates data sources with Kafka using the Java producer API, as in the producer sketch above.
In Kafka, each topic has log data partitions that are managed by the server:

[Image: Kafka log data partitions (source: Kafka documentation)]

Kafka distributes the partitioned logs among several servers in a distributed system. Each partition is replicated across a number of servers for fault tolerance. Due to this partitioned system, Kafka provides parallelism in processing: more than one consumer from a consumer group can retrieve data simultaneously, in the same order that messages are stored.
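As a rough illustration of that parallelism, here is a consumer sketch using the Java consumer API (the broker address, group ID, and topic name are assumptions for this example). Every instance started with the same group.id splits the topic’s partitions among the group, each reading its share in order:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class LogConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            // All instances sharing this group.id divide the topic's
            // partitions between them; that is the source of parallelism.
            props.put("group.id", "log-aggregators");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("app-logs"));
                while (true) {
                    // Poll for new log messages and process them in
                    // partition order.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d %s%n",
                                record.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }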

In addition, Kafka allows the use of as many servers as needed. Because it uses disk for its storage, it can be slow to load; however, that same disk capacity means it can store a large amount of data (i.e., terabytes) for a longer retention period.

About Redis

Redis is a bit different from Kafka in terms of its storage and various functionalities. At its core, Redis is an in-memory data store that can be used as a high-performance database, a cache, and a message broker. It is perfect for real-time data processing.

The various data structures supported by Redis are strings, hashes, lists, sets, and sorted sets. Redis also has clients written in several languages, which can be used to write custom programs for the insertion and retrieval of data. This is an advantage over Kafka, which officially ships only a Java client. The main similarity between the two is that both provide a messaging service, but for the purpose of log aggregation, we can use Redis’ various data structures to do it more efficiently.
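For example, one of those data structures, the list, can serve as a simple log queue. The sketch below uses Jedis, one common Java client for Redis; the host and key name are assumptions for illustration:

    import redis.clients.jedis.Jedis;

    public class RedisLogShipper {
        public static void main(String[] args) {
            // Host and key name are placeholders for this sketch;
            // "logstash" is a conventional list key when Redis buffers
            // logs for an ELK pipeline.
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // RPUSH appends each log line to the tail of the list,
                // so the list behaves as a FIFO queue of raw events.
                jedis.rpush("logstash",
                        "2016-05-12T10:15:00Z ERROR payment service timed out");
            }
        }
    }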

In a previous post, we described in detail how Redis can support a production ELK (Elasticsearch, Logstash, Kibana) Stack. In that example, we explained how Redis can serve as an entry point for all logs. It is used as a messaging service and buffers all data. Only when Logstash and Elasticsearch have the required resources does Redis release the aggregated log data, ensuring that no data is lost due to a lack of resources.
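On the consuming side, this buffering pattern boils down to a blocking pop: events wait in the Redis list until the downstream reader is ready to pull them. Here is a rough sketch of that pattern with Jedis, standing in for the role a Logstash Redis input plays (host and key name are again assumptions):

    import java.util.List;
    import redis.clients.jedis.Jedis;

    public class RedisLogDrainer {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                while (true) {
                    // BLPOP blocks (timeout 0 = indefinitely) until an event
                    // is available, then removes it from the head of the
                    // list; the consumer pulls only when it has capacity.
                    List<String> entry = jedis.blpop(0, "logstash");
                    String logLine = entry.get(1); // element 0 is the key name
                    System.out.println("indexing: " + logLine);
                }
            }
        }
    }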

Stay tuned for Part II, coming soon to a DZone near you!


Topics: kafka, redis, log aggregation, big data

Published at DZone with permission of Asaf Yigal, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
