
Log Aggregation Capabilities and Performance: Part II

In Part II, we look at performance and get a nifty summary!

By Asaf Yigal · Dec. 02, 16 · Opinion


Did you miss Part I? Check it out here!

Performance

When both Redis and Kafka were performance-tested, the results were very interesting.

Kafka

Kafka's popular messaging queue system is tested extensively by major companies such as LinkedIn; in fact, LinkedIn's engineers wrote the first version of Kafka. In their tests, LinkedIn used Kafka in cluster mode with six machines, each with an Intel Xeon 2.5 GHz processor with six cores, 32 GB of RAM, and six 7200 RPM SATA drives.

Producers

For the first test, one topic was created with six partitions and no replication. Fifty million records were generated in a single thread using a single producer, with a message size of 100 bytes. The peak throughput with this setup was a bit over 800k records/sec, or 78 MB/sec. In a different test, they used the same base settings with three producers running on three separate machines. In this case, the peak is much higher, at around 2,000k records/sec (roughly two million), or 193.0 MB/sec.
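
For reference, a comparable single-producer run can be reproduced with the perf tool that ships with Kafka. The sketch below uses current script and flag names, which differ from the tool LinkedIn used in its original tests; the topic name and broker address are placeholders:

    # Sketch of a single-producer benchmark: 50M records of 100 bytes, unthrottled.
    bin/kafka-producer-perf-test.sh \
      --topic perf-test \
      --num-records 50000000 \
      --record-size 100 \
      --throughput -1 \
      --producer-props bootstrap.servers=localhost:9092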

Asynchronous vs. Synchronous Replication

The second batch of tests dealt with the replication method. Using the same number of records, the same message size, and a single producer as in the previous test, there were now three replicas. Replication worked asynchronously, and throughput peaked at around 766k records/sec, or 75 MB/sec.

However, when replication was synchronous, meaning that the master waits for acknowledgment from the replicas, the throughput peak was lower, at around 420k records/sec, or 40 MB/sec. Though this is a reliable setup, since it ensures all messages arrive, it results in significantly lower throughput because of the time it takes the master to acknowledge receipt of the messages.
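
In the modern Kafka producer API, this trade-off is controlled by the acks setting; a minimal sketch of the relevant properties (values are illustrative, not the ones LinkedIn used):

    # acks=1   -> leader acknowledges immediately (async-style, higher throughput)
    # acks=all -> leader waits for all in-sync replicas (sync-style, more reliable)
    acks=all
    # broker/topic-level setting commonly paired with acks=all:
    min.insync.replicas=2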

Consumers

In this case, they used the exact same number and size of messages, as well as six partitions and three replicas. They applied the same approach of increasing the number of consumers. In the first test, with a single consumer, the highest throughput was 940k records/sec, or 89 MB/sec. Not surprisingly, when three consumers were used, throughput reached 2,615k records/sec, or 249.5 MB/sec.
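
The consumer side can be measured with Kafka's bundled consumer perf script; a sketch using current flag names (older releases took the broker address through different flags, and the topic and address here are placeholders):

    # Sketch of a single-consumer benchmark reading the 50M-record topic.
    bin/kafka-consumer-perf-test.sh \
      --bootstrap-server localhost:9092 \
      --topic perf-test \
      --messages 50000000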

Kafka's throughput depends on the combination of the number of producers, the number of consumers, and the replication method. For this purpose, one of the tests included a single producer, a single consumer, and three replicas in async mode. The peak achieved in this test was 795k records/sec, or 75.8 MB/sec.

Message Processing Throughput

As shown below, we can expect a decrease in records per second as the record size increases:

[Figure: message size vs. throughput (rec/sec); source]

However, as we can see in the graph below, as the record size in bytes grows, so does the throughput in MB/sec. Smaller messages produce lower throughput because the per-message overhead of queuing impacts the machine's performance:

[Figure: message size vs. throughput (MB/sec); source]

In addition, as we can see in the graph below, the total amount of data already consumed does not impact Kafka's performance.

[Figure: throughput vs. size; source]

Kafka relies heavily on the machine's memory (RAM). As the previous graph shows, using memory and storage together is an optimal way to maintain steady throughput. Kafka's performance also depends on the data consumption rate: if consumers don't consume data fast enough, Kafka has to read from disk rather than from memory, which slows it down.

Redis Throughput

Let's examine Redis' performance when it comes to message processing rates. We used four basic Redis commands to measure performance: SET, GET, LPUSH, and LPOP. These are common Redis commands for storing and retrieving Redis values and lists.

In this test, we generated two million requests. Keys were drawn randomly from the range 0 to 999,999, with a single value size of 100 bytes. Redis was tested using the redis-benchmark command.
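
A sketch of that run using redis-benchmark's standard flags (-n for the request count, -r for the random key range, -d for the value size):

    # 2M requests, random keys in 0..999999, 100-byte values, four commands under test.
    redis-benchmark -t set,get,lpush,lpop -n 2000000 -r 1000000 -d 100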

Redis Pipelining

As shown below, in our first test we observed a significant performance improvement when using Redis pipelining. The reason is that with a pipeline, we can send multiple requests to the server without waiting for replies, and then check all the replies in a single step.

[Figure: throughput vs. command, with and without Redis pipelining (2.6 GHz Intel Core i5, 8 GB RAM)]
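
The pipelined variant of the same run only needs redis-benchmark's -P flag, which batches commands per round trip (16 here is illustrative):

    # Same benchmark with pipelining: 16 commands per request pipeline.
    redis-benchmark -t set,get,lpush,lpop -n 2000000 -r 1000000 -d 100 -P 16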

The data size on Redis can vary, and the graph below shows throughput with different value (message) sizes. It's easy to see that as the message size increases, throughput, in terms of requests per second, decreases. As shown below, this behavior is consistent across all four commands:

[Figure: throughput vs. message size (bytes)]

In addition, as shown below, we measured the data written in bytes. The number of bytes written in Redis grew as we increased the number of records, which is intuitive and matches what we observed in Kafka.

[Figure: throughput vs. value size for GET commands]

Redis snapshotting is one of Redis' persistence modes. It produces point-in-time snapshots according to the user's preferences, for example, based on the time elapsed since the last snapshot or the number of writes. However, if a Redis instance restarts or crashes, all data written between consecutive snapshots is lost. Redis persistence does not guarantee durability in such cases and is limited to applications for which losing recent data is acceptable.
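
Those snapshot triggers live in redis.conf as save directives; the sketch below shows the classic defaults rather than a recommendation:

    # Snapshot if 1 key changed in 900s, 10 keys in 300s, or 10000 keys in 60s.
    save 900 1
    save 300 10
    save 60 10000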

Kafka vs. Redis: A Summary

As mentioned above, Redis is an in-memory store. This means that it uses its machine's main memory for storage and processing, which makes it much faster than disk-based Kafka. The only problem with Redis' in-memory store is that we can't store large amounts of data for long periods of time.

Since main memory is smaller than disk, we have to clear it regularly by automatically moving data from memory to disk to make room for new data. Redis offers persistence by allowing us to dump the dataset to disk if necessary. Redis also follows a master-slave architecture, and replication is only useful when persistence is turned on in the master.

In addition, Redis doesn't have the same concept of parallelism that Kafka does, where multiple processes can consume the data at the same time.

Based on both tools' features, and even though the above tests for Kafka and Redis are not exactly equivalent, we can still summarize: when dealing with real-time message processing with minimal latency, try Redis first. However, when messages are large and the data should be reused, Kafka should be your first consideration.

A Summary Table

[Table: Kafka vs. Redis differences]

The Next Step: Robust Log Data Shipment

As mentioned at the beginning of this post, tools such as Kafka and Redis can be a great way to protect Elasticsearch, which is very susceptible to load. With a simple architecture (see the picture below), Fluentd, which ships data from Kafka or Redis to Elasticsearch, can protect the system against interrupted data streams. In this setup, data continues to flow even if your Elasticsearch cluster is down. Once the cluster is restored, you can reconnect Fluentd and Elasticsearch and continue indexing the messages queued in Kafka or Redis.

Another scenario that calls for a robust log aggregator is when you want to strengthen your ELK Stack and scale it up. For example, with multiple Kafka queues, you can dedicate as many Logstash instances as you want to filling Elasticsearch with data from the topics. This is useful when dealing with large Elasticsearch clusters that need to handle the indexing of large amounts of data.

[Figure: integration of Kafka and Fluentd for shipping logs into Elasticsearch]

Integration With Fluentd

Kafka

When integrating Fluentd with Kafka to put data into or extract data from a topic, we can write a custom Java application using the Kafka consumer API.

We used the following Java application for this purpose: https://github.com/treasure-data/kafka-fluentd-consumer

Modify config/fluentd-consumer.properties with an appropriate configuration, then update the Fluentd configuration as sketched below.


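A minimal sketch of that Fluentd configuration, assuming the consumer delivers events over Fluentd's standard forward protocol (the port is the conventional default and should match what fluentd-consumer.properties points at):

    # Accept events shipped by kafka-fluentd-consumer.
    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>
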
Redis

  • To integrate Redis with Fluentd, we can write a plugin using any of the clients supported by Redis (available for various programming languages).
  • We used the following input plugin for Fluentd, written in Ruby: https://github.com/onurbaran/fluent-plugin-redislistener
  • This plugin extracts data from the Redis list data structure and puts it into Fluentd.
  • To use this plugin, you have to install a Ruby client supported by Redis (we used redis-rb).
  • You can find the redis-rb client here: https://github.com/redis/redis-rb

Integrating With Logz.io

Both Kafka and Redis can be integrated in the same manner with the Logz.io ELK Stack. All you have to do is install the Fluentd Logz.io plugin, as sketched below.


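A sketch of the install step, assuming a gem-based Fluentd setup (the plugin is published as fluent-plugin-logzio; td-agent installs would use td-agent-gem instead):

    gem install fluent-plugin-logzio
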
After installing the output plugin, update your Fluentd configuration. As you can see in the sketch below, you can define as many topics as you want in a single cluster.


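A sketch of that output section, modeled on the fluent-plugin-logzio documentation; the endpoint URL and parameters may vary by plugin version and account region:

    # your_match and TOKEN are placeholders explained below.
    <match your_match>
      @type logzio_buffered
      endpoint_url https://listener.logz.io:8071?token=TOKEN&type=fluentd
    </match>
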
The token value can be found on the User Settings page on Logz.io, as shown below, and your_match can be *.** to match all events:

[Figure: Logz.io user settings]

A Final Note

This article explains the benefits of using queues or in-memory stores like Kafka and Redis for log aggregation. Both are beasts in their categories, but as described above, they operate quite differently. Redis' in-memory database is an almost perfect fit for use cases where short-lived messages and persistence aren't required. Kafka, on the other hand, is a high-throughput distributed queue built for storing large amounts of data for longer periods of time.


Published at DZone with permission of Asaf Yigal, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
