
Improving the Performance of WSO2 MB by Controlling Buffer Limits


Get tips for boosting the performance of WSO2 Message Broker, an open-source, lightweight distributed message-brokering server.


Introduction

WSO2 Message Broker (WSO2 MB) is a 100% open-source, lightweight, easy-to-use, distributed message-brokering server. It is a core component of the WSO2 Enterprise Integrator. Its underlying messaging framework is powered by Andes, a distributed message-brokering system compatible with the Advanced Message Queuing Protocol (AMQP).

Flow control is a technique used in message brokers to prevent fast producers from overloading slow consumers. There can be several reasons for a fast-producer, slow-consumer scenario; for example, the consumer or the message broker may be running with a small resource footprint. In such scenarios, messages accumulate within the broker and can overload it at a given moment, causing broker instances to run out of resources, such as memory.

WSO2 Message Broker supports buffer limit-based flow control. This involves blocking message acceptance when the buffer usage reaches a high limit and unblocking it when the usage drops back to a low limit. Setting a large high limit increases the number of messages held in memory before they are persisted to the database. This may yield a higher overall message publishing rate, but with reduced reliability. In this article, we analyze the impact of buffer limits on the performance of the message broker under a range of scenarios.
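To make the blocking/unblocking behavior concrete, here is a minimal sketch of a high/low watermark controller. The class and method names (FlowController, onMessageBuffered, onMessageDrained) are invented for illustration and are not WSO2 MB's actual internal API:

```java
// Illustrative sketch of buffer limit-based flow control: block publishers
// when the in-memory buffer reaches the high limit, and unblock only once
// it has drained back down to the low limit (hysteresis avoids rapid
// block/unblock oscillation around a single threshold).
public class FlowController {
    private final int lowLimit;
    private final int highLimit;
    private int buffered = 0;        // messages currently held in memory
    private boolean blocked = false; // is flow control currently active?

    public FlowController(int lowLimit, int highLimit) {
        this.lowLimit = lowLimit;
        this.highLimit = highLimit;
    }

    /** Called when a message enters the in-memory buffer. */
    public synchronized void onMessageBuffered() {
        buffered++;
        if (!blocked && buffered >= highLimit) {
            blocked = true; // ask publishers to stop sending
        }
    }

    /** Called when a message is persisted/delivered and leaves the buffer. */
    public synchronized void onMessageDrained() {
        buffered--;
        if (blocked && buffered <= lowLimit) {
            blocked = false; // ask publishers to resume sending
        }
    }

    public synchronized boolean isBlocked() {
        return blocked;
    }
}
```

Note that unblocking happens only when the buffer drains to the low limit, not as soon as it falls below the high limit; this is what gives the publisher sustained breathing room once flow control kicks in.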

Performance Test Details

In this section, we provide the details of the performance tests we conducted. We configured 10 JMS clients to send messages to a queue in the message broker and 10 JMS clients to receive the messages published to this queue. The following figure shows the deployment diagram.

We conducted all the performance tests on Amazon EC2, using the following instance types, for two workload types with 1 KB and 10 KB message sizes.

  • MB node: c4.2xlarge
  • Publisher: t2.medium
  • Subscriber: t2.medium

The workload types are described below:

  Load type | Message publishing rate (JMeter)
  ----------|---------------------------------
  Constant  | 5000 TPS
  Burst     | 50 TPS for 30s, 1000 TPS for 10s

Performance Results

This section presents the performance results. It is worth noting that there are two main buffer limits in WSO2 MB.

  • Global buffer limits: the global limits that enable/disable flow control globally, based on the aggregated buffer usage across all active channels.
  • Local buffer limits: the channel-specific limits that enable/disable flow control locally, based on the local buffer usage.

Each of the above buffer limits has a low limit and a high limit. As pointed out earlier, the message broker requests the client to stop sending messages when the number of messages in the buffer reaches the high limit, and requests the client to resume sending messages when the number of buffered messages drops to the low limit.

Local buffer limits are local to individual AMQP channels (or JMS sessions), while the global buffer limits apply to the aggregated buffer usage across all active channels. As the number of channels increases, the global high limit is more likely to be hit; conversely, when the number of channels is small, the local high limit is more likely to be hit. The Message Broker enables flow control when either of the high limits is reached.

The following figures show the impact of these buffer sizes on the publishing and consuming rates.

The x-axes of the above plots represent the local buffer (low limit - high limit) and global buffer (low limit - high limit) respectively.

In the above plots, we compute the publishing rate by dividing the total number of messages published by the duration of the test. The publishing rate computed here can be considered the effective publishing rate, and it may not equal the publishing rate specified in JMeter on the publisher side. The effective publishing rate depends on a number of factors, such as the rate at which the broker can process messages and the rate at which consumers consume them.
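As a small illustration of this calculation (class and method names are ours, chosen for clarity):

```java
// Effective publishing rate: total messages actually published divided by
// the wall-clock test duration, in seconds.
public class EffectiveRate {
    public static double effectiveTps(long totalMessagesPublished, double testDurationSeconds) {
        return totalMessagesPublished / testDurationSeconds;
    }
}
```

For example, 300,000 messages published over a 60-second run gives an effective rate of 5000 TPS; this matches the constant-load target only if the broker and the consumers keep up with the publishers.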

Let us now try to understand the behavior. We notice an improvement in the publishing and consuming rates as the buffer sizes increase. The reason is that an increased buffer size reduces the amount of flow control applied. With minimal flow control, the publisher can publish messages without interruption, leading to higher publishing and consuming rates. We also notice that as we increase the buffer size, we get better results for the 1 KB message size compared to the 10 KB message size. This is particularly the case for burst loads.

It is worth pointing out that as we increase the buffer size, there is a possibility of the message broker running out of memory (OOM), particularly under high publishing rates and large message sizes. The way to deal with this is to compute the buffer size based on the available memory (i.e., the heap size) and the largest message size. This ensures that the message broker will not run out of memory even under peak load. If you need to increase the buffer further, increase the heap memory before increasing the buffer size. After increasing the heap memory, it is important to check that the GC behavior is normal. The table below shows the GC throughput of the message broker when the heap size is 8 GB and the message size is 10 KB.

  Local buffer limits | Burst/Constant | GC throughput (%)
  --------------------|----------------|------------------
  100_1000            | burst          | 99.72
  200_2000            | burst          | 99.85
  300_3000            | burst          | 99.67
  400_4000            | burst          | 99.68
  100_1000            | constant       | 99.74
  200_2000            | constant       | 99.74
  300_3000            | constant       | 99.73
  400_4000            | constant       | 99.68
We note from the above results that the GC throughput is close to 100% in all cases. This means that GC has minimal impact on the message broker's performance.
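The heap-based sizing rule described earlier can be sketched as follows. The 25% heap fraction used in the example and the class/method names are assumptions for illustration, not WSO2 recommendations:

```java
// Illustrative buffer sizing: bound the buffer high limit so that buffered
// messages alone cannot exhaust the heap, even at the largest message size.
public class BufferSizer {
    /**
     * Rough upper bound for the buffer high limit: the fraction of the heap
     * we are willing to spend on buffered messages, divided by the largest
     * expected message size.
     */
    public static long maxHighLimit(long heapBytes, long maxMessageBytes, double heapFraction) {
        return (long) (heapBytes * heapFraction) / maxMessageBytes;
    }

    public static void main(String[] args) {
        long heap = 8L * 1024 * 1024 * 1024; // 8 GB heap, as in the test above
        long msg = 10 * 1024;                // 10 KB largest message
        System.out.println(BufferSizer.maxHighLimit(heap, msg, 0.25)); // prints 209715
    }
}
```

With these numbers, keeping the high limit below roughly 210,000 messages means buffered messages can consume at most a quarter of the heap, leaving headroom for the rest of the broker's allocations.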

Conclusion

In this article, we discussed how flow control works in WSO2 MB. WSO2 Message Broker supports buffer limit-based flow control. We presented performance results for different buffer limits under different workload types. We note that in certain cases we can gain performance by increasing the buffer size. However, when we increase the buffer size, we need to ensure there is enough heap memory so that the Message Broker does not run out of memory.



