Improving the Performance of WSO2 MB by Controlling Buffer Limits

Get tips for boosting the performance of WSO2 Message Broker, an open-source, lightweight distributed message-brokering server.

By Sajith Ekanayaka, Malith Jayasinghe, Asanka Abeyweera, and Ruwan Linton · Aug. 06, 18 · Tutorial


Introduction

WSO2 Message Broker (WSO2 MB) is a 100% open-source, lightweight, easy-to-use, distributed message-brokering server and a core component of the WSO2 Enterprise Integrator. Its underlying messaging framework is powered by Andes, a distributed message-brokering system compatible with the Advanced Message Queuing Protocol (AMQP).

Flow control is a technique message brokers use to prevent fast producers from overloading slow consumers. A fast-producer/slow-consumer scenario can arise for several reasons; for example, the consumer or the broker may be running on a low resource footprint. In such cases, messages accumulate inside the broker, which can overload it and exhaust resources such as memory.

WSO2 Message Broker supports buffer-limit-based flow control: message acceptance is blocked when buffer usage reaches a high-limit and unblocked when it falls back to a low-limit. A large high-limit increases the number of messages held in memory before they are persisted to the database, which can raise the overall publishing rate at the cost of reliability. In this article, we analyze the impact of these buffer limits on the broker's performance under a range of scenarios.
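
To make the mechanism concrete, the following minimal Java sketch shows how high/low-watermark flow control of this kind typically works. It is illustrative only, not WSO2 MB's actual implementation; all class and method names are invented for the example, and thread-safety is simplified.

    import java.util.concurrent.atomic.AtomicInteger;

    // A minimal sketch of buffer-limit-based flow control (high/low watermark).
    class BufferFlowControl {
        private final int lowLimit;   // resume publishers at or below this
        private final int highLimit;  // block publishers at or above this
        private final AtomicInteger buffered = new AtomicInteger();
        private volatile boolean flowControlActive = false;

        BufferFlowControl(int lowLimit, int highLimit) {
            this.lowLimit = lowLimit;
            this.highLimit = highLimit;
        }

        // Called when a message enters the in-memory buffer.
        void onMessageBuffered() {
            if (buffered.incrementAndGet() >= highLimit) {
                flowControlActive = true;   // ask publishers to stop sending
            }
        }

        // Called when a message is persisted/delivered and leaves the buffer.
        void onMessageRemoved() {
            if (buffered.decrementAndGet() <= lowLimit) {
                flowControlActive = false;  // ask publishers to resume
            }
        }

        boolean isFlowControlActive() {
            return flowControlActive;
        }
    }

The gap between the two limits provides hysteresis: publishers are not toggled on and off on every message, but only when buffer usage swings across the whole low-to-high range.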

Performance Test Details

In this section, we provide the details of the performance tests we conducted. We configured 10 JMS clients to send messages to a queue in the message broker and 10 JMS clients to receive the messages published to this queue. The following figure shows the deployment diagram.

We conducted all the performance tests on Amazon EC2 on the following instance types for two workload types using 1 KB and 10 KB message sizes.

  • MB node: c4.2xlarge
  • Publisher: t2.medium
  • Subscriber: t2.medium

The two workload types are described below, followed by a sketch of a JMS publisher like the ones used in the tests:

    Load type   Message publishing rate (JMeter)
    ---------   --------------------------------
    Constant    5000 TPS
    Burst       50 TPS for 30 s, 1000 TPS for 10 s
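
The publishers and subscribers are plain JMS clients. The minimal publisher sketch below is in the style of the WSO2 MB client samples; the JNDI factory class, connection URL, credentials, and queue name are illustrative assumptions, so check the samples shipped with your broker version for the exact values.

    import java.util.Properties;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class QueuePublisher {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // JNDI settings: factory class, URL, and credentials are assumptions.
            props.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.wso2.andes.jndi.PropertiesFileInitialContextFactory");
            props.put("connectionfactory.connectionFactory",
                    "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'");
            props.put("queue.testQueue", "testQueue");

            Context ctx = new InitialContext(props);
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("connectionFactory");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer((Queue) ctx.lookup("testQueue"));

            // 1 KB payload, matching the smaller of the two workloads above.
            String payload = new String(new char[1024]).replace('\0', 'x');
            for (int i = 0; i < 1000; i++) {
                producer.send(session.createTextMessage(payload));
            }
            connection.close(); // closes the session and producer as well
        }
    }

In the actual tests, the publishing rate was driven by JMeter rather than a hand-rolled loop; the sketch only shows the shape of the client-side code.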

Performance Results

This section presents the performance results. It is worth noting that there are two main types of buffer limits in WSO2 MB:

  • Global buffer limits: Global limits that enable/disable flow control globally, based on the aggregated buffer usage across all channels.
  • Local buffer limits: Channel-specific limits that enable/disable flow control locally, based on each channel's own buffer usage.

Each of the above buffer limits has a low-limit and a high-limit. As pointed out, the message broker requests the client to stop sending messages when the number of buffered messages reaches the high-limit, and requests it to resume sending once the count falls back to the low-limit.

Local buffer limits apply to individual AMQP channels (or JMS sessions), while global buffer limits apply to the aggregated buffer usage across all active channels. As the number of channels grows, the global high-limit becomes more likely to be hit; with only a few channels, the local high-limit is more likely to be hit first. The Message Broker enables flow control as soon as either high-limit is reached.
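
A rough sketch of how the two limits combine is shown below. Again, this is illustrative only and not the broker's actual code; the names and numbers are invented for the example:

    // Illustrative only: flow control engages when either the channel-local
    // buffer or the global (aggregated) buffer crosses its high-limit.
    class ChannelFlowDecision {
        static boolean shouldEnableFlowControl(int localCount, int localHighLimit,
                                               int globalCount, int globalHighLimit) {
            return localCount >= localHighLimit || globalCount >= globalHighLimit;
        }

        public static void main(String[] args) {
            // Many channels: the aggregated count trips the global limit first.
            System.out.println(shouldEnableFlowControl(300, 1_000, 10_500, 10_000));  // true
            // Few channels: a single busy channel trips its local limit first.
            System.out.println(shouldEnableFlowControl(1_200, 1_000, 2_400, 10_000)); // true
        }
    }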

The figures below show the impact of these buffer limits on the publishing and consuming rates. The x-axes of the plots represent the local buffer limits (low limit - high limit) and the global buffer limits (low limit - high limit), respectively.

In these plots, we compute the publishing rate by dividing the total number of messages published by the duration of the test. This can be considered the effective publishing rate, and it may not equal the publishing rate specified in JMeter on the publisher side: the effective rate depends on a number of factors, such as how fast the broker can process messages and how fast the consumers consume them.
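
The computation itself is simple; the figures below are hypothetical and serve only to illustrate it:

    public class EffectiveRate {
        public static void main(String[] args) {
            // Hypothetical figures, not taken from the tests above.
            long totalMessagesPublished = 1_500_000;
            long testDurationSeconds = 600;
            // Effective publishing rate = total messages published / test duration.
            double effectiveRateTps = (double) totalMessagesPublished / testDurationSeconds;
            System.out.println(effectiveRateTps + " TPS"); // 2500.0 TPS
        }
    }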

Let us now try to understand this behavior. Publishing and consuming rates improve as buffer sizes increase, because larger buffers reduce how often flow control engages. With minimal flow control, publishers can publish without interruption, leading to higher publishing and consuming rates. We also notice that, as the buffer size grows, the 1 KB message size benefits more than the 10 KB message size, particularly under burst loads.

It is worth pointing out that, as we increase the buffer size, the message broker can run out of memory (OOM), particularly under high publishing rates and large message sizes. The way to deal with this is to compute the buffer size from the available memory (i.e., the heap size) and the largest expected message size; this ensures the broker will not run out of memory even under peak load. If you need to increase the buffer further, increase the heap memory first, and then check that the GC behavior remains normal.
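
As a back-of-the-envelope illustration of that sizing rule: the 25% heap share below is an assumption made for the example, not a WSO2 recommendation.

    // Illustrative sizing only: bound the buffer high-limit by available heap.
    class BufferSizing {
        public static void main(String[] args) {
            long heapBytes = 8L * 1024 * 1024 * 1024; // 8 GB heap, as in these tests
            long maxMessageBytes = 10L * 1024;        // 10 KB largest expected message
            double heapShareForBuffers = 0.25;        // assumed safe fraction of heap

            long maxBufferedMessages =
                    (long) (heapBytes * heapShareForBuffers) / maxMessageBytes;
            System.out.println("Buffer high-limit <= " + maxBufferedMessages); // ~209,715
        }
    }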

The table below shows the GC throughput of the message broker for a heap size of 8 GB and a message size of 10 KB.

    Local buffer limits   Burst/Constant   GC throughput (%)
    -------------------   --------------   -----------------
    100_1000              burst            99.72
    200_2000              burst            99.85
    300_3000              burst            99.67
    400_4000              burst            99.68
    100_1000              constant         99.74
    200_2000              constant         99.74
    300_3000              constant         99.73
    400_4000              constant         99.68

We note from these results that GC throughput stays close to 100% in all cases, meaning GC has minimal impact on the message broker's performance.
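
GC throughput here is the usual definition: the percentage of wall-clock time the JVM spent running application code rather than paused for garbage collection. The figures below are hypothetical and only illustrate the computation; in practice the pause total comes from the GC log of the test run.

    class GcThroughput {
        public static void main(String[] args) {
            double totalRunTimeMs = 600_000; // a 10-minute run (example)
            double gcPauseTimeMs = 1_680;    // sum of all GC pauses (example)
            // GC throughput (%) = time not spent in GC pauses / total run time.
            double gcThroughput =
                    100.0 * (totalRunTimeMs - gcPauseTimeMs) / totalRunTimeMs;
            System.out.println(gcThroughput + "%"); // 99.72%
        }
    }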

Conclusion

In this article, we discussed how flow control works in WSO2 MB, which supports buffer-limit-based flow control. We presented performance results for different buffer limits under different workload types. In certain cases, performance improves as the buffer size increases; however, when increasing the buffer size, we need to ensure the additional buffered messages do not cause the Message Broker to run out of memory.


