ESB Performance Pitfalls

An Enterprise Service Bus (ESB) is at the heart of many Service-Oriented Architecture (SOA) solutions, a technology that is being widely adopted nowadays. With plenty of ESB offerings available, how do you choose the right one, the one that best suits your needs? Consider the attributes that might matter to you: ease of service development, ease of deployment, features for message manipulation, transactional processing, message persistence, supported endpoints, memory consumption, license/support cost, source availability, documentation, and performance.

How can I measure performance?

Performance seems to be one of the most frequently discussed attributes of available ESBs. Vendors keep creating scenarios that make their implementation look best, new comparative studies are published, and so on. There is no mystery about that, as companies want to know what hardware their applications require. But how do we measure ESB performance for real? It is not an easy question to answer. Only for a relatively short time have we had the SOA Manifesto, which standardizes the definition of what SOA is. Now we also want to define standardized means to measure its performance.

In the past, there have been multiple attempts at defining SOA benchmarks. In 2007, there was a project to measure the most widely used ESBs. WSO2 performed ESB Performance Testing Round 1, Round 2, and Round 3. But it is somewhat outdated now and limited to Web Service scenarios only.

Recently, these tests were revitalized, this time promoted by AdroitLogic and adapted to their ESB. But this has not become a widely adopted ESB performance measurement technique. Some of the possible reasons for this are discussed later in this article.

At the end of 2009, the Standard Performance Evaluation Corporation (SPEC) entered the scene by founding a new SOA Subcommittee. This subcommittee took over an initiative that had previously been driven mainly by IBM. The subcommittee's goal is to develop a new industry-standard benchmark for measuring the performance of typical middleware, database, and hardware deployments of SOA-based applications. From their early findings, they are fully aware of the risks of not having a standardized SOA benchmark. The risks are mainly:

  1. promoting Web Service-specific benchmarks as general SOA benchmarks,
  2. creating vendor-dependent benchmarks,
  3. failing to create a SOA benchmark at all, which would hinder wide SOA adoption.

One of the basic requirements for this performance benchmark is that multiple vendors must agree on it. The benchmark will consist of three parts. The first part, called Services, will be composed of several Web Services handling automated tasks. The second part, called Choreography, will contain business processes (both fully automated and with human tasks). The last part, called Integration, will exercise core ESB features including service virtualization and message routing, transformation, and modification.

Common pitfalls of current benchmarks

Let's discuss common drawbacks that existing SOA benchmarks suffer from. The list is by no means comprehensive, but it touches the most visible issues that might prevent wide adoption of the current benchmark scenarios.

Web Services

Web Services are definitely important for a SOA solution. But bear in mind that Web Services are not equal to SOA, even though they are often related. Many of today's SOA solutions do not primarily use Web Services; they use transports like messaging (JMS), FTP, files, and databases. Conversely, many of today's Web Service applications are not SOA solutions at all; they are just simple remote procedure calls. A SOA performance benchmark should not stick to Web Services only.
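
As a minimal sketch of what a non-SOAP integration can look like, here is a plain JMS producer dropping a message onto a queue for an ESB to pick up. The JNDI names and the queue are placeholder assumptions; they depend entirely on how your broker or ESB is configured.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Minimal JMS producer: an integration that never touches SOAP or HTTP.
// The JNDI names ("ConnectionFactory", "queue/orders") are placeholders.
public class JmsOrderSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/orders");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("order-id=42;item=widget");
            producer.send(message); // the ESB picks this up from the queue
        } finally {
            connection.close();
        }
    }
}
```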

XML Processing

Web Services typically use SOAP messages, which are XML documents of a specific format. XML manipulation is the Achilles' heel of Web Services' performance. First, you need to convert your data to XML on the client side. An ESB then parses this XML, performs an operation on it, and creates a response XML that the client must parse again. Parsing XML documents is an expensive operation.

It is important to compare how ESBs deal with SOAP messages to show that the ESB is not the bottleneck. I tried using JProfiler on some ESBs, and it showed that creating and parsing XML is the real performance blocker. Now, my question is: how much of the communication in your ESB is anything other than SOAP?
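
You can get a rough feel for this cost even without a profiler. The sketch below is not a rigorous benchmark (fixed message size, no isolated warm-up); it merely parses the same small SOAP envelope repeatedly with the standard JAXP API to show how quickly the parse time adds up:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

// Crude illustration of the cost of repeatedly parsing a SOAP envelope.
public class SoapParseCost {
    public static void main(String[] args) throws Exception {
        String soap = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<soapenv:Body><getQuote><symbol>IBM</symbol></getQuote></soapenv:Body>"
                + "</soapenv:Envelope>";
        byte[] payload = soap.getBytes(StandardCharsets.UTF_8);

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        int iterations = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            builder.reset();                              // builders are reusable, not thread-safe
            builder.parse(new ByteArrayInputStream(payload));
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(iterations + " parses took " + elapsedMs + " ms");
    }
}
```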

Fortunately, there is the possibility of using REST (Representational State Transfer) as the communication protocol instead. However, you then need to define your own message format.
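
As a sketch, a REST-style call can be as simple as an HTTP POST carrying a hand-rolled payload. The endpoint URL and the key=value format below are made up for illustration; REST prescribes no message format, which is precisely the point:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// REST-style call with a hand-rolled plain-text payload instead of a SOAP
// envelope. The endpoint URL and payload format are hypothetical.
public class RestQuoteClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/quotes"); // placeholder service
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");

        byte[] body = "symbol=IBM".getBytes(StandardCharsets.UTF_8);
        OutputStream out = conn.getOutputStream();
        out.write(body);
        out.close();

        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```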

Transactions

In many applications, transactional processing is of major importance: either all operations must succeed or none, and all operations must be executed exactly once. This is what you would expect when transferring money or closing a car insurance deal online. Obviously, there is overhead needed to accomplish these natural expectations.

Accordingly, it is important to see how transactions influence performance by measuring the time it takes for several resources in separate processes to participate in a transaction driven by a shared transaction manager.
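
A sketch of what such a measured unit of work might look like, assuming a container that exposes a JTA UserTransaction at the standard JNDI name; the two resource operations are placeholders standing in for real XA-enlisted JDBC and JMS work:

```java
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// Sketch of a transaction spanning two resources under a shared
// transaction manager. Assumes a Java EE container; the resource
// operations below are placeholders for real XA-enlisted work.
public class TransferService {
    public void transfer() throws Exception {
        UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        tx.begin();
        try {
            doDatabaseUpdate();  // enlisted XA resource #1 (e.g. JDBC)
            sendJmsMessage();    // enlisted XA resource #2 (e.g. JMS)
            tx.commit();         // two-phase commit across both resources
        } catch (Exception e) {
            tx.rollback();       // either everything happens, or nothing does
            throw e;
        }
    }

    private void doDatabaseUpdate() { /* placeholder */ }
    private void sendJmsMessage()   { /* placeholder */ }
}
```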

Security

In addition to transactions, security is a major aspect of a production environment. But how much does it slow things down? This should be measured for various transports because of their different security implementations. Both authentication and authorization should be tested as well.

Currently, there is a suggested scenario about securing an unsecured service using an ESB. This is useful when you want to make an internal service publicly available: you might not require authentication behind a company firewall, but you need it for the outside world. What about the opposite scenario, where an unsecured gateway is created for a secured service?
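
A crude way to put a number on the authentication overhead is to time the same call with and without credentials. The sketch below uses HTTP Basic authentication against placeholder URLs; a serious test would also cover WS-Security, TLS, and message-level signatures:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

// Rough comparison of a plain call vs. the same call with HTTP Basic
// authentication. URLs and credentials are placeholders.
public class AuthOverheadProbe {
    static long timeCall(String endpoint, String user, String password) throws Exception {
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        if (user != null) {
            String token = Base64.getEncoder()
                    .encodeToString((user + ":" + password).getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + token);
        }
        conn.getResponseCode(); // force the request to complete
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("unsecured: " + timeCall("http://localhost:8080/svc", null, null) + " ms");
        System.out.println("secured:   " + timeCall("http://localhost:8080/svc-secure", "user", "pass") + " ms");
    }
}
```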

Concurrent Clients

While showing great performance for a single client, an ESB may be totally unusable for many concurrent clients. For stable and fair results, an ESB must be isolated from clients as much as possible. Clients must be on their own machine(s), the ESB should have its own dedicated server, and any possible proxied service (see the Virtualization scenario) must have its own machine as well. This configuration corresponds to a typical production environment.

Having the clients, the ESB server, and the proxied service on a single machine does show significant fluctuations in results. It is only logical: how many cores does your server have? Let's count. Say we want to test with 1,000 concurrent clients. The ESB should create 1,000 threads to serve the clients quickly, and the proxied service should have the same number of threads to serve the ESB's requests. This requires a total of 3,000 threads. Switching a thread context is not a cheap operation. Even if the ESB and the proxied service used only 200 threads each, we would still end up with 1,400 threads. It becomes a fight for resources, not a serious performance measurement, since you are far beyond the server's saturation point.
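
For completeness, a toy load driver might look like the sketch below. The endpoint and thread counts are placeholders, and, as argued above, it only yields meaningful numbers when run on a machine separate from the ESB and the proxied service:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Toy load driver: N concurrent clients hammering one ESB endpoint.
// Run it on a machine separate from the ESB, or the client threads will
// compete with the ESB for cores and skew the numbers.
public class ConcurrentClientDriver {
    public static void main(String[] args) throws Exception {
        final int clients = 200;              // concurrent client threads
        final int requestsPerClient = 100;
        final String endpoint = "http://esb-host:8080/proxy"; // placeholder URL

        final AtomicLong completed = new AtomicLong();
        final CountDownLatch done = new CountDownLatch(clients);
        ExecutorService pool = Executors.newFixedThreadPool(clients);

        long start = System.nanoTime();
        for (int i = 0; i < clients; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        for (int r = 0; r < requestsPerClient; r++) {
                            HttpURLConnection conn =
                                    (HttpURLConnection) new URL(endpoint).openConnection();
                            if (conn.getResponseCode() == 200) {
                                completed.incrementAndGet();
                            }
                        }
                    } catch (Exception ignored) {
                        // a real driver would record errors, not swallow them
                    } finally {
                        done.countDown();
                    }
                }
            });
        }
        done.await();
        pool.shutdown();

        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        System.out.printf("%d requests in %.1f s (%.0f req/s)%n",
                completed.get(), seconds, completed.get() / seconds);
    }
}
```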

Conclusion

Any ESB vendor can create a scenario with conditions that beat all other ESBs. I know that because I can fine-tune JBoss ESB to perform better on the existing scenarios. But it does not prove much until there is a valid set of standard scenarios for performance measurement. Even if there were such a set, developers would have to take other aspects into account as well. Great performance numbers are useless if an ESB can only run simple scenarios, or if there is not a wide range of supported endpoints. Stability and the absence of memory leaks are also important if you do not want your server to crash every other day.

I have been dealing with ESB performance for almost two years now, and there are still more questions than answers. I would like to collect all the requirements for objective performance measurement scenarios for ESBs. I hope to hear your opinions here, because I do not know the answers to my questions yet. If there are scenarios widely used in production environments, then they are definitely good candidates for performance testing. I plan to publish my findings here later, based on your input.

Do you run performance tests of your SOA solutions? What scenarios do you use? Do you believe a standardized benchmark is a must? What is the right size of a test message? How many concurrent clients should be used?


Special thanks to Jiri Pechanec and Len DiMaggio for their help.
