Performance of SOAP/HTTP vs. SOAP/JMS


Today, SOA is the most prevalent enterprise architecture style. In most cases, services (the S in SOA) are realized using web services specifications. Web services are usually implemented with HTTP as the transport protocol, but other options exist. As new architecture styles such as EDA emerge, more message-friendly transport protocols appear. In Java environments, the most widely used is JMS. Despite the fact that the SOAP/JMS specification is still a draft, JMS is supported in all major (Java) WS stacks. IBM supports SOAP/JMS bindings in its implementation of the JAX-RPC framework and, more recently, in JAX-WS for WebSphere Application Server (WAS) 7. The main reason for choosing JMS is usually reliability, but there are other considerations when deciding between JMS and HTTP.

Reasons to go with HTTP:

  • Firewall friendly (web services exposed over the internet)
  • Supported on all platforms (easiest connectivity in B2B scenarios)
  • Clients can be simple and lightweight

Reasons to go with JMS:

  • Assured delivery and/or exactly-once delivery
  • Asynchronous support
  • Publish/subscribe
  • Queuing is better for achieving greater scalability and reliability
  • Handles temporary high load better
  • Large volumes of messages (EDA)
  • Better support in middleware software
  • Transaction boundaries

In SOA, a best practice is to use JMS internally (for clients/providers that can easily connect to the ESB) and HTTP for connecting to outside partners (over the internet).

Performance report

It is interesting to compare the performance of SOAP/HTTP and SOAP/JMS services. A few documents on this subject can be found. One of them, a research paper titled “EFFICIENCY OF SOAP VERSUS JMS”, is available at http://www.unf.edu/~ree/1024IC.pdf; it compares the performance of SOAP/HTTP to a JMS system (not SOAP/JMS). It will be interesting to see how SOAP/HTTP compares to SOAP/JMS using the same framework.

Test setup

I created a simple “Hello world” web service using JAX-RPC. The WSDL has both SOAP/HTTP and SOAP/JMS bindings and is deployed to WebSphere Application Server v6.1 Express (WAS). The server was installed in its most basic configuration (without the HTTP server and DB2). The WAS default messaging provider (also called SIBus) was used for JMS. As the implementation is the same for both bindings, we can measure communication overhead and compare the protocols. I set up only one machine, so we can't compare scalability; that can be read in the paper mentioned above. What we can do is test the protocols with varying numbers of concurrent requests and different message sizes. JMeter (http://jakarta.apache.org/jmeter/), a well-known load and performance testing tool, was used. The test was performed on a Lenovo R61 laptop with 3 GB of RAM. The server was installed in a virtual machine connected to the host over a 1 Gb private network. In all scenarios the processor never reached 100%, so network speed was the limiting factor.
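Because the same implementation backs both bindings, the service logic can stay trivial. A minimal sketch of such a “Hello world” implementation might look like the following (the class name, method name, and echo behavior are assumptions for illustration; the article does not show the actual code, and the JAX-RPC remote-interface plumbing is omitted):

```java
// Hypothetical sketch of the "Hello world" service implementation.
// In JAX-RPC the service endpoint interface would extend java.rmi.Remote
// and the method would declare java.rmi.RemoteException; omitted here.
public class HelloWorldService {

    // Echoes the request payload back, so both the SOAP/HTTP and SOAP/JMS
    // bindings exercise identical (trivial) business logic and any timing
    // difference reflects transport overhead only.
    public String helloWorld(String in) {
        return in;
    }
}
```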

Test configuration

To test the protocols with different message sizes, there are 4 different SOAP messages. Each “hello world” message is a simple message with “Hello” repeated x times, like: <q0:HelloWorldRequest><in>HelloHelloHelloHelloHello…</in></q0:HelloWorldRequest>. Message sizes range from half a kilobyte to 102 kilobytes. The same message was sent using HTTP and JMS 3 times, and response time was measured while varying the number of concurrent requests. The results of the test can be seen in the table below.
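Payloads like these are easy to generate by repeating the 5-character token “Hello” until the target size is reached. A rough sketch (the class and method names are made up for illustration):

```java
// Builds test payloads of roughly the desired size by repeating "Hello"
// inside the request element used by the "Hello world" service.
public class PayloadBuilder {

    // Wraps n repetitions of "Hello" (5 characters each) in the request element.
    static String build(int repetitions) {
        StringBuilder body = new StringBuilder(repetitions * 5);
        for (int i = 0; i < repetitions; i++) {
            body.append("Hello");
        }
        return "<q0:HelloWorldRequest><in>" + body + "</in></q0:HelloWorldRequest>";
    }

    public static void main(String[] args) {
        // Roughly 0.5 KB and 102 KB payloads (plus a small fixed wrapper)
        System.out.println(build(100).length());
        System.out.println(build(20_000).length());
    }
}
```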

Test results

We can look at the same results in graph form. Bars are colored according to message size, and the vertical axis shows the average response time. Every test was performed with the loop count set to 100 (except for the 102 KB messages, whose tests were stopped once the average time was stable).

1.    Single request

[Figure: single request results]

When conducting the test with only one request, we see some strange results. The differences between HTTP and JMS are not big in any scenario but the first. I believe the first request took 196 ms because JMeter needed to create an InitialContext and fetch the queue connection factory and queues from JNDI. However, the same explanation doesn't fit the other message sizes; JMeter probably had those resources in its internal cache.

2.    30 concurrent requests

[Figure: 30 concurrent requests results]

Testing performance with 30 concurrent users, we can't see big differences, except with really small messages, where HTTP is faster, and with 3.5 KB messages, where JMS was faster. It looks like the penalty for creating a JMS connection is higher than the penalty for creating a new HTTP request. The results with 3.5 KB messages are strange; it seems JMS likes messages of that size more than HTTP does.

3.    100 concurrent requests

[Figure: 100 concurrent requests results]

I didn't push the load over 100 concurrent requests because that was too much for the built-in HTTP server in WAS; when I tried, I started getting errors from the JMeter HTTP client after a short time. Again we can see that JMS is a little slower than HTTP for all messages except the 3.5 KB messages, where it is actually faster.

4. All together

[Figure: all results together]

Putting all the data on one graph, we can see that there are no big differences in speed between HTTP and JMS. Choosing one or the other should be decided based on non-functional requirements other than performance.


Miroslav Rešetar


