Message Throttling with Mule ESB
While implementing the Mule ESB I ran into the following scenario: messages were picked up from a queue, transformed, offered to a web service, and the response was transformed and put on a response queue.

Not an unusual use case in the world of integration. One thing that made this one special was the (lack of) performance of the web service. As soon as we started to send messages to the service via the ESB, the response time for each message increased significantly. Investigation showed that the web service had serious issues when concurrent messages were sent to it, and that is exactly what the Mule ESB does by default: it starts off with a pool of threads to process the messages as quickly as possible. Until the issue with the web service is solved, I decided to add throttling functionality to my flow so I could manage the number of concurrent calls to the web service.
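To give an idea of the kind of flow described above, here is a minimal sketch (namespace declarations omitted). The queue names, the service address and the transformer references are hypothetical placeholders, not the actual production configuration:

<!-- Sketch only: queue names, service address and transformer beans are placeholders. -->
<jms:activemq-connector name="jmsConnector" brokerURL="tcp://localhost:61616" />

<flow name="serviceFlow">
    <!-- pick up the request message from a queue -->
    <jms:inbound-endpoint queue="service.request" connector-ref="jmsConnector" />
    <!-- transform the incoming message into the web service request format -->
    <transformer ref="toServiceRequest" />
    <!-- call the (slow) web service -->
    <outbound-endpoint address="http://example.com/slowService" exchange-pattern="request-response" />
    <!-- transform the response and put it on the response queue -->
    <transformer ref="fromServiceResponse" />
    <jms:outbound-endpoint queue="service.response" connector-ref="jmsConnector" />
</flow>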
Anyway, to add the throttling there are, I think, two ways. One is to add some delay before a message is delivered; this approach is described here. Although I haven't tried it, I expect it would lead to the situation where 16 concurrent web service calls are delayed by 10 seconds and then still fired all at once.
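For illustration, such a delay could be introduced with a scripting component that simply sleeps before passing the message on. This is a minimal sketch, assuming the Mule 3 scripting module (xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting") is available; the 10-second value is only an example:

<!-- Sketch of the delay approach: every message is held for 10 seconds,
     but messages handled by different threads still fire concurrently afterwards. -->
<scripting:component>
    <scripting:script engine="groovy">
        Thread.sleep(10000)
        return message.payload
    </scripting:script>
</scripting:component>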
So what I was looking for was a way to configure the number of concurrent threads that call the web service. Luckily this has been greatly simplified in Mule 3 compared to Mule 2, as you can read here. To mimic my situation I created the following Mule flow:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
      xmlns:test="http://www.mulesoft.org/schema/mule/test"
      version="EE-3.4.1"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd
        http://www.mulesoft.org/schema/mule/test http://www.mulesoft.org/schema/mule/test/current/mule-test.xsd
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd">

    <vm:endpoint name="input" path="input" exchange-pattern="one-way" />
    <vm:endpoint name="ws-call" path="ws-call" exchange-pattern="request-response" />
    <vm:endpoint name="output" path="output" exchange-pattern="one-way" />

    <flow name="testFlow" processingStrategy="asynchronous">
        <inbound-endpoint ref="input" />
        <outbound-endpoint ref="ws-call" />
        <outbound-endpoint ref="output" />
    </flow>

    <flow name="wsFlow" processingStrategy="synchronous">
        <inbound-endpoint ref="ws-call" />
        <append-string-transformer message=" added to the payload" />
        <test:component waitTime="2000" />
    </flow>
</mule>
As you can see, I have done nothing special related to threads, processing, etc. The test class to run this flow looks like this:
package net.pascalalma.mule;

import org.junit.Test;
import org.mule.DefaultMuleMessage;
import org.mule.api.MuleMessage;
import org.mule.module.client.MuleClient;
import org.mule.tck.junit4.FunctionalTestCase;

import java.util.Date;

public class ThrottleTest extends FunctionalTestCase {

    @Test
    public void simplePassThrough() throws Exception {
        MuleClient client = new MuleClient(muleContext);
        long start = new Date().getTime();
        // put 30 messages on the input queue
        for (int i = 0; i < 30; i++) {
            MuleMessage inMsg = new DefaultMuleMessage("Message " + i, muleContext);
            client.dispatch("input", inMsg);
        }
        // read the results from the output queue and print the elapsed time for each of them
        MuleMessage result = client.request("output", 3000);
        while (result != null) {
            result = client.request("output", 3000);
            long end = new Date().getTime();
            System.out.println("message took : " + (end - start) / 1000);
        }
        long end = new Date().getTime();
        System.out.println("total service took : " + (end - start) / 1000);
    }

    @Override
    protected String getConfigResources() {
        return "mule-config.xml";
    }
}
This generates the following output:
================================================================================
= Testing: simplePassThrough                                                   =
================================================================================
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 3
message took : 5
message took : 5
message took : 5
message took : 5
message took : 5
message took : 5
message took : 5
message took : 7
message took : 7
message took : 7
message took : 7
message took : 7
message took : 7
total service took : 10
As you can see, a lot of messages are processed concurrently, in 'batches'.
To get control over the number of threads you can do (at least) two things. The first is to limit the number of threads allowed for the flow. This can only be done if the processing strategy of the flow is asynchronous; in that case you can define your own processing strategy like this:
...
<queued-asynchronous-processing-strategy name="allow2Threads"
    maxThreads="2" poolExhaustedAction="RUN" />

<flow name="testFlow" processingStrategy="allow2Threads">
...
The rest of the config is the same as before. If we run this we get the following output (for 10 messages):
================================================================================
= Testing: simplePassThrough                                                   =
================================================================================
message took : 3
message took : 3
message took : 5
message took : 5
message took : 7
message took : 9
message took : 9
message took : 11
message took : 11
message took : 13
total service took : 16

Process finished with exit code 0
As we can see, there are now at most 2 concurrent messages.
In case we need a synchronous flow (when using transactions, for instance), we cannot set maxThreads on a processing strategy, but we can restrict the number of receiver threads on the connector, as I show here:
...
<vm:connector name="myConnector">
    <receiver-threading-profile doThreading="true"
        maxThreadsActive="2" poolExhaustedAction="RUN" />
</vm:connector>

<flow name="testFlow" processingStrategy="synchronous">
...
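With a single VM connector defined, the VM endpoints pick it up automatically. If more than one connector is defined, the endpoint can be bound to it explicitly; a small sketch (the connector name is the hypothetical one from the snippet above):

<vm:endpoint name="input" path="input" exchange-pattern="one-way" connector-ref="myConnector" />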
If we run this (for 10 messages) we get the following output:
================================================================================
= Testing: simplePassThrough                                                   =
================================================================================
message took : 2
message took : 2
message took : 4
message took : 4
message took : 6
message took : 6
message took : 9
message took : 9
message took : 11
message took : 11
total service took : 14

We get a similar result as in the previous case, but now the whole flow is processed synchronously.
With this test case you are able to try it yourself and see what best fits your situation. You can also play with the 'poolExhaustedAction' and other attributes.
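For example, a receiver threading profile exposes a few more knobs than just maxThreadsActive. The values below are only illustrative settings to experiment with, not recommendations:

<!-- Illustrative values only: tune these for your own situation. -->
<vm:connector name="myConnector">
    <receiver-threading-profile doThreading="true"
                                maxThreadsActive="2"
                                maxThreadsIdle="1"
                                maxBufferSize="10"
                                threadWaitTimeout="30000"
                                poolExhaustedAction="WAIT" />
</vm:connector>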