Performance Tuning in Mule ESB
By introducing a VM connector with a tuned threading configuration, you can reduce processing time by more than 60%.
Performance plays a major role throughout the software lifecycle, and getting it right is not easy. It's highly recommended to follow coding standards while developing your application.
Keep two things in mind while writing code: nonfunctional requirements (NFRs), which largely determine your application's design, and message load, since Mule applications may experience performance issues when processing a large number of messages or messages that are large in size.
Some design recommendations for Mule applications:
Prefer flow references over VM endpoints.
Use connection pooling for connectors.
Use DataWeave for transformations.
Avoid session variables in applications. Session variables are serialized and deserialized, which negatively impacts performance.
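To illustrate the first recommendation: a flow reference invokes reusable logic in the same thread, with no serialization, whereas a VM endpoint goes through a queue. Below is a minimal sketch of the flow-reference approach (the flow and subflow names are illustrative, not from a real application):

```xml
<flow name="mainFlow">
    <!-- ... message source and processing steps ... -->
    <!-- Invoke shared logic in-process, in the same thread -->
    <flow-ref name="commonLogic" doc:name="Flow Reference"/>
</flow>

<sub-flow name="commonLogic">
    <!-- Reusable processing shared by several flows -->
    <logger level="INFO" doc:name="Logger" message="#[message.payload]"/>
</sub-flow>
```

The same logic routed through a VM endpoint would pay queueing and (for persistent queues) serialization overhead on every call.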
Consider the sample flow below, which reads files from one directory and copies them to another using the file connector:
<flow name="Fileprocess">
    <file:inbound-endpoint path="D:\test\input" responseTimeout="10000" doc:name="File"/>
    <logger level="INFO" doc:name="Logger" message="#[message.payload]"/>
    <file:outbound-endpoint path="D:\test\output" responseTimeout="10000" doc:name="File"/>
</flow>
In the output, it took 46 seconds to process 195 files!
This flow reads files from the source directory and copies them to the target directory one by one. Now imagine processing millions of files; the time required would be enormous.
We can reduce the processing time by introducing a VM connector with a tuned threading configuration. Below is the updated flow:
<vm:connector name="vmtest" numberOfConcurrentTransactedReceivers="25">
    <receiver-threading-profile maxThreadsActive="100" maxThreadsIdle="25"/>
    <dispatcher-threading-profile maxThreadsActive="100" maxThreadsIdle="25"/>
</vm:connector>

<flow name="Fileprocess">
    <file:inbound-endpoint path="D:\test\input" responseTimeout="10000" doc:name="File"/>
    <vm:outbound-endpoint exchange-pattern="one-way" doc:name="VM" path="in" connector-ref="vmtest"/>
</flow>

<!-- A second flow consumes from the VM queue, so writes happen on the receiver threads -->
<flow name="Filewrite">
    <vm:inbound-endpoint exchange-pattern="one-way" doc:name="VM" path="in" connector-ref="vmtest"/>
    <logger level="INFO" doc:name="Logger" message="#[message.payload]"/>
    <file:outbound-endpoint path="D:\test\output" responseTimeout="10000" doc:name="File"/>
</flow>
In the output, it took 18 seconds to process 195 files!
That cuts processing time from 46 seconds to 18 seconds, an improvement of roughly 60%.
Both flows produce the same output, but the processing time differs.
Mule connectors support dispatcher and receiver threading profiles:
Dispatcher: threads used to send messages out.
Receiver: threads used by the message source to receive incoming messages.
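As a sketch, the two profiles can be tuned independently on a connector. The thread counts below are illustrative, not recommendations; the right values depend on your workload and hardware:

```xml
<vm:connector name="tunedVm" numberOfConcurrentTransactedReceivers="25">
    <!-- Receiver side: threads that pull messages off the queue (the message source).
         maxThreadsActive caps concurrency; maxThreadsIdle is how many threads
         stay alive when there is no work. -->
    <receiver-threading-profile maxThreadsActive="100" maxThreadsIdle="25"/>
    <!-- Dispatcher side: threads that send messages out to the queue -->
    <dispatcher-threading-profile maxThreadsActive="100" maxThreadsIdle="25"/>
</vm:connector>
```

Raising maxThreadsActive beyond what downstream resources (disk, database, remote APIs) can absorb will not help, so tune these values against measured throughput rather than guessing.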
Feedback is welcome!