IBM IIB (Integration Bus) Best Practices
Explore IBM IIB best practices.
We can group the best practices to be followed into two major categories:
- Designer perspective
- Developer perspective
Designer Perspective
- First things first, a designer should have a firm understanding of the Functional Specification. The Functional Specification may still undergo a few minor amendments while the Technical Specification is being written; however, it is expected to have defined all the messages, components, and processes required for the interface before Technical Design begins.
- When it comes to flow design, most organizations already have reusable common flows (sub-flows) packaged as libraries for auditing and exception handling. Beyond that, I like to identify other processes that are common across projects and build independent common flows for them; you can think of this as a microservice-style architecture.
Example 1
Let’s say you have a few projects (or the same project rolled out across multiple countries) where the final backend is a database and your IIB flows have to make a series of SQL inserts.
For this, we can create a common flow that accepts the rows to be inserted as input from the main flow and performs all the inserts in one go. This way, we can shorten the roll-out period for projects that follow the same pattern.
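As a rough sketch (the schema, table, and element names below are placeholders, not from any real project), the common flow’s Compute node could walk the rows passed in by the main flow and issue one parameterized INSERT per row, all inside the same unit of work so that they commit together:

```
-- Sketch only: schema, table, and element names are placeholders.
-- Assumes the main flow passes repeating Row elements with Name and Amount children.
CREATE COMPUTE MODULE BulkInsert_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    DECLARE rowRef REFERENCE TO InputRoot.XMLNSC.Rows;
    MOVE rowRef FIRSTCHILD NAME 'Row';
    WHILE LASTMOVE(rowRef) DO
      -- One parameterized insert per row against the node's configured data source;
      -- all inserts share the flow's unit of work and commit together
      INSERT INTO Database.MYSCHEMA.ORDERS (ORDER_NAME, AMOUNT)
        VALUES (rowRef.Name, rowRef.Amount);
      MOVE rowRef NEXTSIBLING REPEAT TYPE NAME;
    END WHILE;
    SET OutputRoot.XMLNSC.Result.Status = 'OK';
    RETURN TRUE;
  END;
END MODULE;
```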
Example 2
I worked on a project where we used Salesforce as the CRM and had many services that needed to connect to it. Each of them first had to call Salesforce to fetch a session ID and then make another call for the actual business transaction.
Here there is scope to turn the session ID web service call into an independent common flow.
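As a very rough illustration of such a common flow, the Compute node below builds the login request and overrides the URL of the HTTP Request node that follows it. The endpoint URL, JSON field names, and hard-coded credentials are purely illustrative assumptions, not the actual Salesforce API:

```
-- Sketch of the request-building Compute node in the shared "get session" sub-flow.
-- The Compute node's "Compute mode" must include LocalEnvironment for the URL override.
CREATE COMPUTE MODULE GetSession_BuildLoginRequest
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    SET OutputLocalEnvironment = InputLocalEnvironment;
    -- Override the URL used by the HTTP Request node that performs the login call
    SET OutputLocalEnvironment.Destination.HTTP.RequestURL =
        'https://login.example.com/services/session';
    -- In practice, credentials should come from a configurable service or
    -- user-defined properties rather than being hard-coded
    SET OutputRoot.JSON.Data.username = 'integration-user';
    SET OutputRoot.JSON.Data.password = 'changeit';
    RETURN TRUE;
  END;
END MODULE;
```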
- IIB 10 brings cool new features with better performance and ease of development, such as:
Shared libraries
Callable flows
REST API projects (an API can be generated from a Swagger document)
A web UI for transaction monitoring
MQTT nodes for pub/sub, even without a local MQ queue manager
Aggregation without MQ, available from IIB 10.0.0.14
- Having a few standard patterns across the organization makes it easier to reuse code and modules.
Developer Perspective
Let me start with the tasks that cost the most CPU and memory:
- Message parsing
- Navigating a tree in code
- Copying the message tree from one node to the next
- Logic in code
- Resource access (DB, files, HTTP requests, etc.)
- Reduce memory and CPU utilization by using on-demand and opaque parsing; in general, parse as little of the message as possible.
Persistent messages require additional logging and commit processing to complete their unit of work, and so take more processing time. For non-critical messages such as a balance inquiry, which carry an expiry time anyway, there is no need for persistence.
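For example, a Compute node in front of the MQ Output node can mark such a message as non-persistent and give it a short expiry (the expiry value below is illustrative, and it assumes the MQ Output node is configured to honour the persistence set in the MQMD):

```
-- Minimal sketch: send an inquiry-style message as non-persistent with an expiry
CREATE COMPUTE MODULE SetNonPersistent_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    SET OutputRoot = InputRoot;
    -- Non-persistent messages are not logged by the queue manager
    SET OutputRoot.MQMD.Persistence = MQPER_NOT_PERSISTENT;
    -- MQMD.Expiry is in tenths of a second; 600 = 60 seconds (illustrative value)
    SET OutputRoot.MQMD.Expiry = 600;
    RETURN TRUE;
  END;
END MODULE;
```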
- Use ESQL reference variables to navigate the message tree (see the ESQL sketch further below).
- Avoid having too many Compute nodes in a flow; this reduces the number of times the message tree is copied.
- Avoid the following in ESQL code for better performance (the sketch after this list shows the alternatives):
EVAL
CARDINALITY in loop conditions: loop with LASTMOVE and a reference variable instead
Long chains of IF ELSE: use CASE instead
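A minimal ESQL sketch pulling these points together (the Order/Item message structure and field names are assumptions): navigate with a reference variable, loop with LASTMOVE instead of counting up to CARDINALITY, and map values with a CASE expression instead of an IF ELSE chain:

```
-- Sketch only: the Order/Item structure and field names are made up.
CREATE COMPUTE MODULE ProcessItems_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    SET OutputRoot = InputRoot;
    DECLARE total DECIMAL 0;
    -- Reference variable: navigate once, then use cheap relative moves
    DECLARE itemRef REFERENCE TO OutputRoot.XMLNSC.Order;
    MOVE itemRef FIRSTCHILD NAME 'Item';
    -- Loop with LASTMOVE rather than indexing up to CARDINALITY(...)
    WHILE LASTMOVE(itemRef) DO
      SET total = total + CAST(itemRef.Price AS DECIMAL);
      -- CASE instead of a chain of IF ... ELSEIF ... ELSE
      SET itemRef.StatusText =
        CASE itemRef.Status
          WHEN 'N' THEN 'New'
          WHEN 'B' THEN 'BackOrdered'
          ELSE 'Unknown'
        END;
      MOVE itemRef NEXTSIBLING REPEAT TYPE NAME;
    END WHILE;
    SET OutputRoot.XMLNSC.Order.Total = total;
    RETURN TRUE;
  END;
END MODULE;
```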
- Ensure transaction handling is maintained in each and every flow.
- As mentioned above, accessing resources such as databases and files costs time and CPU. A well-designed flow reduces the number of times these resources are accessed.
- Configure a back-out queue for every input queue. Otherwise, failed messages all land on the dead-letter queue, and it becomes difficult to identify each service’s messages for reprocessing.
- The global cache helps share frequently needed data across all integration servers.
- Have a proper, detailed plan for AVP verification after restarting integration nodes.