Two-Phase Memory Management/Scalability Solution for Stateful Business Processes


The popularity of web services and the industry's large-scale adoption of SOA have led to increasing use of business processes for integration and orchestration. Such processes, in the context of workflows, B2B, B2C, or EAI, involve complex, stateful interactions that can span organizational and functional boundaries. A slow or unresponsive interaction can cause instances to pile up, and such a runaway process can max out the available heap space. This degrades performance, destabilizes the system, and may ultimately lead to out-of-memory errors and crashes.

Scalability Challenges

Is your business process running out of memory or failing to scale? Memory requirements depend on concurrent sessions, process variables, flow complexity, timers, and so on. Compounding this, in loosely coupled, stateful, long-running interactions, processes cannot make any assumptions about response times from interacting partners. So how do you determine memory resources for your production loads in advance? How do you scale when you are limited by memory? How do you design a scalability solution for processes that requires little configuration and is self-managing? If you are confronted with these questions, you are not alone. While working on the BPEL Service Engine, part of the open source project Open ESB (https://open-esb.dev.java.net/) managed by Sun Microsystems (now Oracle), we set out to provide built-in, best-effort support to help our customers overcome some of these challenges.

Scaling Business Processes

Various strategies can be employed for scaling business processes. Traditionally, these are broadly categorized into horizontal and vertical scalability. Horizontal scalability is achieved by adding more hardware nodes to the system, whereas vertical scalability means adding resources, such as memory or processing power, to a single system. Each approach brings its own overhead and implementation complexity.

Design options for horizontal scaling, also referred to as clustering or high availability, range from the traditional approach of an intelligent message router to that of instance routing between nodes, in which instances rather than messages are moved between engines to handle stateful interactions spanning multiple message exchanges.

While vertical scalability is often employed for scaling synchronous processes, for large systems using asynchronous communication this solution is far from optimal.

This article presents the self-managing memory management solution used by the BPEL Service Engine. It is designed with specific domain knowledge, employing a multi-phase passivation technique that requires no reconfiguration by the user. It incorporates our experience improving the scalability of the BPEL Engine (based on the JBI specification) for project Open ESB. I will also present results of our test simulations showing the realized benefits of this solution, and touch upon the horizontal scalability approach used by the BPEL Service Engine.


The terms performance and scalability are often used interchangeably, but they are distinct. I am quoting the definitions of performance and scalability from the book Pro Java EE 5 Performance Management and Optimization by Steven Haines.

Performance: the speed at which a single request can be executed. Scalability: the ability of a request to maintain its performance under increasing load.

The book uses a search engine example to explain the above. If the performance requirement of the search engine is to produce a valid response in, say, 1 second, scalability is the ability to maintain that 1-second response as load increases. Performance targets (TPS and response times) are defined in SLAs (service level agreements). Capacity assessment is performed in the production staging environment once the application can sustain expected usage through a performance load test, and it establishes, among other things:

  • the response time at expected usage
  • the usage at which the first request exceeds its SLA
  • the usage at which the application reaches its saturation point
  • the degradation model from the first missed SLA to the application's saturation point
While the above definitions are easiest to understand in terms of concurrent users accessing a web application, in BPEL processing engine terms what we are trying to improve is all of the above (response time, TPS, and capacity) for a very common use case: long-running business processes. Such instances take a long time to complete (because they are waiting for a correlated receive, a response from a partner service, and so on), so the effective memory available can be increased if the in-memory objects of long-running instances are persisted and released. These objects can then be recreated whenever the process instance is ready to execute further. For such business processes we should be able to derive the following benefits:

  • Increased performance: at full capacity usage, less memory means fewer full garbage collections and better GC throughput
  • Increased capacity: the ability to handle more instances before reaching saturation limits
  • Better response time/latency
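The trigger for reclaiming that memory is a check on available heap. A minimal sketch of such a check in plain Java is shown below; the class name and the 0.25 threshold are illustrative stand-ins for the engine's configurable limit, not its actual code.

```java
// Sketch of a heap-availability check that could trigger passivation.
// The threshold ratio is a configurable, illustrative value.
public class HeapMonitor {
    private final double minFreeRatio;

    public HeapMonitor(double minFreeRatio) {
        this.minFreeRatio = minFreeRatio;
    }

    /** Returns true when free heap falls below the configured ratio. */
    public boolean memoryLow() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                       // the -Xmx ceiling
        long used = rt.totalMemory() - rt.freeMemory();  // currently occupied
        long free = max - used;                          // still claimable
        return (double) free / max < minFreeRatio;
    }

    public static void main(String[] args) {
        HeapMonitor monitor = new HeapMonitor(0.25);
        System.out.println("memory low: " + monitor.memoryLow());
    }
}
```

In a real engine this check would run during request processing, as described later, rather than in a `main` method.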

    JBI/Open ESB Quick Overview

    JBI (JSR 208) is an attempt to solve integration challenges using a declarative model whereby systems are exposed as web services (defined by WSDLs), communicating with each other using a standard message format. This prevents vendor lock-in and promotes loose coupling, flexibility, and reuse. JBI defines standards for plugging in service engines and binding components.

    A service engine is typically where business logic is implemented; it could be a BPEL or workflow engine, or a transformation engine such as XSLT. Binding components deal with protocol-specific communication and act as entry and exit points. Components use a standardized message format (called a Normalized Message) for inter-component communication over the service bus. Each component listens to a dedicated queue (called a Delivery Channel), and it is the responsibility of the service bus to route each message to the correct destination queue. The JBI specification does not mandate a threading model; each component is free to implement its own. All messages are exchanged asynchronously. An asynchronous communication model, even for synchronous requests, allows for loose coupling and robust systems with simpler designs, and also enables higher-order features such as persistence and recovery.

    BPEL Service Engine Horizontal Scalability Architecture

    The clustering implementation in the Open ESB BPEL Service Engine does not use an intelligent (also called sticky) router to keep track of correlated messages. The message router routes messages randomly to any engine in the cluster; if a message ends up on the wrong engine, the instance (upon reaching the execution point where it needs the correlated message) is removed from memory on its current engine and recreated on the engine holding the message. In effect, instances are routed across engines rather than messages. This dehydrate-and-recreate implementation leverages the persistence/recovery framework, and hence adds only minimal complexity.
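The mechanics can be sketched in a few lines of plain Java. Everything here is a simplification for illustration: the class names are hypothetical, a `Map` stands in for the shared persistence database, and the direct `dehydrate()` call abstracts away the engines' database-mediated coordination.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of instance routing between cluster engines.
public class InstanceMigration {
    // Stands in for the shared persistence store: correlation id -> state.
    static final Map<String, String> database = new HashMap<>();

    static class Engine {
        final Map<String, String> liveInstances = new HashMap<>();

        // A correlated message arrives; if the instance lives on another
        // engine, the owner dehydrates it and this engine rehydrates it.
        void onCorrelatedMessage(String correlationId, Engine owner) {
            if (!liveInstances.containsKey(correlationId)) {
                owner.dehydrate(correlationId);
                liveInstances.put(correlationId, database.remove(correlationId));
            }
            // ...continue executing the instance with the new message...
        }

        // Persist the instance state and release the in-memory copy.
        void dehydrate(String correlationId) {
            String state = liveInstances.remove(correlationId);
            if (state != null) {
                database.put(correlationId, state);
            }
        }
    }

    public static void main(String[] args) {
        Engine a = new Engine();
        Engine b = new Engine();
        a.liveInstances.put("order-42", "waiting-for-callback");
        // The router delivered order-42's callback to engine B instead of A:
        b.onCorrelatedMessage("order-42", a);
        System.out.println("A holds it: " + a.liveInstances.containsKey("order-42"));
        System.out.println("B holds it: " + b.liveInstances.containsKey("order-42"));
    }
}
```

The point of the design is visible even in this toy: no router needs to remember where an instance lives, because the instance follows the message through the shared store.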

    Passivation strategy for Scalability & Availability

    The Open ESB BPEL Engine's scalability implementation also leverages the persistence/recovery framework. As a performance optimization, the BPEL Engine persists state only at certain points during execution (after non-replayable activities, such as a partner invoke), storing the instance state, including variables, in the database. For performance reasons, persistence can also be turned off entirely.

    With persistence turned on, the engine checks the available heap during request processing. Should available memory fall below a certain (configurable) level due to inactive instances (waiting on a partner response, a correlating/callback message, or a timer), the running instances are scanned; instances that have been waiting longer than a certain (configurable) period are picked, and their in-memory variables are swapped out (de-referenced for garbage collection). Note that since a variable might be dirty (not yet persisted), additional steps are needed to save its state to the database. Such out-of-band persisted variables are specially marked so that in the event of a crash, recovery starts from the last good state (including variable state). If the first pass does not raise the available heap above the defined level, the swap-out time interval is halved and the process is repeated recursively.
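The scan-and-halve loop just described might look like the following sketch. The `Instance` and `HeapCheck` types are simplified placeholders, not the engine's real classes, and the out-of-band persistence of dirty variables is reduced to a comment.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of phase-1 variable passivation: scan waiting
// instances, swap out the variables of the longest-idle ones, and
// halve the idle threshold until enough heap has been reclaimed.
public class VariablePassivator {
    // Placeholder for a process instance; the real engine tracks far more.
    static class Instance {
        long idleMillis;        // how long the instance has been waiting
        boolean passivated;     // variables persisted and de-referenced?
        Instance(long idleMillis) { this.idleMillis = idleMillis; }
    }

    interface HeapCheck { boolean low(); }

    /**
     * Passivates variables of instances idle longer than the threshold,
     * halving the threshold and rescanning while memory stays low.
     * Returns the number of instances passivated.
     */
    static int passivate(List<Instance> instances, long idleThresholdMillis,
                         long minThresholdMillis, HeapCheck heap) {
        int count = 0;
        long threshold = idleThresholdMillis;
        while (heap.low() && threshold >= minThresholdMillis) {
            for (Instance inst : instances) {
                if (!inst.passivated && inst.idleMillis > threshold) {
                    // Real engine: persist dirty variables out-of-band,
                    // mark them, then null the in-memory references.
                    inst.passivated = true;
                    count++;
                }
            }
            threshold /= 2;   // be more aggressive on the next pass
        }
        return count;
    }

    public static void main(String[] args) {
        List<Instance> pool = new ArrayList<>();
        pool.add(new Instance(10_000));
        pool.add(new Instance(3_000));
        pool.add(new Instance(500));
        // Pretend heap stays low until two instances have been swapped out.
        HeapCheck heap = () -> pool.stream().filter(i -> i.passivated).count() < 2;
        System.out.println("instances passivated: " + passivate(pool, 8_000, 1_000, heap));
    }
}
```

In the example run, the two longest-idle instances are passivated (the first at the initial 8-second threshold, the second only after the threshold has been halved twice), while the recently active one is left alone.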

    For processes dealing with small messages, releasing the memory held by variables may not yield appreciable gains. In a nutshell, the solution kicks in when available heap falls to certain levels and works in phases based on the severity of memory pressure:

  • Phase 1 – Variable Passivation (Figure 2)
  • Phase 2 – Instance Passivation (Figure 3)
  • Uses database for persistence of in-memory objects
  • Works for Asynchronous Business Processes only
    (See Figures 1, 2, and 3.)

    Tests Performed

    The following are results of actual tests performed on a representative process with 1 MB and 10 KB message sizes. The JConsole snapshots (below) capture the memory growth and the benefit of the two-phase scalability solution. Note that variable passivation helps most when the message size is large, whereas instance passivation helps when messages are small but the number of instances is large.
    Test 1: Large Messages (1 MB)
    Without Two-Phase Scalability Solution
  • Heap maxed out at 60 messages
  • Total heap: 512 MB
  • Message size: 1 MB
  • Figure 4
    Results with Variable Passivation
  • Total messages processed: 890+
  • Figure 5
    Test 2: Small Messages (10 KB)
    Without Two-Phase Scalability Solution
  • Heap maxed out at 3,000 messages
  • Total heap: 512 MB
  • Figure 6
    Results with Variable Passivation Only (Phase 1)
  • Total messages processed: 7,600
  • TPS dropped/processing stopped
  • Figure 7
    Results with Instance Passivation (Phase 2)
  • Total messages processed: 15,000+
  • TPS consistent
  • Figure 8

    Properly Sized Application

    This means providing enough hardware for the expected concurrent execution of instances at the representative message size for all business processes. The calculation should also account for the number of concurrent variables in the business process definition(s). For example, for a single business process, if the expected concurrency is 1,000 instances, each instance carries a message of 100 KB*, and the process defines 10 distinct variables, then a simple calculation yields 1 GB of heap space. Some buffer should also be provided, as appropriate, to absorb short peaks. (* Note that this must be multiplied by a factor for the in-memory DOM size of the XML message.)
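That back-of-the-envelope arithmetic can be written out as below. All inputs are illustrative, and the DOM expansion factor of 5 is a placeholder assumption; measure the actual factor for your parser and payloads.

```java
// Rough heap sizing for concurrent BPEL instances. The domFactor
// models in-memory DOM expansion of the XML message and is an
// assumption here; it must be measured for real payloads.
public class HeapSizing {
    static long requiredHeapBytes(int concurrentInstances, long messageBytes,
                                  int variablesPerInstance, double domFactor) {
        return (long) (concurrentInstances * messageBytes * variablesPerInstance * domFactor);
    }

    public static void main(String[] args) {
        double gb = 1024.0 * 1024 * 1024;
        // 1,000 instances x 100 KB message x 10 variables, raw:
        long raw = requiredHeapBytes(1000, 100 * 1024L, 10, 1.0);
        System.out.printf("raw estimate: %.2f GB%n", raw / gb);
        // Same load with an assumed 5x DOM expansion factor:
        long withDom = requiredHeapBytes(1000, 100 * 1024L, 10, 5.0);
        System.out.printf("with DOM factor 5: %.2f GB%n", withDom / gb);
    }
}
```

The raw figure reproduces the roughly 1 GB from the example above; the DOM-adjusted figure shows why the footnote's multiplier matters so much in practice.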


    For stateful services/components such as the BPEL Engine, incorporating a two-phase passivation strategy can help scale the system.

    It is worth reiterating that the solution outlined above applies to processes involved in asynchronous, long-running interactions; it offers no benefit for short-running synchronous processes. For short-running processes, scalability is achieved either by adding hardware (vertical) or by clustering (horizontal).

    In addition, when designing scalable business processes, consideration must be given to the process definition, message size, and expected peaks for proper pre-production hardware sizing.


