Scaling Services with Software Pipelines: A Look at Service Design with the SPOC Methodology
As many architects are well aware, SOA offers incredible flexibility for business applications. However, this flexibility often comes at the cost of performance, especially when compared to more monolithic architectures. Software pipelines offers a new look at parallel computing, specifically designed for service-oriented processing requirements, with an emphasis on increasing scalability and cleanly separating the design and development of business service components from deployment performance.
This article, authored by Cory Isaacson and published in The SOA Magazine (www.soamag.com), includes excerpts from the new book "Software Pipelines and SOA: Releasing the Power of Multicore Processing" [REF-1] and covers some of the design aspects of pipeline technology, including how the architecture can be applied to an example banking application. It also provides a brief tour of the companion pipelines methodology, showing how to "do the math" to estimate, predict and optimize pipelines during the design phase.
Introduction
Software pipelines is a new concept for enabling scalable processing for service-oriented applications. The fundamentals of this technology platform have been covered in previous articles [REF-2]. This time around we'll be providing a brief tour of pipeline design elements that are part of the companion methodology known as the Software Pipelines Optimization Cycle (SPOC). Specifically, we will focus on the pipeline design stage of the SPOC cycle.
SPOC provides an organized approach to optimization of service-oriented applications. SPOC is designed to be complementary to other development methodologies that you may have in place, concentrating specifically on how to implement software pipelines for your business applications. The techniques for optimization presented here are based on Pipelines Law, a straightforward mathematical basis for estimating, predicting and maximizing the performance of service-oriented applications. For illustration purposes, the SPOC examples use a fictional company: the Pipelines Bank Corporation (PBCOR).
Designing Pipelines
During the pipelines design stage we aim to form the detailed technical foundation for a given pipelines implementation, which makes this the most important phase of the entire SPOC process. This step is where you put pipelines theory and Pipelines Law into action and apply the technology directly to the application. As such, it is crucial to "do the math" so you can successfully define exactly how your implementation will operate.
Let's start with the report overview for Pipelines Bank Corporation (PBCOR), our fictional example used to illustrate the overall process and the sequence of the sub-steps. This report is part of the sample SPOC output used in an actual project.
The purpose of Pipelines Design is to determine the best method for implementing the pipelines architecture. Figure 1 provides a cycle diagram that outlines the sub-steps for this phase of SPOC.
Figure 1: Cycle diagram that outlines the sub-steps for this phase of SPOC
In this step our main goal is to design for performance and scalability, for which we need parallel computing. However, we are dealing with business applications. Business applications present unique challenges when compared to other types of computing:
- Business applications must often process transactions in a certain sequence.
- Short-lived transactions are the backbone of most business applications and must often be completed in milliseconds. This is very different from typical computation-intensive parallel applications, which often deal in processes that require hours (or even days) to complete.
- Business applications must be able to change rapidly to accommodate new market and economic conditions. Therefore, they require a higher level of flexibility than other computing applications.
There are many approaches to parallel computing, such as multithreaded application servers, grid computing, and clustering. However, these mechanical "automatic" methods are not suited to the unique requirements of business applications outlined in the preceding list. To meet those requirements, we will use the software pipelines architecture.
Another goal for our team is to be able to predict ultimate application performance during the design phase. All too often, organizations optimize their applications in crisis mode, addressing one bottleneck at a time, only to be faced with another one somewhere else in the system. To address this issue, we use the SPOC methodology, which enables us to plan for the future as we go through the design process.
Let's now explore some of the steps involved with the designing pipelines stage that is part of the SPOC cycle:
Define Service Flow Design
Software pipelines is a service-oriented approach to business applications. To explain how this works, we should first clarify what we mean by "service-oriented." Our definition of a service goes well beyond the paradigm for a typical Web service:
A service is any discrete software component that can be independently invoked by whatever method is appropriate for a given application. It uses messages for its input and output. When a service component receives a message, it consumes and processes the message, then invokes the next service in the flow, to which it sends the processed message. You can run a service locally on a single server by using a direct method call, or across a network by using one of a variety of protocols (Web services, RMI, CORBA, sockets, etc.).
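To make this definition concrete, here is a minimal sketch of such a service component in Java. The names (Service, ForwardingService) are our own illustration, not an API from the book or SPOC:

```java
/**
 * Minimal sketch of the service contract described above.
 * Names are illustrative, not a published API.
 */
public interface Service {
    /** Consume and process an input message. */
    void process(Object message);
}

/** A service that processes its input message, then invokes the next service. */
class ForwardingService implements Service {
    private final Service next; // next service in the flow; null if this is the last

    ForwardingService(Service next) {
        this.next = next;
    }

    @Override
    public void process(Object message) {
        Object processed = message; // ...consume and process the message here...
        if (next != null) {
            next.process(processed); // send the processed message downstream
        }
    }
}
```

Whether next.process() is a direct method call or a remote invocation over Web services, RMI, or sockets is purely a deployment decision, which is exactly the separation the definition calls for.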
The following figures show a system using a service-oriented approach. The system has service components that exchange messages within one server. It also has a service that exchanges messages between servers across the network.
A given service can run on any computer platform, or on many platforms at the same time. It can also run as a single instance or across thousands of instances (Figures 2, 3).
Figure 2: Service components exchanging messages within one server
Figure 3: A service exchanging messages between servers across the network
Earlier in the SPOC process, the current process flow of an application is mapped out. In the current step the service flow is defined. What's the difference? There are only a couple of differences, but they're important:
- The current process flow is just that - the way your application works today, not necessarily how you want it to work in the future.
- The process flow typically shows only the components, not the messages that pass between them.
Messaging is a key delineator of a service-oriented architecture. That may not sound like a huge difference, but it brings a whole new set of capabilities into the arsenal of the development team, allowing them to do what they do best: build functionality without getting bogged down in the details of deployment and operations. By combining messaging with software pipelines, the apparent conflict between the goals of flexibility and scalability for business applications is resolved for the first time.
Because of its service orientation, software pipelines allows a great degree of independence between the components that do the work (the services) and the framework that executes the components. In other words, service processing is independent from service invocation, and you can easily execute service components in parallel, completely independent of the services themselves.
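As a rough illustration of this independence (our own sketch, not the book's actual Pipeline Distributor implementation), the invocation side can be handled by a tiny distributor that pushes messages onto a pool of pipelines, while the service component itself remains completely unchanged:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Rough illustration of separating service invocation from service
 * processing; this is our own sketch, not the real Pipeline Distributor.
 */
public class SimpleDistributor {
    private final ExecutorService pipelines; // one worker thread per pipeline
    private final Service downstream;        // the unchanged service component

    public SimpleDistributor(int numberOfPipelines, Service downstream) {
        this.pipelines = Executors.newFixedThreadPool(numberOfPipelines);
        this.downstream = downstream;
    }

    /** Invocation side: accept a message and hand it to an available pipeline. */
    public void distribute(Object message) {
        pipelines.submit(() -> downstream.process(message));
    }
}
```

Note that a real distributor must also honor the ordering requirements discussed earlier, which this naive thread pool does not.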
The following section shows the example SPOC report containing the PBCOR results for this step.
Define Service Flow Design
The demand deposit team plans to consolidate the Automated Teller Machine (ATM) and Debit Card Services (DCS) transaction processing onto the Java platform. By making this change, we can use one service flow design for each transaction type. The service flow will use a single message type, a Java class (POJO, Plain Old Java Object) named com.pbcor.message.AccountTrans.
The AccountTrans message contains the transaction type, the authentication information, and fields for the actual transaction. By using a generic message type, we can perform any type of transaction and use any protocol to invoke the service from our Pipelines Framework. Figure 4 shows the high-level service flow for the ATM and DCS applications.
Figure 4: High-level service flow for the ATM and DCS applications
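The report specifies the class name and the categories of data the message carries; the field names and types in this sketch are assumptions for illustration:

```java
package com.pbcor.message;

/**
 * Hypothetical sketch of the AccountTrans POJO. The class name comes from
 * the report; the field names and types here are assumed for illustration.
 */
public class AccountTrans {
    private String transType;     // transaction type, e.g. DEBIT, CREDIT (assumed values)
    private String authToken;     // authentication information (assumed representation)
    private String accountNumber; // fields for the actual transaction (assumed)
    private long amountInCents;

    // Getters and setters omitted for brevity.
}
```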
The specialized protocol handlers for ATM and DCS transactions are providing satisfactory performance. We expect this performance to continue into the future; therefore, we are leaving these components intact. Instead, we are focusing on the Process Transaction step. Figure 5 shows our new service flow for Process Transaction.
Figure 5: New service flow for Process Transaction
In this new flow, the Debit Account and Credit Account components support both the ATM and DCS applications. Account Transfer applies only to ATM transactions. Check Balance is used as a single transaction by the ATM application, but both ATM and DCS use it to validate each transaction when required.
The sequence of the service flow is as follows (a minimal sketch of this handoff appears after the list):
- Protocol handlers deliver their messages in proprietary formats.
- Incoming messages are packaged into the AccountTrans message.
- AccountTrans is sent to a downstream component.
- The receiving component processes the transaction.
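Here is a minimal sketch of the first three steps, reusing the Service interface sketched earlier; the handler name and the parsing details are assumptions:

```java
/**
 * Hypothetical protocol handler that packages an incoming proprietary
 * message into an AccountTrans and sends it downstream. The class name
 * and parsing details are assumed for illustration.
 */
public class AtmProtocolHandler {
    private final Service downstream; // first service component in the flow

    public AtmProtocolHandler(Service downstream) {
        this.downstream = downstream;
    }

    /** Called with the raw message in the ATM's proprietary format. */
    public void onMessage(byte[] rawAtmMessage) {
        AccountTrans trans = parse(rawAtmMessage); // package into AccountTrans
        downstream.process(trans);                 // send to the downstream component
    }

    private AccountTrans parse(byte[] raw) {
        AccountTrans trans = new AccountTrans();
        // ...protocol-specific parsing of the raw bytes would go here...
        return trans;
    }
}
```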
Because the Debit Account transaction is the most frequently used transaction, we will focus our optimization efforts on it during the design phase. It will be the first service flow we implement using the pipelines architecture.
Figure 6 displays the details for Debit Account's service flow, which uses the AccountTrans message as input/output between services.

Figure 6: Details for Debit Account's service flow
Now that we've seen how the pipelines services are designed, let's jump ahead to show how we define and optimize the actual pipelines themselves. Pipelines are used to actually run the service components, and provide a highly flexible deployment paradigm for this purpose. This is covered in Step 3.7 of SPOC.
Define/Optimize Pipelines Design
In the final step of Pipelines Design, you'll combine the results from all previous steps into a practical, concrete design you can deploy. You're going to accomplish two goals in this phase:
- Design the final layout of your services, Pipeline Distributors, and pipelines.
- Use Pipelines Law to validate your design for each service flow.
To do this step, you'll need the ability to confirm and predict the flow rate of your application. We'll show you how to do that with a set of formulas in which you calculate the processing rate of the entire flow. In this phase, we calculate the processing rate taking the Pipeline Distributors into account. The Pipeline Distributor is the component that receives incoming messages and places them onto individual pipelines for processing. This inevitably adds overhead to the process, so it's critical to understand and analyze the overall performance. To create a precise design, you must include the impact from the distributors.
We'll use the example service flow shown in Figure 7 to illustrate how the formulas work.
Figure 7: Example service flow used to illustrate the formulas
In our example flow, Service A receives input messages and sends them to Service B. Order of processing is mandatory; the application must call Service A before it calls Service B. Notice that Service A can process 1000 TPS, whereas Service B has a capacity of only 400 TPS. Let's use the pipelines formulas to see how this service performs without pipelines. The formula for individual components is:
tT = 1/TPS * 1000
And the formula for the entire flow is
FlowTPS = (1/Σ(tT1, tT2,…tTn)) * 1000
Let's do the calculations:
[Service A rate at 1000 TPS] 1 ms = 1/1000 * 1000
[Service B rate at 400 TPS] 2.5 ms = 1/400 * 1000
[FlowTPS without pipelines] 285 TPS = (1/(1 ms + 2.5 ms)) * 1000
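These calculations are easy to reproduce in code. Below is a small sketch of the two formulas; the class and method names are our own:

```java
/** Small sketch of the Pipelines Law formulas above; names are our own. */
public final class PipelinesLaw {

    /** Per-transaction time in milliseconds: tT = 1/TPS * 1000. */
    public static double transactionTimeMs(double tps) {
        return 1.0 / tps * 1000.0;
    }

    /** Rate of a sequential flow: FlowTPS = (1/Σ(tT1..tTn)) * 1000. */
    public static double flowTps(double... componentTps) {
        double totalMs = 0.0;
        for (double tps : componentTps) {
            totalMs += transactionTimeMs(tps);
        }
        return 1.0 / totalMs * 1000.0;
    }

    public static void main(String[] args) {
        // Service A at 1000 TPS, Service B at 400 TPS -> ~285 TPS for the flow
        System.out.printf("FlowTPS = %.1f%n", flowTps(1000, 400));
    }
}
```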
We'd like to increase the rate of our example flow, which is pegged at 285 TPS, so let's look at some ways to optimize it. We'll use the following three methods:
- Pipeline the downstream service.
- Pipeline each of the services independently.
- Pipeline the entire service flow.
In the next section, we're going to see the effect of the first option, adding pipelines to the downstream service.
Pipeline the Downstream Service
The simplest way to optimize our flow is to pipeline Service B. We'll add a Pipeline Distributor, which you can see in Figure 8.
Figure 8: Pipelining Service B with Pipeline Distributor 1
In our new design, Pipeline Distributor 1 receives messages from Service A and distributes them to Service B. Order of processing is still mandatory; the application must call Service A before it calls each instance of Service B.
Now we want to know what the rate is when the flow includes Distributor 1, which increases the flow's overhead. First, we'll get the rate for the distributor plus one instance of Service B, but without Service A (you'll see why we do this shortly):
[Distributor 1 rate at 12,000 TPS] .083 ms = 1/12,000 * 1000
[Service B rate at 400 TPS] 2.5 ms = 1/400 * 1000
[Distributor 1 + Service B rate] 387 TPS = (1/(.083 ms + 2.5 ms)) * 1000
If you look at the illustration again, you'll see we designed the service with three pipelines; each pipeline runs an instance of Service B. We want to know the distributor's pipelined rate, in other words, the rate for the distributor plus its downstream services. The downstream services are the three pipelines, each going to one instance of Service B. To get the pipelined rate, we multiply 387, the TPS for the distributor plus one service, by three, the number of pipelines. The formula is
ServiceTPS = TPS * NumberOfPipelines
Therefore, the ServiceTPS for the distributor plus its downstream services is
1161 TPS = 387 * 3 pipelines
And now we can calculate the rate of the entire pipelined flow. To do this, we'll use the combined formula:
FlowTPS = 1/Σ(1/TPS1, 1/TPS2,…1/TPSn)
We won't use individual values for the distributor and Service B; instead, we'll use the ServiceTPS (distributor plus three downstream services) we just calculated. Therefore, the rate of the pipelined flow, with Service A (at 1000 TPS) and ServiceTPS (at 1161 TPS) is
[FlowTPS with distributor + pipelines] 537 TPS = 1/Σ(1/1000, 1/1161)
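Using the flowTps helper sketched earlier, the pipelined calculation looks like this (again, the names are ours):

```java
// Continuing the PipelinesLaw sketch above:
/** Pipelined rate: ServiceTPS = (distributor + one service) * NumberOfPipelines. */
public static double pipelinedTps(double distributorTps, double serviceTps,
                                  int numberOfPipelines) {
    double onePipeline = flowTps(distributorTps, serviceTps); // distributor + one instance
    return onePipeline * numberOfPipelines;
}

// Usage:
// double serviceTps = pipelinedTps(12_000, 400, 3); // ~1161 TPS
// double flow = flowTps(1000, serviceTps);          // ~537 TPS for the whole flow
```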
Notice that even when we added three pipelines for the slowest component, throughput increased only from 285 TPS to 537 TPS. That's because the flow is sequential, and because each component adds processing time. Table 1 shows how more pipelines affect the rate.
Table 1: How more pipelines affect the rate (rows recomputed from the formulas above; the baseline flow without pipelines is 285 TPS)

| Pipelines | ServiceTPS | FlowTPS | Gain |
|-----------|------------|---------|------|
| 1 | 387 | 279 | 1.0X |
| 2 | 774 | 436 | 1.5X |
| 3 | 1161 | 537 | 1.9X |
| 5 | 1935 | 659 | 2.3X |
| 10 | 3870 | 795 | 2.8X |
As you can see, even when we add ten pipelines, we still increase the rate by only 2.8X. Service A hasn't changed; its rate is 1000 TPS. Therefore, no matter how fast we push Service B (toward a theoretical rate of zero), we can't ever go beyond Service A's speed; we can only approach it. You can see how this works in the chart displayed in Figure 9, which shows the performance curve for adding more pipelines.
Figure 9: The performance curve for adding more pipelines
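The curve is simple to reproduce with the helpers sketched earlier, sweeping the number of pipelines:

```java
// Inside a main method, reproducing the curve in Figure 9:
for (int n = 1; n <= 10; n++) {
    double serviceTps = pipelinedTps(12_000, 400, n); // distributor + n pipelines
    System.out.printf("%2d pipelines -> FlowTPS = %.0f%n", n, flowTps(1000, serviceTps));
}
```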
This solution might be acceptable for some applications, but we're illustrating a specific point here: Potential scalability depends on the relative performance of your components. For example, if Service A has a capacity of 10,000 TPS, the effect of adding pipelines to Service B is much greater. Without pipelines, FlowTPS is approximately 385 TPS. With three pipelines, FlowTPS is
[FlowTPS with three pipelines] 1040 TPS = 1/Σ(1/10,000, 1/1161)
We've increased the rate about 2.7X. With five pipelines, we can increase the rate to over 4X.
Keep this point in mind as you design your implementation. When you apply pipelines to a service component, remember that the other service components in your flow will affect your result. This is the principle originally quantified in Amdahl's Law, which we're now using in pipelines theory to solve problems in multicore computing.
If the upstream component performs 1X to 2X faster than the downstream component, pipelining the downstream component makes a bigger impact. In the same vein, your pipeline distributor should run 1X to 2X faster than the component(s) it supplies. This will give you plenty of room for adding more pipelines in the future as your demand increases.
Conclusion
As you have seen from this brief tour, software pipelines and SPOC provide a method of addressing the scalability concerns of service-oriented solutions. Further, the methodology gives you the ability to estimate, predict and optimize a pipelines implementation, allowing you to assess the various alternatives available to you at design time. Using the formulas presented in this article, a number of different performance situations can be evaluated.
References
[REF-1] "Software Pipelines and SOA: Releasing the Power of Multicore Processing" by Cory Isaacson, Prentice Hall 2009, www.softwarepipelines.org
[REF-2] The SOA Magazine, Issue V: "High Performance SOA with Software Pipelines", www.soamag.com/I5/0307-1.asp; Issue VIII: "Software Pipelines in the Real World: Two SOA Performance Case Studies", www.soamag.com/I8/0607-3.asp; Issue XI: "Software Pipelines Theory: Understanding and Applying Concurrent Processing", www.soamag.com/I11/1007-3.asp
This article was originally published in The SOA Magazine (www.soamag.com), a publication officially associated with "The Prentice Hall Service-Oriented Computing Series from Thomas Erl" (www.soabooks.com). Copyright ©SOA Systems Inc. (www.soasystems.com)