Multiple microservices can be composed with each other to provide composite microservices. Some common design patterns are explained below.
In the Aggregator pattern, results from multiple microservices are combined into one composite microservice.
In its simplest form, an Aggregator would be a simple web page that invokes multiple services to achieve the functionality required by the application. Since each service (Service A, Service B, and Service C) is exposed using a lightweight REST mechanism, the web page can retrieve the data and process/display it accordingly. If processing is required—for example, if you need to apply business logic to the data received from individual services—then you will likely need a bean to transform the data before it is displayed by the Aggregator web page.
Figure 3: Aggregator Pattern
An Aggregator can also act simply as a higher-level composite microservice which can be consumed by other services. In this case, the Aggregator would collect the data from each individual microservice, apply business logic to it, and publish it as a REST endpoint.
This design pattern follows the DRY principle—if multiple services need to access Service A, B, and C, you should abstract that access logic into a single composite microservice. An advantage of abstracting at this level is that the individual services (i.e. Service A, B, and C) can evolve independently while the business logic is still provided by the composite microservice.
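As a minimal sketch of the composite Aggregator, the following uses plain functions (`service_a`, `service_b`, `service_c`) as hypothetical stand-ins for real REST calls; a real implementation would fetch each result over HTTP and publish the aggregate as its own REST endpoint.

```python
# Hypothetical stand-ins for Services A, B, and C; in practice each
# would be an HTTP call to an independent microservice.
def service_a():
    return {"orders": 3}

def service_b():
    return {"items": ["book", "pen"]}

def service_c():
    return {"status": "active"}

def aggregator():
    """Collect the data from each individual microservice, apply
    business logic, and return one consolidated response."""
    combined = {}
    for svc in (service_a, service_b, service_c):
        combined.update(svc())                       # aggregation step
    combined["item_count"] = len(combined["items"])  # example business logic
    return combined

print(aggregator())
```

Because the aggregation lives in one place, callers depend only on the composite endpoint, and Services A, B, and C remain free to evolve independently.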
The Proxy microservice design pattern is a variation of the Aggregator. In this case, aggregation does not need to happen client-side. Rather, a different microservice may be invoked as required by the business logic.
Just like in the Aggregator pattern, a Proxy can scale independently on the X-axis and Z-axis. You may want to do this in cases where each individual service does not need to be exposed to the consumer and should instead go through an interface.
Figure 4: Proxy Pattern
A Proxy can be classified in one of two ways. A dumb proxy just delegates any request to one of the services. Alternatively, a smart proxy applies some data transformation before the response is served to the client. A good example is encapsulating the presentation layer for different devices in the smart proxy.
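The distinction can be sketched as follows; the backing services and the device-specific trimming rule are hypothetical, chosen only to make the dumb/smart contrast concrete.

```python
# Hypothetical backing services (stand-ins for real REST calls).
def service_a(request):
    return {"payload": request.upper()}

def service_b(request):
    return {"payload": request[::-1]}

def dumb_proxy(request, target):
    # Delegates the request untouched to the chosen backing service.
    return target(request)

def smart_proxy(request, target, device):
    # Applies a transformation before the response reaches the client,
    # e.g. tailoring the presentation for the calling device.
    response = target(request)
    if device == "mobile":
        response["payload"] = response["payload"][:5]  # trim for small screens
    return response

print(dumb_proxy("hello", service_a))
print(smart_proxy("hello world", service_a, "mobile"))
```

The consumer only ever sees the proxy's interface, so the individual services stay unexposed, and the proxy can scale on the X- and Z-axes independently of them.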
The Chained microservice design pattern produces a single consolidated response to a request. In this case, the request from the client is received by Service A, which then communicates with Service B, which in turn may communicate with Service C. All of these services likely use synchronous HTTP request/response messaging.
Figure 5: Chained Pattern
One important thing to understand here is that the client is blocked until the complete chain of request/response (i.e. Service A ↔ Service B and Service B ↔ Service C) is completed. The request from Service B to Service C may look completely different from the request from Service A to Service B. Similarly, the response from Service B to Service A may look completely different from the response from Service C to Service B. And that’s the whole point; different services are adding their own value.
This means it’s important not to make the chain too long, because the synchronous nature of the chain will appear as a long wait on the client side—especially if it’s a web page waiting for the response to be shown. There are workarounds to the blocking caused by this request/response cycle, which are discussed in a subsequent design pattern.
Note: A chain with a single microservice is called a singleton chain.
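A minimal sketch of the chain, again with hypothetical plain functions in place of HTTP calls: each service may reshape the request on the way down and add its own value to the response on the way back, while the client blocks on the whole chain.

```python
def service_c(request):
    # End of the chain: produce the base response.
    return {"handled_by": ["C"], "data": request["data"] * 2}

def service_b(request):
    # Service B's request to C may look different from A's request to B.
    downstream = {"data": request["data"] + 1}
    response = service_c(downstream)
    # ...and B adds its own value to the response on the way back.
    response["handled_by"].append("B")
    return response

def service_a(request):
    response = service_b(request)
    response["handled_by"].append("A")
    return response

# The client blocks here until the whole A -> B -> C chain completes.
print(service_a({"data": 5}))
```

With synchronous HTTP, each extra link adds its latency to the client's total wait, which is why the chain should be kept short.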
The Branch microservice design pattern extends the Aggregator design pattern and allows simultaneous response processing from two (likely mutually exclusive) chains of microservices. This pattern can also be used to call different chains, or a single chain, based upon the business logic needs.
Figure 6: Branch Pattern
Service A—either a web page or a composite microservice—may invoke two different chains concurrently, resembling the Aggregator design pattern. Alternatively, Service A may invoke only one chain, based on the request received from the client.
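Both modes can be sketched as follows; the two chains, the `priority` field, and the routing rule are hypothetical, chosen only to illustrate chain selection versus concurrent invocation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical chains of microservices, reduced to single functions.
def chain_one(request):
    return f"chain-1 handled {request['id']}"

def chain_two(request):
    return f"chain-2 handled {request['id']}"

def service_a(request):
    # Branch mode: business logic picks exactly one chain per request.
    if request.get("priority") == "high":
        return chain_one(request)
    return chain_two(request)

def service_a_both(request):
    # Aggregator-like mode: invoke both chains concurrently and
    # combine their responses.
    with ThreadPoolExecutor() as pool:
        f1 = pool.submit(chain_one, request)
        f2 = pool.submit(chain_two, request)
        return [f1.result(), f2.result()]

print(service_a({"id": 7, "priority": "high"}))
print(service_a_both({"id": 9}))
```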
One of the design principles behind microservices is autonomy. This means the service is full-stack and has control of all the components—UI, middleware, persistence, transactions. This allows the service to be polyglot, so the right tool can be used for the right job. For example, if your application uses some data that fits naturally in a graph store, while other data fits naturally in a relational database, you can use the appropriate storage model for each domain, rather than jamming everything into a SQL or NoSQL database.
However, a typical problem (especially when refactoring from an existing monolithic application) is database normalization: ensuring that each microservice has the right amount of data—nothing less and nothing more. Even if only a SQL database is used in a monolithic application, denormalizing the database would lead to duplication of data, and possibly inconsistency. In a transition phase, some applications may benefit from a shared data microservice design pattern.
Figure 7: Shared Resources Pattern
Some microservices, likely in a chain, may share caching and database stores. This only makes sense if there is a strong coupling between the two services. Some people might consider this an anti-pattern, but business logic needs might require it in some cases. This would certainly be an anti-pattern for greenfield applications implementing a microservices design pattern.
While the REST design pattern is quite prevalent, and well understood, it has the limitation of being synchronous, and thus blocking. Asynchrony can be achieved, but must be done in an application-specific way. Because of this, some microservice architectures may elect to use message queues instead of REST request/response.
Figure 8: Async Messaging Pattern
In the preceding design pattern, Service A may call Service C synchronously, while Service C is communicating with Service B and D asynchronously using a shared message queue. Service A → Service C communication could also be asynchronous, possibly using WebSocket, to achieve the desired scalability.
A combination of REST request/response and pub/sub messaging may be used to accomplish the business logic need.
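The synchronous/asynchronous split can be sketched with Python's standard-library `queue` and `threading` modules; the service names follow the pattern description, but the message format and worker logic are hypothetical. Service C acknowledges Service A synchronously while handing the actual work to Service D through a shared queue.

```python
import queue
import threading

msg_queue = queue.Queue()
results = []

def service_d_worker():
    # Service D consumes messages asynchronously from the shared queue.
    while True:
        msg = msg_queue.get()
        results.append(f"processed:{msg}")
        msg_queue.task_done()

def service_c(request):
    # Service C responds to Service A synchronously, but enqueues the
    # work for Service D instead of blocking on it.
    msg_queue.put(request)
    return {"accepted": request}

threading.Thread(target=service_d_worker, daemon=True).start()

ack = service_c("order-42")     # synchronous call returns immediately
msg_queue.join()                # wait only so the demo output is deterministic
print(ack, results)
```

The `join()` call exists only to make this standalone demo deterministic; in a real deployment, Service A would not wait on the queue at all, which is exactly the decoupling the pattern provides.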