1. Use Bounded Contexts to Identify Candidates for Microservices
Bounded contexts (a pattern from domain-driven design) are a great way to identify which parts of a monolithic application's domain model can be decomposed into microservices. In an ideal decomposition, every bounded context can be extracted as a separate microservice that communicates with other similarly extracted microservices through well-defined APIs. A microservice need not share its model classes with consumers; keeping model objects hidden minimizes binary dependencies between microservices.
The use of an anti-corruption layer is strongly recommended while decomposing a monolith. An anti-corruption layer allows the model within one microservice to change without breaking the consumers of that microservice.
Figure 3: Bounded contexts for a shopping cart application
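As a minimal sketch of an anti-corruption layer, consider a translator that maps an upstream service's representation into the consuming context's own model. The DTO, field names, and conversion rules below are illustrative assumptions, not part of any real API:

```python
from dataclasses import dataclass

# External (upstream) representation, as a hypothetical legacy Orders
# endpoint might return it; field names are assumptions for illustration.
@dataclass
class LegacyOrderDto:
    ORDER_ID: str
    CUST_NAME: str
    TOTAL_CENTS: int

# Internal domain model of the consuming bounded context.
@dataclass
class Order:
    order_id: str
    customer_name: str
    total: float  # dollars

class OrderTranslator:
    """Anti-corruption layer: maps the upstream model into our own.

    If the upstream schema changes, only this translator needs to be
    touched; the rest of the consuming service keeps its model intact.
    """
    def to_domain(self, dto: LegacyOrderDto) -> Order:
        return Order(
            order_id=dto.ORDER_ID,
            customer_name=dto.CUST_NAME.title(),
            total=dto.TOTAL_CENTS / 100.0,
        )
```

Because consumers only ever see `Order`, the upstream `LegacyOrderDto` can evolve freely without leaking into the rest of the codebase.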
2. Designing Frontends for Microservices
In a typical monolith, one team is responsible for developing and maintaining the frontend. With a microservices architecture, multiple teams would be involved, and this could easily grow into an unmanageable problem.
This can be resolved by componentizing the various parts of the frontend so that each microservice team can develop in relative isolation. Each microservice team develops and maintains its own set of components. Changes to the various parts of the frontend are then encapsulated within components and do not leak into other parts of the frontend.
Figure 4: Web Components for various front-end modules
3. Use an API Gateway to Centralize Access to Microservices
The frontend, acting as a client of the microservices, may fetch data from several different microservices. Some of these microservices may not speak protocols native to the web (HTTP plus JSON/XML), thus requiring translation of messages from one protocol into another. Concerns such as authentication, authorization, and request-response translation, among others, can be handled centrally in a facade.
Consider using an API Gateway, which acts as a facade that centralizes these concerns at the network perimeter. The API Gateway responds to client requests over a protocol native to the web while communicating with the underlying microservices over whatever protocols they prefer. Clients of the microservices may identify themselves to the API Gateway through a token-authentication scheme such as OAuth. The token may be revalidated in downstream microservices to enforce defense in depth.
Figure 5: Use an API gateway as an intermediary
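The gateway's perimeter role can be sketched in a few lines: authenticate once, then route to a backend service. Everything here (service names, routes, the token check, the in-memory handlers) is a stand-in for real networking and real OAuth token introspection:

```python
from typing import Optional

# Stand-in for OAuth token introspection at the perimeter.
VALID_TOKENS = {"secret-token"}

# Hypothetical backend services, modeled as plain functions.
def cart_service(path: str) -> dict:
    return {"service": "cart", "items": []}

def catalog_service(path: str) -> dict:
    return {"service": "catalog", "products": ["book"]}

ROUTES = {"/cart": cart_service, "/products": catalog_service}

def gateway(path: str, token: Optional[str]) -> tuple:
    """Authenticate once at the perimeter, then route to a backend."""
    if token not in VALID_TOKENS:
        return 401, {"error": "unauthorized"}
    handler = ROUTES.get(path)
    if handler is None:
        return 404, {"error": "not found"}
    return 200, handler(path)
```

In a real deployment the routing table, protocol translation, and token validation would be handled by a gateway product or reverse proxy, but the division of responsibilities stays the same.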
4. Database Design and Refactorings
Unlike a monolithic application, which can be designed around a single database, microservices should each use a separate logical database. Sharing a database is discouraged and is an anti-pattern: it forces teams to wait for one another whenever the schema changes, and the database effectively becomes an API between the teams. Database schemas or other logical namespacing schemes can be used to separate each service's database objects without the need to create multiple physical databases.
Additionally, consider the use of a database migration tool such as Liquibase or Flyway when moving from a monolithic architecture to a microservices architecture. The single monolithic database would be cloned for every microservice, and migrations would then be applied to each copy to transform and extract only the data relevant to the microservice that owns it; not every object in the monolith's original database is of interest to each individual microservice.
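The versioned-migration idea behind tools like Flyway can be sketched with an ordered list of SQL scripts and a version table, here against an in-memory SQLite database. The table names and SQL are illustrative; a real tool adds checksums, locking, and rollback handling:

```python
import sqlite3

# Ordered, versioned migrations for a hypothetical cart microservice,
# in the spirit of Flyway's V1, V2, ... scripts.
CART_MIGRATIONS = [
    # V1: create only the objects this microservice owns
    "CREATE TABLE cart (id INTEGER PRIMARY KEY, customer TEXT)",
    # V2: evolve the schema without touching other services' databases
    "ALTER TABLE cart ADD COLUMN updated_at TEXT",
]

def migrate(conn, migrations):
    """Apply each pending migration exactly once, tracking the version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for version, sql in enumerate(migrations, start=1):
        if version > current:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version (v) VALUES (?)", (version,))
    conn.commit()
```

Re-running `migrate` is a no-op once the database is up to date, which is what makes the same script set safe to apply to every clone of the original monolithic database.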
5. Packaging and Deploying Services
Each microservice can in theory utilize its own technology stack, to enable the team to develop and maintain it in isolation with few external dependencies. It is quite possible for a single organization to utilize multiple runtime platforms (Java/JVM, Node, .NET, etc.) in a microservices architecture. This potentially opens up problems with provisioning these various runtime platforms on the datacenter hosts/nodes.
Consider the use of virtualization technologies that allow each tech stack to be packaged and deployed in its own virtual environment. Taking this further, containers are a lightweight virtualization technology that allows a microservice and its runtime environment to be packaged in a single image; with containers, one forgoes the need to package and run a guest OS on the host OS. Beyond that, consider a container orchestration technology such as Kubernetes, which allows you to define the various deployments in a manifest file, or a container platform such as OpenShift. The manifest defines the intended state of the datacenter, with its containers and associated policies, enabling a DevOps culture to grow across development and operations teams.
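As a sketch of such a manifest, a minimal Kubernetes Deployment for a hypothetical cart microservice might look like the following; the service name, image reference, labels, and replica count are all illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-service
spec:
  replicas: 3                 # intended state: three identical instances
  selector:
    matchLabels:
      app: cart-service
  template:
    metadata:
      labels:
        app: cart-service
    spec:
      containers:
        - name: cart-service
          image: registry.example.com/cart-service:1.0.0   # illustrative image
          ports:
            - containerPort: 8080
```

The orchestrator continuously reconciles the cluster toward this declared state, restarting or rescheduling containers as needed.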
6. Event-driven architecture
Microservices architectures tend to be eventually consistent, given that state is spread across multiple datastores within the architecture. Individual microservices may themselves be strongly consistent, but the system as a whole may exhibit eventual consistency in parts. To account for this property, consider an event-driven architecture in which data changes in one microservice are relayed to interested microservices through events.
A pub-sub messaging architecture may be employed to realize the event-driven architecture: a microservice publishes events as they occur in its context, and those events are delivered to all interested microservices, which can then update their own state. Events are thus a means of transferring state from one service to another.
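The publish-subscribe flow above can be sketched with an in-memory event bus; topic names, payloads, and the subscribing service are illustrative assumptions (a real system would use a broker such as Kafka or RabbitMQ):

```python
from collections import defaultdict

class EventBus:
    """In-memory pub-sub bus: handlers subscribe to topics by name."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every interested subscriber.
        for handler in self._subscribers[topic]:
            handler(event)

class InventoryView:
    """A hypothetical inventory service keeping its own view of stock,
    updated from events emitted by an order service."""
    def __init__(self, bus):
        self.stock = {"book": 5}
        bus.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, event):
        self.stock[event["sku"]] -= event["qty"]
```

Until an `order.placed` event is delivered, the inventory service's view lags behind the order service's state, which is precisely the eventual consistency the section describes.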
7. Consumer-Driven Contract Testing
One may start off using integration testing to verify the correctness of changes made during development. This is, however, an expensive process, since it requires booting up the microservice, its dependencies, and all of its consumers to verify that changes have not broken the consumers' expectations.
This is where consumer-driven contracts help restore agility. With a consumer-driven contract, a consumer declares, as a contract, exactly what it expects from a provider.
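The idea can be sketched as follows: the consumer pins down just the fields and types it relies on, and the provider is verified against that contract without booting the consumer at all. The endpoint, field names, and verification logic are illustrative stand-ins for a contract-testing tool such as Pact:

```python
# A consumer-driven contract: the request to replay and the minimum
# response shape the consumer depends on (field name -> expected type).
CONSUMER_CONTRACT = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {"order_id": str, "status": str},
}

def provider_handler(method, path):
    """Stand-in for the provider's real endpoint; extra fields are fine."""
    return {"order_id": "42", "status": "SHIPPED", "internal_flag": True}

def verify_contract(contract, handler):
    """Replay the contract's request and check the response shape."""
    req = contract["request"]
    response = handler(req["method"], req["path"])
    for field, expected_type in contract["response"].items():
        if field not in response or not isinstance(response[field], expected_type):
            return False
    return True
```

Note that the provider may return more than the contract requires; verification only fails when a field the consumer depends on is missing or has the wrong type, so the provider stays free to evolve everything else.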