The Twelve-Factor App and WSO2 Micro Integrator
We're going to discuss how to apply the Twelve-Factor App methodology while designing and architecting a microservices platform using WSO2 Micro Integrator.
Over time, practitioners have converged on a solid baseline for standing up a microservices architecture. In 2011, Adam Wiggins (Heroku co-founder) published the Twelve-Factor App methodology for building software-as-a-service, based on Heroku's own experiences. The concepts described there are platform and language agnostic. In this article, however, we are going to discuss how to apply those principles while designing and architecting a microservices platform using WSO2 Micro Integrator, WSO2's well-known enterprise integration product.
Acronyms

| Acronym | Expansion |
| --- | --- |
| MI | Micro Integrator |
| IDE | Integrated Development Environment |
| API | Application Programming Interface |
| QoS | Quality of Service |
| VCS | Version Control System |
| JMS | Java Message Service |
| CI/CD | Continuous Integration & Continuous Delivery |
| LDAP | Lightweight Directory Access Protocol |
WSO2 Micro Integrator
WSO2 Micro Integrator is a lightweight platform that can be used to build composite microservices.
A typical microservices platform has several layers. At the bottom layer, we find fine-grained microservices, designed to perform specific, independent functions. To deliver complete business functionality and provide the required QoS to users, these services need to be integrated and presented as composite microservices, usually described as coarse-grained microservices. WSO2 Micro Integrator can be used for this microservice integration: it provides a platform on which you can integrate different services without writing any code, building custom integration logic from mediator configurations. WSO2 MI ships with an IDE called Integration Studio, a powerful editor for building integration configurations and exporting them onto the WSO2 MI runtime.
In his article 'Microservices Layered Architecture,' Kasun describes this layered approach to building microservices; the dotted line in his diagram shows where WSO2 MI fits in.
Importance of Applying Twelve-Factor Principles
Beyond theories and concepts, any engineering discipline accumulates best practices. These are identified by teams who have already applied the concepts in practice and refined them over several iterations. Anyone entering the field can then apply those principles to avoid practical hurdles that would otherwise appear later in the project.
In the same way, it pays to investigate early whether the platform you have chosen for building your plain or composite microservices is architected so that you can practice the Twelve-Factor principles, because the whole system will be built on that platform and on the capabilities it gives you for writing application logic.
Quoting the goals of the Twelve-Factor App methodology, which describes apps that:
Use declarative formats for setup automation, to minimize time and cost for new developers joining the project.
Have a clean contract with the underlying operating system, offering maximum portability between execution environments.
Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration.
Minimize divergence between development and production, enabling continuous deployment for maximum agility.
And can scale up without significant changes to tooling, architecture, or development practices.
Even a glance at these goals shows how important they are to a successful project. The rest of this article reviews the WSO2 Micro Integrator platform against the Twelve Factors, in the hope that it helps readers set their projects up for success!
Twelve-Factor Methodology
Factor 1: Codebase
This is very fundamental: keep all code in a version control system (VCS) such as Git, hosted on a platform like GitHub. Version control tracks the different versions of, and changes to, your logic, and it underpins the project's CI/CD pipelines. The code in the repository is what gets built, tested, and deployed.
The integration logic for WSO2 MI is configured using Integration Studio. The editor allows you to create an Integration Project, which contains several sub-modules. The following are the main modules found inside an integration project.
Config Module: keeps the integration logic-related configuration. As stated at the beginning, all integration logic is just XML configuration, not code. Whatever changes you make to the integration logic are kept in this module.
Composite Application Module: when the integration logic is ready, it needs to be deployed onto the WSO2 MI runtime. This is done by packaging the required artifacts of an Integration Project into a Carbon Application (CApp) using a Composite Application module.
The integration project along with its sub-modules can be VCS controlled. Integration Studio itself gives you the ability to commit and manage the artifacts with a VCS. In that manner, multiple developers of the team can collaborate on building integration logic.
Another important aspect is that code should not be shared between applications. You need a separate project for each composite microservice, or for each meaningfully cohesive set of composite microservices. If some integration logic is common to many applications, make it a separate project; upon deployment, you can then deploy the CApp built from that logic to every MI instance. In short, you should not repeat the same code in different applications. (Note: when designing the microservices platform, decide on the boundaries of each application so that there are minimal interactions and dependencies between different microservices. Of course, this does not mean that all integration logic should go into a single project!)
You can cut releases from projects and name the CApps with the version appended; that way, you can also deploy different versions of the same integration-logic codebase. The code in the repository should be environment agnostic. It is used to produce a single build, which is combined with environment-specific configuration to produce an immutable release. In a typical integration project, what differs between the dev, test, and production environments is the endpoints. In the projects, endpoints should be referred to by a key, and the actual endpoints should be supplied by an environment-specific CApp or by environment variables. This eliminates the need to maintain multiple projects with the same code for different environments.
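As a minimal sketch of this keyed-endpoint pattern (the endpoint name `OrderServiceEP` and the URL are hypothetical examples, not from the original article), the integration logic refers to the endpoint only by key, while an environment-specific artifact supplies the actual address:

```xml
<!-- In the integration-logic project: reference the endpoint by key only.
     "OrderServiceEP" is a hypothetical name. -->
<send>
    <endpoint key="OrderServiceEP"/>
</send>

<!-- In the environment-specific CApp (one per environment): the actual address. -->
<endpoint name="OrderServiceEP" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="http://order-service.dev.example.com:8080/orders"/>
</endpoint>
```

Because only the second artifact differs per environment, the same integration-logic CApp can be promoted unchanged from dev to production.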
There are workflow tools that can help you manage your codebase till release, such as GitFlow (Release Management Workflow).
Factor 2: Dependencies
This factor specifies that dependencies should not be packaged into the code; instead, use package/dependency-management tools.
Both the Integration Project and the Composite Application Project discussed above are Apache Maven-based projects. This means that if there are third-party libraries we need to package into a CApp (e.g., a custom class mediator), we can declare them as Maven dependencies. The source of the custom class mediator lives in a separate project.
Note, however, that in Integration Studio you cannot declare one CApp as a dependency of another Integration Project. Instead, related CApps cooperate at runtime: the CApp containing an environment's endpoints cannot be imported as a dependency into the Integration Project containing the integration logic, but when the two CApps are deployed together on MI, they run in the same environment and refer to each other.
Factor 3: Config
When an application needs to support multiple environments, the configurations play a critical role. There should be a strict separation between config and code. Code should remain the same irrespective of where the application is being deployed, but configurations can vary.
The bottom line of this factor is that environment-dependent properties of the integration logic should go into configurations. When using WSO2 Micro Integrator, while integration logic is contained in the CApp, the environment-specific properties can be fed in different ways.
Use a separate CApp, as described in the sections above. This CApp will contain the endpoints, hosts, and connection parameters specific to the dev, test, and prod environments. In the deployment configuration, we can pull the CApp relevant to the target environment and trigger the build.
Use environment variables to specify environment-specific parameters. Please refer to the 'Injecting Parameters as Environment Variables' section of the WSO2 MI documentation for the available options. If you are deploying WSO2 MI on Kubernetes (including managed platforms like GKE), you can use ConfigMaps, which let you bind environment variables, port numbers, configuration files, command-line arguments, and other configuration artifacts to your pods' containers.
Specify values in the deployment.toml configuration file. This file contains all the configuration for a WSO2 MI instance, from port offsets to runtime performance parameters. In a container-based deployment, this file can be mounted to externalize it (e.g., via ConfigMaps in Kubernetes). For VM deployments, configuration management can be done with tools such as Chef, Puppet, and Ansible.
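As an illustrative sketch (values are hypothetical, and exact key names should be verified against the deployment.toml catalog for your MI version), the environment-specific parts of a deployment.toml might look like:

```toml
# Hypothetical deployment.toml fragment: only these values change per environment.
[server]
hostname = "mi.dev.example.com"   # environment-specific host
offset   = 0                      # port offset for this instance

# External datasource for this environment (illustrative keys and values).
[[datasource]]
id       = "ORDERS_DB"
url      = "jdbc:mysql://db.dev.example.com:3306/orders"
username = "mi_user"
password = "changeme"
```

Mounting this file from a Kubernetes ConfigMap or Secret keeps the image identical across environments.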
An easy way to check whether the configuration has been externalized correctly is to see if the code (integration logic) can be made public without revealing any credentials.
Factor 4: Backing Services
This factor insists that developers treat backing services as attached resources. Backing services are the supporting infrastructure and other services the application communicates with, for example:
Databases.
Message Broker.
Authorization APIs.
LDAP servers.
The idea is that you should be able to change a supporting service without changing the code. With WSO2 MI, the integration logic is contained in the CApp, whereas all the configuration related to the above services is externalized. You can change it using the methods discussed under Factor 3: Config.
Another aspect is that you must be able to easily swap a backing service from one provider to another without code changes. WSO2 MI uses a 'coding to interfaces' or facade approach wherever possible to achieve this. For example, the same JMS transport used to communicate with IBM MQ can be used to communicate with the ActiveMQ broker, because the transport is built on the Java Message Service (JMS) interface provided by Java. You can therefore switch JMS-compliant messaging providers just by changing the transport configuration, with no change to the CApp or the Docker image (if you are running on containers). This is not always possible with every backing service, however.
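A hedged sketch of that facade effect (the proxy name, sequence contents, and connection-factory name are hypothetical): a JMS proxy service only names a connection factory, so pointing that factory at a different JMS-compliant broker requires no change to this artifact:

```xml
<!-- Hypothetical JMS proxy: swapping ActiveMQ for IBM MQ is done in the
     server-level transport configuration that defines
     "myQueueConnectionFactory", not in this artifact. -->
<proxy name="OrderQueueProxy" transports="jms"
       xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <log level="full"/>   <!-- log the consumed message -->
            <drop/>               <!-- end of this illustrative flow -->
        </inSequence>
    </target>
    <parameter name="transport.jms.ConnectionFactory">myQueueConnectionFactory</parameter>
    <parameter name="transport.jms.DestinationType">queue</parameter>
</proxy>
```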
WSO2 MI supports almost all production-grade databases which you can configure using deployment.toml configuration file. The authorization server config is also externalized so that you can point to any on-premise or cloud authorization server.
Factor 5: Build, Run, Release
The fifth factor requires strict separation between the Build, Release, and Run stages.
Build:
The Build phase takes code from the VCS and produces an executable bundle. In WSO2 integration development, this stage is about building the Integration Project and creating a deployable CApp: the Integration Project and Composite Application Project are checked out, and a Maven build is run to generate the CApp. Ideally, this stage should also execute all of the application's unit tests; WSO2 Integration Studio provides a unit-testing framework you can use to write a test suite for your integration logic. If tests fail, the whole process should stop there. This stage can be considered Continuous Integration (CI): the project is pulled from the VCS and built whenever a team member merges a change to the integration logic.
Release:
The executable is combined with environment-specific configuration, assigned a unique release number, and made ready to execute in the environment. If you are setting up a microservices-based runtime using WSO2 MI, in this step you will build a Docker image containing both the CApp with the integration logic and the CApp with the environment-specific endpoints. Give the image a unique name and tag so that it is uniquely identified. You can use plain Docker commands or create a Docker project with WSO2 Integration Studio to achieve this. Note that this project also needs to reside in a VCS.
Further, if you are deploying on Kubernetes, environment-specific configs can be kept in different Git repositories, and the Kubernetes deployment project can be configured to pull the configs for the target environment at run time. WSO2 facilitates K8s deployments through K8s operators; have a look at the Kubernetes Project type provided by WSO2 Integration Studio.
Run:
Using the necessary execution commands, the package is finally executed in an environment. This phase can be considered Continuous Deployment; once the deployment is complete, the system can be invoked.
Every deployment, whether Dev, QA, Stage, or Prod, needs to follow all the stages above. Once the CI/CD pipeline is set up, all steps through deployment run automatically, without human intervention, making the process quick and reliable. Note that the only differences between the Dev, QA, Stage, and Prod environments are the environment-specific configurations; the integration logic itself should be identical.
Tools that help here include Git, Jenkins, Git hooks, Maven, etc.
Factor 6: Stateless Processes
This factor is focused on running your application as one or more stateless processes.
One of the fundamentals is that your application should scale horizontally when the number of incoming requests is high. Container-management platforms like Kubernetes have auto-scaling features: a load balancer is placed in front of all instances and client requests are distributed across them. There is no guarantee that a request from a particular client always reaches the same application instance; it could land on any of them. Hence, the processing of a request must not rely on data from a previous request. Keeping state in memory is also not going to work, since it is not replicated to other nodes and a given container may be stopped, restarted, or killed at any time.
If you need to keep state, keep it in a backing service such as a database; instead of sticky sessions, you can set up scalable caches like Memcached or Redis. WSO2 Micro Integrator does not keep state for HTTP messages. For polling transports (file reading, message-broker topics, message processors) where coordinated access is needed, one node of the cluster processes the message, with failover support; the state in that case is kept in a database.
Factor 7: Port Binding
In non-cloud environments, web apps are often written to run inside application containers or servers such as GlassFish, Apache Tomcat, and Apache HTTP Server. A twelve-factor app is completely self-contained: the web-server library is part of the app itself. WSO2 MI runs as a standalone server on the 'WSO2 Carbon' runtime, and all transports supported by WSO2 Micro Integrator operate without any external dependency.
If you build a service, make sure other services can treat it as a resource if they wish (as discussed earlier). Services are usually exposed through a port, and it is an architectural best practice to make this port configurable via an environment variable or external configuration.
WSO2 MI serves HTTP traffic on port 8290 and HTTPS traffic on port 8253, both configurable in the deployment.toml file.
You can configure a port offset in the deployment.toml file, which adds the defined offset to every port WSO2 MI serves on.
When you need to expose an API on an arbitrary port, you can use an Inbound HTTP Endpoint; the port is set in its configuration.
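For illustration (the endpoint name, sequences, and port are hypothetical), an Inbound HTTP Endpoint listening on an arbitrary port might be sketched as:

```xml
<!-- Hypothetical inbound HTTP endpoint: requests arriving on port 8285 are
     handed to the "requestSeq" sequence; "faultSeq" handles errors. -->
<inboundEndpoint name="OrderInbound" protocol="http"
                 sequence="requestSeq" onError="faultSeq" suspend="false"
                 xmlns="http://ws.apache.org/ns/synapse">
    <parameters>
        <parameter name="inbound.http.port">8285</parameter>
    </parameters>
</inboundEndpoint>
```

Because the port is just a parameter, it can be injected per environment rather than hard-coded.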
Port binding is an important concept when it comes to constructing platform-as-a-service models. This makes WSO2 MI a candidate to build a cloud-based integration platform.
If you are deploying WSO2 MI in a Kubernetes environment, which has built-in service discovery, you can abstract port bindings by mapping service ports to containers; service discovery is accomplished using internal DNS names.
Factor 8: Concurrency
Concurrency and processes are an important consideration in the twelve-factor app. What must be avoided here is a monolithic application that can only be scaled vertically. Instead, break your application into small units that can be scaled horizontally and independently, so the system can adapt to varying loads. Factor 6 makes this possible, since the application units are stateless.
WSO2 Micro Integrator does not keep state for the messages it processes. Hence, once you break the whole application into small units and create separate CApps for them, you can build different WSO2 Micro Integrator-based images and run them as containers. Then, even if you do not know at design time what load you will get in production, the system can be scaled to future load requirements.
WSO2 Micro Integrator runtime is designed with the following factors in mind.
Concurrency-centered design: connection pooling and task pooling are used in the transports to handle concurrent loads, and every mediator (mediation-logic element) is designed to process messages concurrently in complete isolation. Please refer to the WSO2 documentation to learn how to performance-tune each transport.
Small resource footprint: the platform itself consumes few resources, leaving container resources for the real integration logic that processes messages.
Lightweight in size: WSO2 MI-based image is not bulky.
If you deploy WSO2 MI-based Docker images on Kubernetes, it natively provides auto-scaling features. Please check this document.
Factor 9: Disposability
Processes in twelve-factor apps should start and stop in minimal time. This is important for constructing scalable systems; otherwise, failure recovery or scaling up to handle a load spike will adversely affect the overall stability of the system. Even if the system would work without this factor, it would diminish reliability, and that is not a sign of good software.
Graceful shutdown is also important, because only then is the port freed up to spawn another container.
WSO2 Micro Integrator-based containers start up within about 5 seconds. Upon graceful shutdown, all accepted connections are processed by WSO2 MI so that users are not affected by the container stopping. There are some cases, such as WebSocket data streams, where the client system must retry seamlessly, without the user's knowledge, when the connection is lost. WSO2 MI also closes connections to backing systems (databases, message brokers, etc.) properly during graceful shutdown, so that no consistency issues arise when it starts up again.
This factor indirectly calls for resiliency and auto-scaling, which are generally easy to achieve with container-orchestration platforms like Kubernetes.
Factor 10: Dev-Prod Parity
There should be minimal difference between the development, test, and production setups. Even though these should be identical except for environment-specific configuration, in practice they diverge for various reasons:
The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
The personnel gap: Developers write code, ops engineers deploy it.
The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.
The twelve-factor app is designed for continuous deployment by keeping the gap between development and production small.
The takeaway from this factor is that adopting a continuous-deployment model and encouraging a DevOps culture in your organization leads to a successful integration system. If you focus on and achieve the previous factors, this one should be straightforward.
Also, when setting up the Dev, Test, and Prod environments, developers tend to use different backing services, or to mock the backing services in the lower environments. The twelve-factor developer resists the urge to use different backing services between development and production, because even small differences between them can cause issues in the higher environments.
The following are some tools and platforms you can use to build a CI/CD process for WSO2 MI.
GitHub — to store the integration project.
Amazon ECR — a Docker registry to keep images.
Jenkins/GitHub — to build artifacts.
Docker — to build images.
KMS — to store your cryptographic keys in one central location.
Managed Kubernetes services such as AWS EKS, Google GKE, or Microsoft AKS — to run containers.
Please refer to the WSO2 documentation to get an idea of how to set up a CI/CD flow using Helm charts (the package manager for Kubernetes).
Factor 11: Logs
Logs provide visibility into the health of a running application. It is important to decouple the collection, processing, and analysis of logs from the core logic of your apps. WSO2 Micro Integrator uses Log4j2 with asynchronous logging capabilities, so logging imposes minimal overhead on message processing within the WSO2 MI core.
Logs can be considered an output stream of aggregated, time-ordered events (usually one event per line of text). A twelve-factor app never concerns itself with the routing or storage of its output stream; logs can be written to stdout or streamed to an external platform. WSO2 MI logging is configured through the log4j2.properties file. By default, it stores logs in the wso2carbon.log file and writes to stdout. You can configure it to stream logs to Splunk or a Logstash/ELK stack instead by writing a custom log publisher and adding it to the runtime. If you run WSO2 MI-based images in containers, this file can be config-mounted, so that changes made to it externally are applied to the WSO2 MI runtime without restarting the container.
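As a rough sketch of routing logs to stdout in Log4j2's properties format (the appender name and pattern are illustrative; the names in the actual WSO2 MI log4j2.properties may differ):

```properties
# Illustrative Log4j2 fragment: send all root-logger output to stdout so the
# container platform can collect it as an event stream.
appender.stdout.type = Console
appender.stdout.name = stdout
appender.stdout.target = SYSTEM_OUT
appender.stdout.layout.type = PatternLayout
appender.stdout.layout.pattern = [%d] %5p {%c} - %m%ex%n

rootLogger.level = INFO
rootLogger.appenderRef.stdout.ref = stdout
```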
If you have a number of microservices, it is essential to configure centralized logging: you cannot get a full picture of what is going on in the system by looking at the individual log files of each container. Further, log storage must be externalized to achieve auto-scaling. As a side note, you can also run a logging agent (Fluentd, Fluent Bit, Filebeat) as a sidecar in each WSO2 MI pod to read the WSO2 MI log files and send log events to Elasticsearch.
When using Kubernetes, you can instead push logs using a DaemonSet, which creates one daemon process per Kubernetes worker node, rather than a sidecar, which creates a container per application. Please read the referenced article for more information.
WSO2 MI is capable of generating logs per proxy service or per API. Those logs will be stored in separate log files. This will make analysis easy when you have a lot of APIs or proxy services.
Trends, alerts, heuristics, monitoring: all of these can come from well-designed logs treated as event streams. WSO2 Streaming Integrator is useful for analyzing event streams and generating real-time notifications; the WSO2 documentation contains more information on that approach.
Log-indexing systems such as Splunk can produce useful insights as part of log analysis, for example:
Finding specific events in the past.
Large-scale graphing of trends (such as requests per minute).
Active alerting according to user-defined heuristics (such as an alert when the quantity of errors per minute exceeds a certain threshold).
Factor 12: Admin Processes
You need to maintain your services after development and launch to production. Administrative processes usually consist of one-off tasks or timed, repeatable tasks such as generating reports, executing batch scripts, taking database backups, and migrating schemas.
Usually, admin processes run against a release, using the same codebase and config as any other process of that release. It is a good idea to ship the admin code with the application code; for example, DB migration scripts are shipped within a WSO2 Micro Integrator release.
The twelve-factor methodology strongly favors languages that provide a REPL shell out of the box and make it easy to run one-off scripts. In a production deployment, developers can use SSH or another remote command-execution mechanism.
Conclusion
The concepts introduced by the twelve-factor app are not completely novel, but they are especially relevant to a microservices architecture: without these best practices, running a set of interconnected services reliably is very difficult. You will perceive their value as the number of microservices in the system grows. Hence, it is worth reading the Twelve-Factor App methodology in detail.
WSO2 Micro Integrator, a tool for designing composite microservices, can be set up according to the baseline set by the Twelve-Factor App. In this article, we discussed each factor in detail along with the WSO2 MI features that facilitate it. This should be helpful for any team architecting a microservices-based system with WSO2 MI; starting the project on a solid foundation will definitely help your journey.
Opinions expressed by DZone contributors are their own.