How to Build a CI/CD Pipeline for Your Enterprise Middleware Platform
Learn about leveraging the advantages of CI/CD in enterprise middleware components for maintainable DevOps.
Continuous integration and continuous deployment (or delivery), a.k.a. CI/CD, is one of the most talked-about ideas in enterprise software development. With the rise of microservice architecture (MSA), it has become a mainstream process within enterprises. If you are familiar with MSA, you will have heard about greenfield and brownfield integrations, where you start your microservices journey either from scratch or from an existing enterprise architecture (which is the case 80% of the time). According to this survey, more and more organizations are moving ahead with MSA even while accepting that it is genuinely hard to maintain and monitor (of course, tools are emerging to cover these aspects). The survey shows that the advantages of MSA outweigh those disadvantages. CI/CD is tightly coupled with MSA and DevOps culture, and due to the dominance of MSA within enterprises, CI/CD has become an essential part of every software development lifecycle.
With this shift in the enterprise toward MSA, DevOps, and CI/CD culture, the other parts of the brownfield cannot stay out of these waves. These “other parts” consist of:
- Enterprise Middleware (ESB/APIM, Message Broker, Business Process, IAM products)
- Application Servers (Tomcat, WebSphere)
- ERP/CRM software (mainly COTS systems)
- Homegrown software
Sometimes, it might not be practical to implement CI/CD processes for every software component mentioned above. In this article, I’m going to talk about how we can leverage the advantages of CI/CD process within enterprise middleware components.
Let’s start with one of the most common enterprise middleware products: the Enterprise Service Bus (ESB). An ESB provides the central point that interconnects the heterogeneous systems within your enterprise and adds value to your enterprise data through enrichment, transformation, and many other functions. One of the main selling points of ESBs has been that they are easy to configure through high-level Domain-Specific Languages (DSLs) like Synapse, Camel, etc. If we are to integrate an ESB with a CI/CD process, we need to consider two main components within the product:
- ESB configurations which implement the integration logic
- Server configurations which install the runtime in a physical or virtualized environment
Of the above two components, the ESB configurations go through continuous development and change far more frequently, so automating the development and deployment of these artefacts (configurations) is the more critical task. Without automation, going through a develop, test, deploy lifecycle for every minor change costs the engineering staff a lot of time and invites critical mistakes.
Another important aspect of automating the development process is the assumption that the underlying server configurations are not affected by these changes and stay the same. Keeping this assumption is a best practice, because having multiple variables makes it really hard to validate the implementation and complete the testing. The figure below shows a process flow that can be used to implement a CI/CD process with an ESB.
Figure 1: CI/CD with middleware platform (ESB).
The process will automate the development, test, and deployment of integration artefacts.
- Developers use an IDE or an editor to develop the integration artefacts. Once they are done with the development, they will commit the code to GitHub.
- Once this commit is reviewed and merged to the master branch, it will automatically trigger the next step.
- A continuous integration tool (e.g. Jenkins, Travis CI) will build the master branch, create a Docker image containing the ESB runtime and the build artefacts, and deploy it to a staging environment. At the same time, the build artefacts are published to Nexus so that they can be reused during product upgrades.
- Once the containers are started, the CI tool will trigger a shell script that runs the Postman collections using Newman, installed on the test client.
- Tests will run against the deployed artefacts.
- Once the tests have passed in the staging environment, Docker images will be created for the production environment and deployed to the production environment.
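The staging leg of the steps above can be sketched as a shell script that a CI tool like Jenkins might invoke. Everything concrete here — the Maven build, the image names, the Nexus URL, and the Postman collection path — is an assumption for illustration; the script defaults to a dry run that only prints the commands it would execute.

```shell
#!/bin/sh
# Sketch of the build -> image -> stage -> test -> promote flow.
# All names and URLs below are illustrative assumptions.
set -eu

DRY_RUN="${DRY_RUN:-true}"   # default: print commands instead of running them

run() {
  if [ "$DRY_RUN" = "true" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

# 1. Build the merged master branch and package the integration artefacts.
run mvn -B clean package

# 2. Bake the ESB runtime and the build artefacts into a Docker image.
run docker build -t esb-integration:staging .

# 3. Publish the artefacts to Nexus so they can be reused during upgrades.
run mvn -B deploy -DaltDeploymentRepository=nexus::default::https://nexus.example.com/repository/releases

# 4. Deploy to staging and run the Postman collection with Newman.
run docker run -d --name esb-staging -p 8280:8280 esb-integration:staging
run newman run tests/integration-suite.postman_collection.json --env-var host=staging.example.com

# 5. Only after the tests pass, retag the same image for production.
run docker tag esb-integration:staging esb-integration:production
```

Promoting the already-tested image (rather than rebuilding for production) keeps staging and production bit-identical, which is the point of baking artefacts into the image.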
The above process covers the development of middleware artefacts, but the runtime versions themselves receive patches, updates, and upgrades quite frequently, given customer demands and the number of features these products carry. We should consider automating the update of this server runtime component as well.
The method in which different vendors provide updates, patches, and upgrades can be slightly different from vendor to vendor, but there are three main methods:
- Updates delivered as patches, which are installed before the running server is restarted
- Updates delivered as new binaries, which replace the running server
- In-flight updates, which update (and restart) the running server itself
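A pipeline typically needs to branch on which of these delivery methods the vendor uses. Here is a minimal sketch assuming a hypothetical `UPDATE_METHOD` variable; the branch bodies only describe the actions, so substitute your vendor's actual tooling:

```shell
#!/bin/sh
# Dispatch on the vendor's update-delivery method.
# UPDATE_METHOD values and the echoed actions are assumptions.
set -eu

apply_update() {
  case "$1" in
    patch)
      # Patch files go on top of the existing install; a restart picks them up.
      echo "apply patch files, then restart the running server"
      ;;
    binary)
      # A fresh distribution replaces the old one; configs must be reapplied.
      echo "replace the server binaries, reapply templated configs, restart"
      ;;
    in-flight)
      # The server updates (and restarts) itself via its own update tool.
      echo "run the vendor update tool against the live server"
      ;;
    *)
      echo "unknown update method: $1" >&2
      return 1
      ;;
  esac
}

apply_update "${UPDATE_METHOD:-binary}"
```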
Depending on the method by which you get the updates, you need to align your CI/CD process for server updates. The following process flow (Figure 2) defines a CI/CD process for server updates which will happen less frequently compared to the development updates.
Figure 2: CI/CD process for server updates.
The process depicted in Figure 2 can be used with any of the update scenarios mentioned in the previous section. Here’s the process flow:
- One of the important aspects of automating the deployment is to extract the configuration files and make them templates which can be configured through an automated process (e.g. shell, Puppet, Ansible). These configurations can be committed to a source repository like GitHub.
- When a configuration change, update, or upgrade is required, it triggers a Jenkins job. The job takes the configurations from GitHub, and the product binaries (if required), product updates, and ESB artefacts from a Nexus repository maintained within your organization. From these files, a Docker image is created.
- This Docker image is deployed to the staging environment, and the containers are started according to the required topology or deployment pattern.
- Once the containers are up, the test scripts (Postman collections) are deployed to the test client and the testing process starts automatically via Newman.
- Once the tests have executed with clean results, the process moves to the next step.
- Docker images are created for the production environment, the instances are deployed to that environment, and the Docker containers are started based on the production topology.
With the above process flows, you can implement a CI/CD process for your middleware layer. Even though you could merge the two flows into a single process with a condition branching into two paths, keeping them as two separate processes makes maintenance easier.
If you are going to implement this type of CI/CD process for your middleware ESB layer, make sure that you are using the right ESB runtime with the following characteristics:
- Small memory footprint
- Quick startup time
- Immutable runtime
This Medium post describes a pragmatic approach to moving your middleware layer to microservices architecture, along with your CI/CD process.
This Medium post discusses a practical implementation of a CI/CD process along with WSO2 EI (ESB).
This Medium post discusses a practical implementation of a CI/CD process for WSO2 API Manager.
This GitHub repository contains the source code of a reference implementation.