Over the last few years, a lot has changed. We moved our development methodology from Agile to DevOps, we went from the Continuous Integration era to the Continuous Delivery/Deployment world, and we now give more importance to measurement through insights and KPIs. A lot of the credit for this goes to the following factors:
People With the Right Mindset — This helped teams collaborate, coordinate, and work together to deliver the services customers want and drive business results.
The Project With the Right Processes — This helped ensure that planning and risk assessment are done to get things “FIRST TIME RIGHT”. This does not just improve the work itself; it also improves how you work, by increasing feedback loops and enabling rapid iteration and innovation, all while improving stability.
Automated Deployments — This helped ensure that no artifacts are deployed manually across environments. With deployment pipelines in place, binaries, once built in DEV, are deployed across all environments up to production.
Still, this is not enough for foolproof, error-free delivery. The answer lies in how you manage your infrastructure. Some of the major challenges we face today while building an efficient, stable, and reliable infrastructure are:
Predictability — How to ensure that the same components and artifacts that were built and tested in non-prod environments are promoted to production, to avoid any failure that could result in business impact.
Disaster Recovery — Whether your infrastructure is hosted on-premise or in the cloud, there is always a risk that something fails, and you need to recover your servers from that disaster quickly. Backups exist, but there is no guarantee they will bring your systems back to normal, as they are rarely well tested, and the additional time and effort it takes to restore the environment to its original state only adds to the impact.
Ability to Roll Back — Similar to an application rollback, how can we ensure we can roll back the entire infrastructure in case of any environmental issues? This ability also gives us the confidence to experiment with new versions of software, the operating system, or dependencies.
Operational Complexity — With the rise of distributed service architectures and auto-scaling, using traditional methods for applying patches or updating configurations across multiple instances is difficult, error-prone and time-consuming.
Solution — Immutable Infrastructure
What Is Immutable Infrastructure?
Immutable infrastructure is an approach in which we replace the application and its infrastructure instead of updating them. All the modules required to build an environment are replaced with every new deployment rather than being updated in place. This includes everything required to have the application up and running, e.g., environment configurations, infrastructure configurations, build artifacts, etc.
Every deployment is done by building new images, so history is preserved automatically for rollback if required. The same process and automation used to deploy the next version can also be used to roll back, which ensures that the rollback process will always work.
A decade ago, these challenges were difficult to overcome, but the disruption brought in by software-defined infrastructure gives us the flexibility to eradicate these issues.
Let me explain this by sharing the requirements from one of our recent implementations and how building immutable infrastructure helped us resolve all our issues and achieve our goals:
- How to simplify our operations. The solution should not add complexity and should not depend on specific people.
- How to enable frequent releases without worrying about dependencies.
- How to enable auto-scaling to avoid outages during peak hours.
- How to certify that configurations across all nodes are synced at the same time.
- How to ensure consistency across all environments, so that the exact same configurations and artifacts are deployed from DEV to PROD.
- How to respond quickly to new security threats.
Tools and Platforms Used
Red Hat OpenShift – We used the OpenShift Environment (OSE) as our Platform as a Service (PaaS), which provides the following key features:
Easy deployment and scaling.
Container management via Kubernetes.
Docker – We used Docker to build, ship, and run containers. It is the world’s leading software container platform and helps eliminate “works on my machine” problems.
IaC (Infrastructure as Code) – We developed Infrastructure as Code (IaC) that is used to create and set up a new OSE environment in a couple of minutes.
Jenkins 2.0 – Jenkins is the most widely used CI/CD tool across DevOps projects worldwide. We use it to integrate IaC into our deployment pipelines, which, as part of the application deployment process, replace the entire infrastructure with a new one.
- Base Docker images: Base Docker images for every application, with all the required libraries, OS packages, and software installed.
- Private Docker registry: A registry server to store all the base Docker images as well as the new Docker images produced by the build process.
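To make the base-image idea concrete, an application image in this model is built on top of a shared base image and layers only the build artifact on it. The following Dockerfile is a minimal sketch; the registry URL, base image name, and artifact path are illustrative assumptions, not taken from the article.

```dockerfile
# Hypothetical application image built on a shared base image
# stored in the private registry (names are placeholders).
FROM registry.example.com/base/openjdk:11

# Copy the artifact produced by the CI build stage
COPY target/app.jar /opt/app/app.jar

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

Because the base image carries the OS and libraries, patching them means rebuilding this image from an updated base rather than modifying running containers.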
OpenShift (Key Layers)
At a minimum, you need the following to run your application (pods) in your OSE environment.
- Route – An OSE route is used to expose a service to the outside world so that external clients can use it to access your application.
- Service – Think of a service as an internal load balancer. If your application is running multiple pods, the service manages traffic between those pods, and internal communication between different applications in an environment also happens through services.
- DeploymentConfig – A DeploymentConfig is the master configuration file for an application that defines and manages all the configurations, resources, limits, image information, number of replicas, etc.
- ImageStream – This is used to reference and resolve an image by its image stream and image name.
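The four layers above are plain YAML objects. The following is a trimmed, illustrative sketch of what they might look like for a hypothetical application called `myapp`; names, ports, and labels are assumptions for illustration, and `apiVersion` values may differ by OpenShift release.

```yaml
# Illustrative OSE objects for a hypothetical app "myapp"
apiVersion: v1
kind: Route
metadata:
  name: myapp
spec:
  to:
    kind: Service
    name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: ImageStream
metadata:
  name: myapp
---
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  strategy:
    type: Rolling        # rolling update strategy
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest
          ports:
            - containerPort: 8080
```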
Our IaC framework consists of the following:
- Templates – All the above-mentioned layers (route, service, deploymentconfig, imagestream) are YAML-based configuration files, so we created templates for each of them for all our applications.
- Configurations – We created environment- and application-specific configuration files, per application per environment, with key-value pairs.
- Ant scripts – We built our build framework using Ant scripts, which, on execution, use the configurations to replace placeholders in the templates with actual values from the key-value pairs. On successful execution, we get the route, service, deploymentconfig, and imagestream files with updated values for our applications.
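Conceptually, the substitution step is simple: every placeholder token in a template is replaced by the matching value from the environment's key-value configuration. Here is a minimal sketch of that idea in Python (the actual framework uses Ant scripts; the `@KEY@` placeholder syntax and the keys shown are assumptions for illustration):

```python
# Sketch of the template-substitution step: replace @KEY@ placeholders
# in a YAML template with values from a per-environment config.
# Placeholder syntax and keys are illustrative assumptions.

def render_template(template: str, config: dict) -> str:
    """Replace every @KEY@ token in the template with config[KEY]."""
    for key, value in config.items():
        template = template.replace(f"@{key}@", str(value))
    return template

template = """\
kind: Service
metadata:
  name: @APP_NAME@
spec:
  ports:
    - port: @APP_PORT@
"""

# Per-application, per-environment key-value pairs
config = {"APP_NAME": "myapp", "APP_PORT": 8080}

print(render_template(template, config))
```

Running the substitution per environment is what lets one template produce consistent, environment-specific YAML from DEV to PROD.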
We use Pipeline as Code for our deployment pipelines. It has two parts:
- Build process – As part of the build process, base Docker images are pulled from the registry server, and on compilation, new Docker images are created for our application, which go through all the scanning, analysis, and validation.
- Deployment process – As part of the deployment process, the deployment pipeline calls the IaC framework, which creates new YAML files for the route, service, deploymentconfig, and imagestream from the templates. It then deletes all the existing configurations for the application in the environment and replaces them with the newly built ones, which is what makes our pods, and our environment, immutable. Deployment happens using the rolling update strategy.
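The two-part pipeline could be sketched as a declarative Jenkinsfile like the one below. This is a hypothetical outline, not the project's actual pipeline: the stage layout, registry URL, Ant invocation, and `oc` commands are all assumptions standing in for the real implementation.

```groovy
// Illustrative Jenkinsfile; image names, paths, and commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the app image on top of the shared base image and push it
                sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
                sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                // IaC framework renders route/service/deploymentconfig/imagestream
                sh 'ant -f iac/build.xml render -Denv=dev'
                // Replace the existing objects with the freshly generated ones
                sh 'oc delete -f generated/ --ignore-not-found'
                sh 'oc create -f generated/'
            }
        }
    }
}
```

The key design point is that the deploy stage never patches live objects; it deletes and recreates them from generated YAML, so every deployment starts from a known state.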
Let’s look at how immutable infrastructure helped us meet all our client’s requirements and how it benefits everyone involved in the application lifecycle:
1) Simplifying operations — We no longer have to keep track of the current state of the pods running in an environment before deploying a new version, and we no longer have to worry about pods being in a broken state, because we replace them rather than fix them.
2) Frequent releases — As part of a continuous delivery process, deploying applications via immutable infrastructure has become one of the best practices for application releases. Integrating IaC with deployment pipelines gives us the ability to recreate the entire environment in minutes. This lets us roll back and push deployments to production in a couple of minutes and has dramatically increased the entire team’s confidence to test deployments without fear.
3) Auto-scaling — Container management is the key feature in OSE and gives us the ability to quickly spin up new application containers during peak load.
4) Managed configurations — Since all the pods are created at the same time as part of the application deployment process, we don’t have to worry about whether configurations are synced across all nodes.
5) Consistency — Since we build the image only once and deploy the same image from DEV to PROD, we can trust the systems to work and behave the same way in each environment.
6) Security — It’s easier to respond to security vulnerabilities: since replacing all our images in any environment is now routine, all we have to do is update our base image and trigger the deployment pipeline.
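The auto-scaling mentioned in point 3 can also be expressed declaratively. A hypothetical HorizontalPodAutoscaler for the DeploymentConfig might look like this; the name, replica bounds, CPU threshold, and `apiVersion` values are assumptions and may vary by OpenShift release.

```yaml
# Hypothetical HPA targeting the DeploymentConfig; values are illustrative.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
```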
I would like to summarize this blog by stating that now is the time for every organization to make its infrastructure immutable. Immutable infrastructure truly disrupts the way applications are deployed: it increases reliability, efficiency, robustness, and consistency within a deployed environment, and environments can be recreated in minutes.