Accelerating DevOps Adoption Using Containers
DevOps done right is a powerful revenue driver. Getting it right means automating the application lifecycle in three phases: deploy, operate, and optimize.
A recent study by CA Technologies, “DevOps: The Worst-Kept Secret to Winning in the Application Economy,” found that DevOps delivers 18% faster time-to-market and 19% better application quality and performance. According to the study, respondents experienced 14 to 21 percent improvements in business outcomes, in the form of more customers, faster time-to-market, and improved application quality and performance. So there is no longer any doubt that, done right, DevOps is a powerful revenue driver. But getting it right is not always easy: organizations must weigh factors such as cultural and organizational change, technology adoption, and skill development. Adding to the confusion, different people define DevOps differently.
But whether it is developers doing operations through automation, or developers and operations teams collaborating closely to deliver applications, one thing everyone agrees on is that DevOps is about automation, at both the infrastructure layer and the application layer. In this post, I will focus on the application layer and discuss the requirements for automating the entire application lifecycle using application containers.
Why Automate the Application Lifecycle?
Over the last few years, cloud adoption has exploded and Infrastructure as a Service (IaaS) has become the norm within enterprises. Developers no longer have to wait months to get access to infrastructure resources to develop and test their applications. But this dynamic, on-demand infrastructure, along with the need for agility, scale, and resiliency, is forcing application architectures to change. A new architectural style, commonly referred to as microservices architecture, has emerged and is quickly gaining mindshare.
While adopting microservices architecture enables faster code deployments, and provides better scalability and resiliency compared to traditional, monolithic designs, the operational complexity of the application increases substantially. A microservices-style application can have several loosely coupled services that operate autonomously. With the increase in the number of ‘moving parts’, deploying and operating such an application in the cloud becomes challenging and requires intelligent automation.
For traditional applications as well, automation can speed up the development cycle by reducing deployment times, and improve overall resiliency by reducing recovery times after failures. Investing in automation upfront lets teams benefit immediately from the agility while preparing for a possible transition to a distributed, microservices-style architecture.
Containers to the Rescue
Traditionally, applications have been packaged and delivered in virtual machines. But, with the adoption of microservices architectures, application containers are becoming the de facto way to deploy services. Containers are lightweight, quick to start, and provide an elegant packaging for application services. Container engines such as Docker and rkt are gaining tremendous popularity among developers building microservices style applications. Containers are the new unit of application deployment and management, and are quickly becoming the foundation for application lifecycle automation.
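As a concrete illustration, packaging a service in a container image can be as simple as a short Dockerfile. The sketch below assumes a hypothetical Python service (`service.py`, listening on port 8080); the base image, file names, and port are placeholders, not prescriptions:

```dockerfile
# Build a small, self-contained image for a hypothetical Python service.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how the service runs.
COPY . .
EXPOSE 8080
CMD ["python", "service.py"]
```

Building with `docker build -t my-service .` then produces an image that can be deployed identically in development, test, and production, which is exactly what makes containers a useful unit of lifecycle automation.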
The Application Lifecycle
Once an application is packaged in containers, various phases of the application lifecycle can be automated. At Nirmata, we define the application operations lifecycle in three phases: Deploy, Operate, and Optimize.
Deployment — Deploying, or orchestrating, a microservices-style application is a complex task. Each application has different requirements, so an orchestration tool needs to be flexible and should accommodate various deployment models. Developers can leverage standalone orchestration tools such as Mesos, Docker Swarm, and Kubernetes, or adopt management solutions such as Nirmata or Docker Cloud to deploy their applications. In addition to deploying application containers, orchestration also involves provisioning the compute, storage, network, and security policies required for the application. Over the lifespan of an application, a significant amount of time will be spent deploying various services, so orchestration should be quick and deterministic. Depending on the application, scale is another factor to consider.
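One small piece of what "deterministic orchestration" means can be sketched in a few lines: given declared dependencies between services, compute a repeatable order in which to start them. The service names and dependencies below are hypothetical, and the sketch uses Python's standard-library `graphlib` rather than any real orchestrator's API:

```python
from graphlib import TopologicalSorter

# Hypothetical microservices and their dependencies: a service should only
# be deployed after the services it depends on are up.
dependencies = {
    "web": {"api"},
    "api": {"db", "cache"},
    "db": set(),
    "cache": set(),
}

def deployment_order(deps):
    """Return a deterministic, dependency-respecting deployment order."""
    return list(TopologicalSorter(deps).static_order())

order = deployment_order(dependencies)
print(order)  # e.g. db and cache first, then api, then web
```

Real orchestrators handle far more (placement, networking, restarts), but the same principle applies: the deployment plan should be derived from declared state, not from manual sequencing.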
Operations — While there is a lot of focus on container orchestration, automating the on-going management of your application is equally important. Several management tasks can be completely automated:
- Monitoring: Monitoring application containers provides insight and visibility into your application and is crucial for further optimization. Based on the reported metrics, you can automate tasks such as scaling up a service or provisioning additional infrastructure resources.
- Alerts: Alerts draw attention to unexpected events within your application. For example, an alert can be issued when an application container terminates unexpectedly, and automated actions can be taken to remediate the failure.
- Activity: Tracking various system and user activities can help reduce the time it takes to troubleshoot issues.
- Log management: Automating the collection and analysis of application logs helps speed up troubleshooting any application issues.
- Upgrade: As enhancements are made to various microservices, automating the upgrade of these services is necessary. For example, when a new version of a microservice is available, a rolling upgrade can be performed so that there is no interruption in service. And if an upgrade is unsuccessful, a rollback should be initiated automatically.
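The rolling-upgrade-with-rollback behavior described above can be sketched as a small control loop. Everything here is illustrative: the `deploy` and `healthy` callables stand in for whatever your orchestrator or management tool actually provides, and the in-memory "instances" simulate a service where version 2.0 is artificially broken on one replica:

```python
def rolling_upgrade(instances, old_version, new_version, deploy, healthy):
    """Upgrade instances one at a time; if any instance fails its health
    check, roll every upgraded instance back to the old version."""
    upgraded = []
    for inst in instances:
        deploy(inst, new_version)
        if healthy(inst):
            upgraded.append(inst)
            continue
        # Health check failed: revert this instance and all prior ones.
        deploy(inst, old_version)
        for done in upgraded:
            deploy(done, old_version)
        return False
    return True

# Toy stand-in for an orchestrator: "deploying" just rewrites a version
# field, and version 2.0 is (artificially) unhealthy on instance svc-1.
instances = [{"name": f"svc-{i}", "version": "1.0"} for i in range(3)]

def deploy(inst, version):
    inst["version"] = version

def healthy(inst):
    return not (inst["version"] == "2.0" and inst["name"] == "svc-1")

ok = rolling_upgrade(instances, "1.0", "2.0", deploy, healthy)
print(ok, [i["version"] for i in instances])  # upgrade aborted, all on 1.0
```

The key property is that a failed upgrade leaves the system in its previous known-good state without human intervention, which is exactly what this phase of lifecycle automation is meant to guarantee.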
Optimization — Once deployment and management of the application are completely automated, application performance can be further optimized through analytics and operational insights by creating a feedback loop. For example, if the number of request failures increases after a new feature is deployed, the change can be automatically rolled back, reducing the impact. Another example is automatic provisioning of infrastructure resources in case of resource failures or increased load.
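Such a feedback loop can be sketched as a simple policy function that maps reported metrics to a remediation. The metric names and thresholds below are purely illustrative assumptions, not values from any real monitoring system:

```python
def next_action(baseline_error_rate, current_error_rate, cpu_utilization,
                error_threshold=0.05, cpu_threshold=0.80):
    """Pick a remediation from reported metrics (thresholds illustrative)."""
    if current_error_rate - baseline_error_rate > error_threshold:
        return "rollback"    # new release degraded quality: undo it
    if cpu_utilization > cpu_threshold:
        return "scale_up"    # healthy but overloaded: add capacity
    return "steady"          # no action needed

print(next_action(0.01, 0.12, 0.40))  # error rate jumped after a deploy
print(next_action(0.01, 0.02, 0.91))  # errors stable, but load is high
```

In practice the policy would be richer (trends over time, per-service thresholds, gradual rollouts), but the structure is the same: metrics in, automated action out, with no operator in the loop for routine decisions.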
As the focus of DevOps shifts from infrastructure to application lifecycle automation, there are several things to consider. Significant agility can now be achieved by completely automating the application lifecycle. But the effort to build, integrate, and operate the tooling can be challenging and take up precious time and resources. The good news is that a rich ecosystem is evolving around application containers, providing not only choice and flexibility but also a significantly lower learning curve. End-to-end application automation is no longer beyond reach, but are you ready?
Is your team considering application containers? Are you adopting microservices architecture? Is application lifecycle automation a priority for your DevOps team? Would love to hear your thoughts.
If you are interested in learning more about accelerating DevOps by automating the application lifecycle using containers, please attend the webinar: Accelerating DevOps: Automating the application lifecycle using containers.
Published at DZone with permission of Jim Bugwadia, DZone MVB. See the original article here.