Making the Case for Complete CI/CD Automation
If you're considering implementing or refocusing your CI/CD, these survey results from high and low performers can help you get started.
Where are you with CI/CD? Chances are, you're probably asking yourself one of these questions:
- If you’re not already doing CI/CD, should you start now?
- If you’re already doing CI/CD but it’s not living up to your expectations, should you refocus resources to make it a priority?
In this post, we’ll explore some of the key performance indicators from both developers and managers who are leveraging continuous integration and continuous deployment to accelerate the process of building and releasing software.
While some already understand the benefit of CI/CD and automating “all the things,” we’ve collected a compelling set of metrics to analyze how the DevOps and automation movement is impacting the software development life cycle (SDLC). Hold tight while we digest findings and statistics from studies published over the past three years.
Continuous Integration/Continuous Deployment
David Farley, co-author of Continuous Delivery, perfectly sums up CI/CD. He writes “Releasing software should be easy. It should be easy because you have tested every single part of the release process hundreds of times before. It should be as simple as pressing a button. The repeatability and reliability derive from two principles: automate almost everything, and keep everything you need to build, deploy, test, and release your application in version control.”
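Farley's two principles, automate almost everything and keep everything in version control, are exactly what pipeline-as-code tools express. As a minimal, hypothetical sketch (GitLab CI is used only because GitLab comes up later in this post; the stage names and `make` targets are placeholders, not anything from the surveys), a `.gitlab-ci.yml` committed alongside the application source might look like:

```yaml
# Hypothetical pipeline definition, versioned with the code it builds.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - make build        # placeholder build step

test:
  stage: test
  script:
    - make test         # every commit runs the full test suite

deploy:
  stage: deploy
  script:
    - make deploy       # releasing is "as simple as pressing a button"
  only:
    - main              # deploy only from the mainline branch
```

Because the pipeline itself lives in version control, the release process is exercised on every commit, which is where the repeatability and reliability Farley describes come from.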
CloudBees partnered with Hurwitz and Associates to survey 150 IT decision makers across a wide range of industries, including technology, manufacturing, financial services, education, healthcare, retail and several more. More than 50% of respondents reported they were using continuous integration company-wide, and slightly less than half of the same respondents reported they were utilizing continuous delivery, which is the process of automating the release and deploy stages of the DevOps Infinity Loop, alongside continuous integration.
Just because many organizations are already using CI/CD doesn’t mean that they are successful. The “State of DevOps” report has an interesting way of modeling the data for CI/CD success indicators. They break the categories down into High Performers, Medium Performers, and Low Performers. Within each of those categories, they determine which parts of the CI/CD process are still done manually.
CI/CD Differentiators for High/Medium/Low Performers
| Survey Questions | High Performers | Medium Performers | Low Performers |
|---|---|---|---|
| Deployment frequency | On demand (n per day) | Once per week/month | Once per week/month |
| Lead time for changes | < 1 hour | > 1 week and < 1 month | > 1 week and < 1 month |
| MTTR | < 1 hour | < 1 day | > 1 day and < 1 week |
| Change failure rate | 0-15% | 0-15% | 31-45% |
The biggest differentiator here is MTTR (mean time to recover) for high performers versus medium and low performers. High performers reported taking less than one hour to identify and resolve service degradation, which likely indicates they have the right tools in place to quickly diagnose incidents and determine their root cause.
While the change failure rate appears to be the same for high and medium performers, keep in mind that high performers are pushing significantly more changes to production while keeping their failure rate on par with medium performers. Monitoring certainly plays a major role here, but it is a solid testing and release pipeline that gives high performers the ability to increase release velocity while keeping the change failure rate low.
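To make the two metrics in that comparison concrete, here is a small sketch of how they are computed. The deployment records are invented for illustration; only the formulas (failed deploys over total deploys, and mean time to restore across failures) reflect the definitions used in the table above.

```python
from datetime import timedelta

# Hypothetical deployment log: (succeeded?, time to restore service if it failed)
deployments = [
    (True, None),
    (False, timedelta(minutes=45)),
    (True, None),
    (True, None),
    (False, timedelta(minutes=30)),
]

# Change failure rate: failed deployments as a share of all deployments.
restore_times = [t for ok, t in deployments if not ok]
change_failure_rate = len(restore_times) / len(deployments)

# MTTR: mean time to restore service, averaged over the failures.
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(f"Change failure rate: {change_failure_rate:.0%}")  # 40%
print(f"MTTR: {mttr}")                                    # 0:37:30
```

The point of the table is that high performers hold the first number at 0-15% even as the denominator (deployments) grows by orders of magnitude.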
Percentage of Application Delivery Work Still Performed Manually
| | High Performers | Medium Performers | Low Performers |
|---|---|---|---|
| Change approval process | 48% | 67% | 59% |
What’s apparent from these numbers is that organizations that implement some measure of automation in the software development lifecycle see meaningful results. The KPIs tell a compelling story: the organizations that automate more simply perform better, with more frequent releases, shorter lead time for changes, lower MTTR, and plummeting failure rates.
In a report fielded by GitLab in 2018, 60% of high performers reported that automating more of the SDLC was a priority in their organization. While it’s not conclusive, 38% of respondents considered their organization to be “failing” at DevOps, and I’d suggest that organizations which make the effort to improve automation around the SDLC will see their developers take a more positive outlook on the state of automation.
The Role of Observability in CI/CD
The Atlassian survey made several interesting observations. Nearly two-thirds of respondents reported that they actively monitored applications and infrastructure, while roughly half monitored transactions and the user experience. Nearly one-third reported that their monitoring solution did not make them aware of potential issues before users were impacted, which correlates with the number of respondents who indicated that customers were the first to report service impacts.
In the Developer Survey by GitLab, low performers reported that 58% of developers were unsure of how their changes affected the performance of their application(s), and in the same report only 42% said that developers in their organization used monitoring tools. This is in stark contrast to high performers, where 62% of respondents felt their developers used monitoring tools and understood the impact of changes made to their applications.
I believe that while organizations have application and infrastructure monitoring in place, many of them don’t monitor what really matters: transactions and the user experience. This is evident by the number of organizations who wait until their customers report service degradation or outages. Why aren’t companies monitoring these critical pieces of information? Is it because this data can be overwhelming, hard to understand or instrument? You may find some answers to these questions in a colleague's article on how CI/CD needs Continuous Monitoring.
We’ve covered automation thoroughly in this post, but so far none of these reports asked how much time companies spend defining and configuring alerts, or how much effort engineers expend instrumenting their applications. Has it been conceded that engineering must define every alert, configure every collector, and instrument every bit of application code that should be monitored? Surely, there is a solution for automating all of these tasks so that manual monitoring processes do not slow down the CI/CD pipeline.
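One answer that already exists today is to treat monitoring configuration itself as code, so that alerts ride through the same pipeline as the application. As a purely hypothetical illustration (Prometheus is not discussed in this post, and the metric name and thresholds below are invented), an alert rule kept in version control might look like:

```yaml
# Hypothetical Prometheus alerting rule, stored in the repo and
# deployed by the same pipeline that ships the application.
groups:
  - name: service-health
    rules:
      - alert: HighErrorRate
        # Fire when more than 5% of requests return a 5xx over 5 minutes.
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Error rate above 5% for 10 minutes"
```

Reviewing and versioning alerts like this removes at least some of the manual configuration burden described above, even if instrumenting the application itself still takes engineering effort.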
Here’s to Continued CI/CD Success
Based on the metrics above, one area primed for disruption is change management, which both high and low performers identified as the least automated process. Organizations tend to have proprietary processes and stages for change management, but given the current automation frenzy, I predict we will see leaders emerge to tackle this problem head-on.
The drive to build, integrate, and continuously improve CI/CD systems is irrefutable. It’s no longer a secret: automation is critical for organizations to become and remain competitive in today’s market. We will also continue to see improvement in how flexibly widely adopted solutions integrate with one another, as can already be observed in the wild with Jenkins X, a CI/CD tool built specifically to integrate with Kubernetes.
Published at DZone with permission of Kevin Crawley, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.