Achieving continuous quality visibility during Continuous Integration is becoming more important every day. Release cycles are getting more frequent, there is much more to test in order to meet today’s coverage metrics (did anyone say “DIGITAL”?), and the number of test cases keeps growing (service logic is built to scale and requires us to test more components).
Walking the Agile path, we need to calibrate our efforts so that everything stabilizes within a sprint cycle. In simple words: we need to decrease risk levels across a shorter SDLC on the way to release.
Do Something Meaningful With Your Quality Insights
One of the main challenges organizations face today is fitting the quality feedback loop into this high-speed race. “Fail Fast – Fix Fast” is no longer a cliché, but a desired state for teams looking to find bugs early enough in the process to increase efficiency and ensure an optimized experience for their users. The equation is very simple: if you run tests just for the sake of running tests, you’re wasting time and money (not to mention the sustainable automation tests that talented people worked hard to build). You need to do something meaningful with the test results you get. Many teams merge code that contains defects in order to keep up the pace of continuous integration: “We run specific builds multiple times a day as we get closer to release. We cannot analyze thousands of tests in such a short time frame. So even if some builds fail, code is still pushed to the master branch (and hopefully/usually problems will be found in regression cycles later on…).”
Visibility Is a Big Expression – What You Really Need Is Actionable Data
Visibility is all about making “Go/No-Go” decisions on an educated basis: quality insights. A senior DevOps manager I met recently told me that in many cases, he feels his teams are shifting left, but their bugs are shifting right.
The long stabilization phase known from Waterfall practices is replaced in continuous integration with a managed process. “Continuous” means that we keep asking ourselves, “Is this thing going in the right direction?” with literally every new piece of code (just like building a house, where each brick relies on the one before it). If things break for some reason, we can quickly decide to revert to the last point where they were stable and take it from there. So while we keep building what is hopefully a solid code structure, we need the right visibility in motion at each and every decision point.
Green, Green, Green, RED! Trends Mean Something in Continuous Integration – Listen to Them!
As a manager, I’d first like to know whether my latest build passed or failed. At that point, I would usually compare the build status to previous executions (remember? We’re talking about a continuous process, step by step). But wait a second, how can I know that? Do I have to review all my builds one by one and manually track their history? That is not scalable.
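The comparison itself is easy to automate once you can pull a chronological list of build results from your CI tool’s API. Here is a minimal sketch (the `PASS`/`FAIL` status strings and the shape of the input are assumptions, not any specific CI tool’s format) that reports the current streak and flags when the latest build breaks a green run:

```python
def summarize_trend(statuses):
    """Given build results oldest-first (e.g. ["PASS", "PASS", "FAIL"]),
    report the latest status, the length of the current streak, and
    whether the latest build broke a run of green builds."""
    if not statuses:
        return {"latest": None, "streak": 0, "broke_green_streak": False}
    latest = statuses[-1]
    streak = 0
    for status in reversed(statuses):
        if status == latest:
            streak += 1
        else:
            break
    # The streak "broke green" if the latest build failed and the build
    # just before the current streak was passing.
    prior_green = len(statuses) > streak and statuses[-streak - 1] == "PASS"
    broke = latest == "FAIL" and prior_green
    return {"latest": latest, "streak": streak, "broke_green_streak": broke}
```

Feed this from your CI tool’s build-history endpoint and you get the “green, green, green, RED!” signal automatically instead of eyeballing builds one by one.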
A common test script structure includes a “try and catch” mechanism that helps us highlight exceptions very easily. The same concept should be adopted here: if you were doing something good (writing stable code) all day or week long, but skipping your afternoon coffee caused some focus issues (and potentially some bugs in your code), you should either drink more coffee, go home earlier (and know your productivity limits) or, more seriously, get back to the last stable position of your build and take things from there.
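For illustration, the “try and catch” pattern in a test script might look like this sketch, where each step is wrapped so a failure surfaces with context instead of silently sinking the build (the step names and `run_step` helper are hypothetical):

```python
def run_step(name, step, failures):
    """Run one test step, catching any exception so the failure is
    recorded with context rather than aborting the whole run."""
    try:
        step()
        print(f"[PASS] {name}")
    except Exception as exc:
        # Record which step failed and why, so the build report can isolate it.
        failures.append((name, repr(exc)))
        print(f"[FAIL] {name}: {exc}")

failures = []
run_step("login", lambda: None, failures)       # a step that passes
run_step("checkout", lambda: 1 / 0, failures)   # a step that raises
```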
Stabilizing your builds on the green side (“Pass”) should always be the top priority. To do that, you also need to know what is failing your tests, and isolate it.
That is easily said, but tracking it across multiple builds (which, in some cases, run a dozen times per day) on multiple digital platforms requires some computing power.
If you have spare time or an unlimited budget, you’d probably build your own dashboards. But if you are still reading these lines, maybe the result isn’t good enough or the process is hard to maintain. The good news is that help is on its way. The less-good news is that there are a few more challenges.
My Build Failed – But Why?
If your build failed and you were fortunate enough to get a Slack notification about it, you are now officially a member of the early defect identifiers club.
As you’re still required to run as fast as you can and continuously speed up, you need to find the needle in the needlestack. Let’s say your build runs a responsive web design test case on seven different digital platforms – can you QUICKLY pinpoint what failed your build? Is it a flaky test? An unstable environment (backend) that couldn’t be reached? A new browser version that is not yet fully covered by your service? Or maybe your iOS object locators weren’t defined properly?
At this point, we need to differentiate between recurring, standard continuous integration iterations, like smoke tests or short regression cycles, and builds that test new functionality during development (mainly unit tests). Some tests cannot tolerate the red color; in the case of a failure, you either fix the bug immediately or skip/comment out the problematic test if it is not important enough (the continuous SUCCESS of critical builds comes before any new functionality).
When a failure occurs in continuous integration, the clear objective is to eliminate false negatives ASAP. You can use a retry mechanism in your framework and check device-state-related issues, including network, app status, and device recovery. As a last resort, redundant devices can also be used to eliminate false negatives related to the device itself.
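A retry mechanism can be as simple as the following sketch: re-run a flaky test a few times before declaring a real failure. The attempt count and delay are illustrative assumptions, not recommendations for any particular framework:

```python
import time

def retry(attempts=3, delay=0.0):
    """Decorator: re-run a test function up to `attempts` times,
    pausing `delay` seconds between tries. Only if every attempt
    fails do we surface the failure as real."""
    def decorator(test_fn):
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return test_fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    time.sleep(delay)  # back off before the next try
            raise last_exc  # still failing after all attempts: a real failure
        return wrapper
    return decorator
```

Used on a test that fails intermittently because of environment hiccups, this filters transient noise out of your build status while still letting genuine regressions fail the build.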
Continuous Integration Release Cadences Are Rock-Solid – How Can I Make Sure I’ll Deliver on Time?
Yeah, yeah, I get it. You release on a quarterly basis; can you be more predictable than that? My answer is: YES.
Let’s say you release once a month and everything is known upfront. But what happens if a major bug is found in your service and you need to release a patch to production? How long will it take you to run a full regression suite? Can you state the exact duration of a smoke test running across your entire digital lab?
By sizing the major quality processes and tests that run on your digital lab, you can actually plan an optimized SDLC. You can be more accurate about deliveries (“are we about to release on time or not?”) and plan dedicated slots to test new functionality (instead of squeezing it in whenever free space allows).
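“Sizing” can start as simple arithmetic: sum the average per-test durations and divide by the parallelism of your lab. The durations and the perfect-load-balancing assumption below are illustrative only:

```python
import math

def estimate_cycle_minutes(test_durations_min, parallel_devices):
    """Naive wall-clock estimate for a test cycle: total test minutes
    split evenly across devices (assumes perfect load balancing)."""
    total = sum(test_durations_min)
    return math.ceil(total / parallel_devices)

durations = [3, 5, 2, 8, 4, 6]  # hypothetical per-test average minutes
print(estimate_cycle_minutes(durations, 1))  # serial run: 28 minutes
print(estimate_cycle_minutes(durations, 4))  # 4 devices in parallel: 7 minutes
```

Even a rough number like this turns “can we ship the patch today?” from a guess into a calculation, and shows how adding lab devices shortens the cycle.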
Finally, by making this deep dive part of a manageable process, you may find that your regression cycle can be shortened (since the process is much more effective). That creates room for additional development and testing of new functionality.
Perfecto’s DigitalZoom Reporting introduces a new Continuous Integration dashboard for optimized quality management in your DevOps process. All the challenges mentioned above are addressed easily and without hassle; the dashboard provides real-time tracking of your builds’ quality status and trends, and fits perfectly into automation at scale.
Continuous integration users can view an aggregation of test result statuses, identified by job name and build number as generated by their continuous integration (CI) tool. The dashboard does not require any special implementation; it only needs the job name and build number identifiers.