Smart Continuous Delivery Using Automated Testing
In response to accelerated release cycles, a new set of testing capabilities is now required to deliver quality at speed. This is why there is a shake-up in the testing tools landscape—and a new leader has emerged in the just released Gartner Magic Quadrant for Software Test Automation.
[This article was originally written by Vincent Riou.]
This post builds on a recent one highlighting recommendations on how to simplify your unit testing by using the right set of tools (Smart Integration Testing with Dropwizard, Flyway and Retrofit).
As a company like Logentries rapidly grows and its number of product features increases, an important question arises around maintaining the highest level of quality and user experience. That level is usually reached when the company delivers new capabilities regularly to its user base without impacting legacy features, and when issues that are identified can be managed and resolved quickly and definitively.
I believe that the shortest path to achieving this is using automated testing, continuous integration, and code-quality control tools. Such tools help you identify issues early and deal with them before they reach the later stages of your deployment pipeline. This timing matters, since we all know that the later an issue is identified in the cycle, the more costly it is to deal with. Another benefit of writing automated tests for your software as part of Test Driven Development is that these tests can also be run as part of later release cycles to catch regressions.
In this post, I want to highlight three steps that can be very valuable in your release planning process.
Acceptance Testing

The idea with acceptance testing is to write tests (or behavioral specifications) that describe the behavior of your software in a language which is not code but is more precise than standard English (which you can use in your requirements/stories). Additionally, those tests can be run using a parser which allows you to easily match your language to functions in the programming language of your choice. What is very important here is that those tests are written first and used to drive the rest of your development process (they drive unit tests and therefore coding). It is also worth noting that a good user story is testable with acceptance criteria; the role of acceptance tests is to confirm the story was implemented correctly.
The overhead introduced by adding acceptance tests can initially seem high (versus unit tests only), but the benefits introduced are potentially huge. Here are some of the benefits that acceptance tests give you:
- Identify missing elements in your requirements/stories early (before implementation starts) by realizing that without them you can't describe the expected behavior (a new error message, a new setting, etc.).
- Provide traceability from detailed requirements to tests, as these acceptance tests are your detailed requirements.
- Identify early whether a new feature introduces behaviors that conflict with a legacy one.
- Avoid regressions on the features you've already described (once your acceptance tests run as part of your automated release process).
- Reproduce, fix, and verify bugs faster using your acceptance test vocabulary (and once a test is fixed, it automatically provides a regression check against that bug for all future releases).
- Refactor your code without worrying about side effects, e.g. forgetting to handle a code dependency (if all your tests pass, your software is fit for purpose). This includes changing elements as fundamental as swapping one third-party library for another, switching database implementations, or even changing programming languages (which would mean losing your unit tests, for example).
- Let non-developers read the acceptance tests, and even modify and run them if needed. This can be very helpful for your support or product management team.
- Reuse the functionality built in acceptance tests for smoke testing, testing a production setup, or even performance testing.
- Increase productivity by reducing the amount of time-consuming, error-prone manual testing.
Now that we've seen the benefits, let's write a very simple acceptance test using Gherkin to get a feel for it. The purpose of this test is to fully describe a well-known program.
Feature: Say Hello to the World!
Scenario: Hello World!
When I start my software
Then it should have said 'Hello World!'
Once this is written, I have determined the boundaries of my software. With a simple Cucumber implementation of these Gherkin steps, followed by the correct set of unit tests in the language of my choice, I will be able to start the Hello World! implementation.
At any stage now, I could decide to choose another programming language, and the only change to my acceptance test framework would probably be the implementation of the 'When' step which starts my program.
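To make the step-matching idea concrete, here is a minimal hand-rolled sketch in Python of what a Cucumber-style runner does under the hood: regular expressions map each Gherkin line to a function. In a real project you would let Cucumber (or a port like behave) provide this wiring; everything here, including the inline "software" being tested, is illustrative.

```python
import re

# Registry of step definitions: (compiled regex, handler function).
STEPS = []

def step(pattern):
    """Decorator that registers a handler for a Gherkin step pattern."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

class World:
    """Shared state passed between the steps of one scenario."""
    output = ""

@step(r"When I start my software")
def start_software(world):
    # For this sketch, "my software" is inlined; switching the program's
    # language would only change this one step's implementation.
    world.output = "Hello World!"

@step(r"Then it should have said '(.+)'")
def should_have_said(world, expected):
    assert expected in world.output, f"expected {expected!r}, got {world.output!r}"

def run_scenario(lines):
    """Match each scenario line against the registered steps and run it."""
    world = World()
    for line in lines:
        for pattern, fn in STEPS:
            match = pattern.match(line.strip())
            if match:
                fn(world, *match.groups())
                break
        else:
            raise ValueError(f"no step definition matches: {line!r}")
    return world

scenario = [
    "When I start my software",
    "Then it should have said 'Hello World!'",
]
run_scenario(scenario)  # raises AssertionError if any step fails
```

Note that the 'Then' step's capture group is passed straight into the handler, which is exactly how the acceptance-test vocabulary stays reusable across scenarios.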
Now that we have Hello World software which is fully described by acceptance tests (I suspect you'll quickly have more than one test in your suite), we need to make sure these tests are used wisely.
The first step is to ensure that developers use them to drive which unit tests are needed and therefore what gets implemented (see Test Driven Development practices).
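As an illustration of that drive from acceptance test down to unit test, the 'Then' step above might lead a developer to write something like the following. The greeting() function name is hypothetical; the point is that the unit test mirrors the acceptance test's expectation at a finer grain.

```python
import unittest

# Hypothetical unit under test: the acceptance test's 'Then' step drove
# us to implement a greeting() function (the name is illustrative).
def greeting():
    return "Hello World!"

class TestGreeting(unittest.TestCase):
    def test_says_hello_world(self):
        # Mirrors the acceptance test's expectation at the unit level.
        self.assertEqual(greeting(), "Hello World!")

# Run the suite programmatically so it can be embedded in a build step.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGreeting)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```
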
Continuous Integration

The next important step is that those tests need to be run by your continuous build tool (for example, at Logentries we use Jenkins). Since you're trying to catch failures as early as possible, it is a good idea to catch them before they even make it to your staging branch. For example, if you're using Git, you would trigger a Jenkins build for every merge request and have Jenkins vote +1 if all the tests pass or -1 if there are failures. This way, you keep your staging branch regression-free and therefore always in a state where you can push to your staging/pre-production setup, ready for your manual/visual/creative testing phase.
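The voting rule itself is trivial to express. A sketch, assuming the CI job can see the run's pass/fail counts (the function name is illustrative, not a real Jenkins API):

```python
def vote_on_merge_request(tests_run, tests_failed):
    """Return the vote a CI job would post on a merge request:
    +1 keeps the staging branch regression-free, -1 blocks the merge."""
    if tests_run == 0:
        # No tests ran at all: treat as a failure, not a silent pass.
        return -1
    return 1 if tests_failed == 0 else -1
```

The empty-run guard is worth calling out: a misconfigured job that runs zero tests should block the merge rather than wave it through.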
Static Code Analysis Tools
Just a quick word about these tools, since they work on an aspect of your code that sometimes cannot be caught by functional testing (acceptance, unit, or other). They are key to improving the readability and maintainability of your code by enforcing best practices and pointing out potential weaknesses in your code structure. Examples of such tools used at Logentries include pylint, JSHint, Cobertura, and Sonar.
One of the benefits is that they help your team write code in a consistent style, with a number of checks ensuring that recommended practices and coding standards are followed. Every developer gets the same code layout and a baseline level of quality from the rest of the team as they write, which allows greater focus on other issues during code review.
I would suggest that you integrate one or more such tools into your continuous build tool and start setting failure thresholds. The simplest approach is to set the threshold so that the values never go up, so each build is at least as good as the previous one. You can then work with your team to bring those values down at a speed that makes sense to you. In no time, you could be working with clean, readable, standardized code, and this can only add to your release speed and confidence.
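The "never go up" rule can be sketched as a simple ratchet: compare the current warning count against a stored baseline and fail the build on any increase, tightening the baseline whenever it improves. The baseline file name and the way you obtain the warning count (e.g. from pylint's output) are assumptions here, not part of any specific tool.

```python
import json
from pathlib import Path

# Hypothetical baseline file, committed alongside the code.
BASELINE_FILE = Path("lint_baseline.json")

def check_ratchet(current_warnings, baseline_file=BASELINE_FILE):
    """Fail the build if warnings increased; record improvements so each
    build is at least as good as the previous one."""
    if baseline_file.exists():
        baseline = json.loads(baseline_file.read_text())["warnings"]
    else:
        baseline = current_warnings  # first run establishes the baseline
    if current_warnings > baseline:
        raise SystemExit(
            f"build failed: {current_warnings} warnings > baseline {baseline}"
        )
    new_baseline = min(baseline, current_warnings)
    baseline_file.write_text(json.dumps({"warnings": new_baseline}))
    return new_baseline
```

In practice you would call this at the end of the CI job, feeding in the warning count parsed from your static analysis tool's report.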
Hopefully, you are now getting ready to try out some of these practices; if you're already using them, I'm hoping they are making your job a lot easier!
Published at DZone with permission of Trevor Parsons, DZone MVB. See the original article here.