What Makes Testing in Agile Successful?
The advent of agile software development was a watershed moment for testing. To move fast, testing had to be automated, a fundamental shift in the role of the tester.
The following article is a guest post to Zephyr from Hans Buwalda, CTO, LogiGear. LogiGear is a partner of Zephyr and provides leading-edge software testing technologies and expertise, along with software development services that enable its customers to accelerate business growth while having confidence in the software they deliver.
Agile is one of the greatest developments in the history of computing. It made a 180-degree turn from what was commonplace in software development before.
One of the effects of Agile was a shift in the vision on testing and test automation. Where traditionally it was open to discussion whether to automate tests, Agile projects, in particular when they also involve DevOps, rely heavily on automation. It is seen as a must-have for most of the tests. An exception is "exploratory testing", which focuses on the effective involvement of manual testers, who explore and learn about an application and in that way identify possible issues.
The most common and successful form of test automation is unit testing, something that is usually done by developers. In essence, a developer writes a test function for each function in the application under test. By nature, unit testing is highly automated and can usually be maintained easily to accommodate changes in what is being tested.
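As a minimal sketch of what such a test can look like, consider the following Python example; the calculate_premium function, its formula, and its values are hypothetical and included only to make the example self-contained.

```python
# Toy implementation, standing in for a function in the application under test.
def calculate_premium(principal, rate_percent):
    return principal * rate_percent / 100.0

# One test function per application function, checking known inputs
# against expected outputs (runnable with pytest).
def test_calculate_premium_basic():
    assert calculate_premium(200_000, 1.5) == 3_000.0

def test_calculate_premium_zero_rate():
    # Edge case: a zero rate should yield a zero premium.
    assert calculate_premium(200_000, 0) == 0.0
```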
The fun starts with testing that goes beyond the individual functions, often referred to with names like integration testing and functional testing. In particular, when the tests need to interact with the UI (user interface), their automation tends to be complex and sensitive to changes.
The two main recommendations are:
- organize and design your tests well
- maximize cooperation in the team
Organize and design
While much attention is paid to the functional and technical design of an application, tests tend to take a backseat. I write about this a lot. If tests are poorly designed, their automation is not going to be a success, even if the automation engineers involved are very skilled. That puts the ball for test design firmly in the court of the testers, who in turn might not always have a design background. In Action Based Testing, we describe how modularization and keywords can be used to make tests easier to create and understand, and also easier to automate and maintain. Our TestArchitect product is designed to optimize the use of this approach.
The main point made in ABT is that tests should be structured in "test modules". These modules are like labeled buckets into which the teams put their tests. Even though in a typical agile project most of the application development and testing work is done in sprints, I do recommend establishing the buckets early. This is not a task that takes much time, in particular if you make a good split between "business tests" and "interaction tests". The business tests are the realm of domain experts. They will, for example, verify the calculation of a mortgage premium, and will not show detailed navigation actions. Interaction tests verify whether a user (or other system) can interact with the application. Checking whether a button works or whether a menu has the correct items are interaction tests. Both categories can be equally important, but they should live in different test modules. See this template for test modules.
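To illustrate the split, here is a hand-rolled Python sketch (not TestArchitect syntax); the FakeUi class, the keyword names, and the premium formula are all hypothetical, chosen only to show how a business-level check can stay free of navigation detail.

```python
# Stand-in for a real UI driver, only here to make the example self-contained.
class FakeUi:
    def __init__(self):
        self.fields = {}

    def set_field(self, name, value):
        self.fields[name] = value

    def click(self, button):
        if button == "calculate":
            self.fields["premium"] = self.fields["principal"] * self.fields["rate"] / 100.0

    def get_field(self, name):
        return self.fields[name]

# Interaction-level keywords: they know about screens, fields, and buttons.
def enter_loan_details(ui, principal, rate_percent):
    ui.set_field("principal", principal)
    ui.set_field("rate", rate_percent)
    ui.click("calculate")

def read_premium(ui):
    return float(ui.get_field("premium"))

# Business-level keyword: expresses the domain check, no navigation detail.
def check_mortgage_premium(ui, principal, rate_percent, expected):
    enter_loan_details(ui, principal, rate_percent)
    assert read_premium(ui) == expected

# A line in a business test module then reads like a domain statement:
check_mortgage_premium(FakeUi(), 200_000, 1.5, expected=3_000.0)
```

With this split, a UI change only touches the interaction keywords, while the business test modules keep reading as domain statements.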
Maximize cooperation in the team
Cooperation can make a significant difference in testing and automation success. The product owners, and more broadly the domain experts and expert users, can help a lot in identifying and understanding what needs to be tested. For the business-level tests this can often be done surprisingly early in a sprint. Not knowing the details of the interaction at that point in time is actually a bit of an advantage: the testers will not be tempted to include interaction details in the business tests, something that all too often happens in projects and makes tests less maintainable. The testers in turn, with their unique QA perspective, can help clarify much of the functionality early. Where developers will often ask "what needs to be developed?", testers will (or should) ask "what can go wrong?"
Developers can help in many ways, most importantly by helping make an application "testable". A good architecture with well-defined components, tiers, and services can help testers structure their tests and automation well. Defining invisible identifying properties for UI or web elements, like "id" or "name", will make it easy for automated tests to interact with them, and will keep the impact of UI changes on the automation manageable. It can also make a big difference for automated tests if developers provide "white-box" access. Examples are hooks with which a test can see whether a control is ready or what data is being displayed in a chart.
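As an example of why those identifying properties matter, the following Selenium sketch contrasts an id-based locator with a brittle positional XPath; the URL and element id are hypothetical.

```python
# Minimal Selenium sketch: locating an element by a developer-provided "id".
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/orders")  # hypothetical page

# Stable: keeps working across layout and styling changes as long as the id is kept.
submit = driver.find_element(By.ID, "submit-order")

# Brittle alternative: breaks as soon as the surrounding structure changes.
# submit = driver.find_element(By.XPATH, "/html/body/div[3]/div/form/button[2]")

submit.click()
driver.quit()
```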
A key success factor in making cooperation work well is to keep the QA members of a team in sync with the rest of the team. If they still have to worry about the design or automation of tests from past sprints while the rest of the team has moved on, cooperation becomes harder, which in turn will lead to even more tests not being "done" at the end of sprints. Apart from the recommendations above, some form of outsourcing can help greatly. LogiGear testing and automation engineers, for example, can do much of the catch-up work, while QA keeps ownership but also stays in sync. They can also help with related activities like managing test execution on multiple environments. The workforce can be varied over time based on what a collection of agile teams needs from sprint to sprint, providing a rapid response that is itself agile as well.
However you do it, paying attention to how tests are organized and designed, and to how the various players cooperate, can contribute greatly to the success of automated testing in agile projects.
About the Author: Hans Buwalda, CTO of LogiGear, is a pioneer of the Action Based and Soap Opera methodologies of testing and automation, and lead developer of TestArchitect, LogiGear's keyword-based toolset for software test design, automation, and management. He is coauthor of Integrated Test Design and Automation, and a frequent speaker at test conferences.