How Agile Changes Testing (Part 2)
The second part in Johanna Rothman's series on agile testing.
In part 1, I discussed the project system of agile. In this part, I’ll discuss the need for testing documentation.
In a waterfall or phase-gate life cycle, we needed documentation because we might have had test developers and test executors. In addition, we might have had a long time separating the planning from the testing. We needed test documentation to remember what the heck we were supposed to do.
In agile, we have just-in-time requirements. We implement — and test — those requirements soon after we write them. When I say “soon,” I mean “explain the requirements just before you start the feature” in kanban, or “explain the requirements just before you start an iteration” if you timebox your feature development. That’s anywhere from hours to a few days before you start the feature. We have a cross-functional team whose members work together. They provide feedback in the moment about the features in development and test. The most adaptable teams are able to work together to move a feature across the board.
In waterfall or phase-gate, we might have had to show the test documentation to a customer (of some sort). In agile, we have working software (deliverables). What you see and measure changes in agile.
In waterfall or phase-gate, people often defined requirements as functional and non-functional. I bet you have read “The system shall” until you fell asleep. In agile, we often use user stories — or something that looks like them — to define requirements via a persona, a specific user. We know who we are implementing this feature for, why they want it, and the value they expect to receive from the feature.
You’ve noticed I keep saying, “implementing” in these posts. That’s because for me, and for many agile teams, implementing means writing code-and-tests that might ship and code-and-tests that stay. The developers write the code and tests that ship. The testers write code and tests that stay.
The developers and testers are part of a product development team. They may well be attached to their roles. For example, when I am a dev, I don’t test my work nearly as well as I test other people’s work. Not all devs want to write system tests. Not all testers want to write product code. That’s okay. People can contribute in many ways.
The key is that developers and testers work together to create features. They need each other. What does that mean for test planning?
- You might need something about the test strategy in the project charter. I often recommend that the team think about scenarios they want to test every day as they build.
- You might need guidance for the testers: “We use this pre-configured database for this kind of testing.” Or, “We test performance once a day with pre-specified scenarios,” or whatever you need as team norms.
- You do not need to separate the test planning from the testing. You might decide to automate tests you will use over and over. You might decide to explore/use manual testing for infrequent tests. This is a huge discussion that I will not delve into in this series. Make a conscious decision about automation and exploration.
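Team norms like the pre-configured database above can live as shared, executable test setup rather than as a standing document. Here is a minimal sketch in Python; all names and the seed data are hypothetical, with a plain dictionary standing in for the real database:

```python
# Hypothetical team norm: every tester starts from the same seed data,
# instead of each person hand-building fixtures.
SEED_CUSTOMERS = {
    "alice": {"plan": "pro", "active": True},
    "bob": {"plan": "free", "active": False},
}

def make_test_database():
    """Return a fresh, independent copy of the agreed-upon seed data."""
    return {name: dict(record) for name, record in SEED_CUSTOMERS.items()}

def count_active(db):
    """Example query a test might make against the seed data."""
    return sum(1 for record in db.values() if record["active"])

# A test that relies on the shared norm, not on ad-hoc data.
db = make_test_database()
assert count_active(db) == 1
```

Because each test gets a fresh copy, one test mutating its data cannot surprise the next — the norm is enforced by code, not by memory.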
I have yet to see a humongous document of test cases be useful, even in a waterfall team. That’s because we learn as we develop the software. The document is outdated as soon as the requirements change even a little. If you need to document test cases (for traceability), it’s easy to document them as you write and test features.
This changes the testers’ job. Testers partner with developers. They test, regardless of whether the code exists. They can write automated tests (in code or in pseudo-English) in advance of having code to test. They provide feedback to developers as soon as the test passes or fails.
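Writing a test before the code exists can be as simple as stating the expected behavior as an executable check. A sketch of that test-first rhythm in Python, with hypothetical names and a price expressed in cents to keep the arithmetic exact:

```python
# Tests written first, before the feature code exists.
# The feature and its rules here are invented for illustration.
def test_discount_applies_to_pro_plan():
    assert price_after_discount_cents(1000, plan="pro") == 900

def test_no_discount_for_free_plan():
    assert price_after_discount_cents(1000, plan="free") == 1000

# Minimal implementation written afterwards, just enough to pass:
# pro-plan customers get 10% off, everyone else pays full price.
def price_after_discount_cents(base_cents, plan):
    return base_cents * 9 // 10 if plan == "pro" else base_cents

test_discount_applies_to_pro_plan()
test_no_discount_for_free_plan()
```

Until `price_after_discount_cents` exists, the tests fail — and that failure is the fastest possible feedback to the developer about what the feature must do.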
When I was a tester, I checked in my code and told the developers where it was so they could run my code. I then explored the product and found nasty hairy defects. That was my job. It wasn’t my job to withhold information from the developers.
The testers’ job changes from judging the quality of the product to providing information about the product. I think of it as a little game: how fast can I provide useful feedback when I test?
This game means that as a tester, I will automate test cases I need to run more than once. I will automate anything that bores me (text input, for example). I will work with developers so they create hooks into the code for me to make my test automation easier. I have this conversation when we discuss the requirements.
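Automating the boring text-input checks often amounts to a small data-driven loop: each new case costs one line, so exhaustively covering tedious variations stops being tedious. A sketch, where the validator and its rules are hypothetical:

```python
# Data-driven check of a hypothetical input validator:
# each case is (raw input, expected cleaned value).
CASES = [
    ("  alice  ", "alice"),  # surrounding whitespace trimmed
    ("BOB", "bob"),          # case folded
    ("", None),              # empty input rejected
    ("   ", None),           # whitespace-only input rejected
]

def normalize_username(raw):
    """Hypothetical feature code: trim and lowercase, reject blanks."""
    cleaned = raw.strip().lower()
    return cleaned or None

for raw, expected in CASES:
    assert normalize_username(raw) == expected, (raw, expected)
```

The hooks I ask developers for serve the same purpose: a callable entry point like `normalize_username` lets me drive hundreds of inputs through the code without touching a UI.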
Just as the stories describe the features and the code implements them, my tests describe my test cases.
In agile, we use deliverables — especially in the form of running tested features — to see progress throughout the project. Test cases lose their value as a deliverable.
Published at DZone with permission of Johanna Rothman, DZone MVB.