Pressure rises on the team as we get closer to release. The developers are starting to forget what is going out Friday and focus on planning the next iteration. Management could be even further ahead. The only thing keeping us from moving forward is testing... and testing isn't moving fast enough.
At the end of a development cycle, it's all too easy to see things slowing down, at least from a certain point of view.
The Three Main Things
On an average day, testers spend most of their time on three different activities — testing, bugs, and setup (TBS).
T, Testing time — is the guts of what we do, and it is also where a lot of confusion gets introduced. When we go to a scrum to talk about what we're working on, most testers report status in terms of "I'm testing the new reporting feature" or "I'm building automation for the batch loading feature from last sprint". Those statements are accurate, sure, but they can also hide all the other work you had to do. If we want to get more specific, we can narrow testing time to only the time we spend evaluating the software. When I am looking at documentation and talking to product people about new changes to help design tests, that is test time. When I am working in the software, exploring and testing, that is test time.
B, Bugs — when we hit a bug, we switch from the essential work (we have to test) to something more accidental, created by a problem. If the problem didn't exist, after all, we wouldn't need to spend the time to reproduce it, to explore whether the problem is localized or a symptom of something bigger, to document it and advocate for the fix. Finding a bug breaks the flow of testing; it stops the work and slows test velocity, if you think of things that way. When I am testing and find something interesting, usually the first thing I do is try to recreate that situation. This is an instant slowdown in what I was doing because I have to retrace my steps. Sometimes the bug is simple and I can recreate it right away; other times the bug is clever and takes hours to figure out. After researching the bug comes the matter of reporting. Whether you are in the lucky circumstance of having a demonstration be enough, or have to write a full-fledged report in a tracking system, it isn't free. Bugs put a stop sign in front of testing activities, and we usually don't know when they are going to pop up.
S, Setup — instead of creating a start-stop experience like working on bugs, setup activities restrict the flow of work at the beginning, like a freeway on-ramp. Setup is everything I have to do before I can perform a test. In the simplest cases, I have used a tool like Excel to create data to either use with a script or load into the software myself. This kind of setup is quick, hopefully only taking a few minutes. On the other end of the scale are setup activities that take hours or days. In one case, I had to work with a developer for a couple of days to build data and then wrap that up in SQL scripts to get a system populated before we could do any meaningful testing.
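Simple setup like the spreadsheet example above can often be scripted so the cost is paid once. Here is a minimal sketch in Python; the field names and values are invented for illustration, and the real loader (bulk-import tool, SQL script, or the application itself) would be whatever your system provides:

```python
import csv
import io
import random

def build_test_customers(count, seed=42):
    """Generate deterministic fake customer rows for loading into a system under test.

    Seeding the random generator means every run produces the same data,
    so a retest starts from the same state as the first test."""
    random.seed(seed)
    rows = []
    for i in range(count):
        rows.append({
            "customer_id": 1000 + i,  # hypothetical field names
            "name": f"Test Customer {i}",
            "credit_limit": random.choice([500, 1000, 5000]),
            "region": random.choice(["north", "south", "east", "west"]),
        })
    return rows

def to_csv(rows):
    """Serialize the rows to a CSV string ready for a bulk-load tool."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

if __name__ == "__main__":
    print(to_csv(build_test_customers(5)))
```

The point isn't the specific fields; it's that a few minutes spent turning manual data entry into a repeatable script converts a per-test setup cost into a one-time cost.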
It is hard to get around setup cost the first time you are testing a new thing. If you are planning on retesting in the future, sometimes test management tools can help make things a little cheaper by running setup scripts on a schedule or storing special information for the next person to work in that area.
We usually don't get to pick where that time goes, though, and it is almost never evenly distributed. Test, Bug, and Setup work like a three-sided see-saw. When I spend a lot of time setting up data, there might be less time available to test and even less for reporting the problems I find. Getting these right is a balancing act.
If you want to see why testing is taking so long, take a look at all the other activities your staff are working on that aren't testing. That work is probably crucial to the project, adding information and facilitating testing, but you might be surprised at how much of it sits just below the surface.