Give some badly broken software to a neophyte, ask them to “play with it”, and they will likely be able to find some bugs. Sadly, you won’t know if those bugs matter, if they are the right bugs, or how well the tester covered the application.
One classic “solution” to this problem is to get people to run the same scripts every time, but scripted tests introduce other problems, enough that we recommend something else: Train the testers.
That brings us to the second problem. Testing is a skill.
Imagine trying to learn to play golf, or tennis, or basketball in a day or two, in an intense bootcamp.
Don’t get me wrong; we like bootcamps, and we run them. A new golfer can learn the rules, while players who already know the rules can shave a few strokes off their game. But the weekly golf game has pressure, which makes it hard to step back and look at how things went. Most of software development, like competitive sports, is performance. It is possible to learn from performance, but it is not nearly as effective as deliberate practice. Shaving those few strokes off the game might take fourteen months … if it happens at all.
Recommending training is a small conflict of interest for me. Excelon gets about 10% of its annual revenue from training, perhaps a little more if you count conferences. We also put our money where our mouth is; this May Justin and I are flying to Orcas Island, off the Washington coast, to attend James Bach’s Workshop on Reinventing Software Testing.
But we can’t train all the time.
So let’s talk about three steps of tester development, and how to get there.
Wow, There Is More to Being an Exploratory Tester Than I Thought
My friend Robert Sabourin, an adjunct professor of Software Engineering at McGill University and a consultant at Amibug, once told me that the main benefit of the big conferences for new attendees is simply realizing how much there is to learn. That includes learning that simplistic approaches to testing fail – that there are more sources of problems than “the programmer screwed up” and “unclear requirements” – and that complete testing is impossible (podcast) (video).
That means terms like “it works” are dangerous. If we are aware of this, we realize the promises exploratory testing can keep – and those it can’t. The Association for Software Testing’s Black Box Software Testing Foundation course adds on the mission problem – knowing what role test is asked to perform on this project, today, and the oracle problem – having tools to identify and define problems as problems, especially when testing without a map.
The Rules of the Game
Once we realize that complete testing is impossible, we come to the selection problem – how do I find the most powerful test ideas, that will give me the most coverage, in the least time?
This is where test design comes in. A one-day class can introduce a half-dozen techniques, from state transition diagrams to pairwise testing to decision trees – covering a book like Lee Copeland’s Practitioner’s Guide to Test Design. Or, perhaps, it can cover how to report bugs so they are easy to understand, or how to document what we tested with lightweight coverage notes.
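To make one of those techniques concrete: pairwise testing assumes most bugs are triggered by the interaction of at most two parameters, so instead of running every combination, you pick a small set of test cases such that every pair of values appears together at least once. Here is a minimal sketch of a greedy all-pairs selector in Python – my own illustration under that assumption, not an excerpt from any of the books or courses mentioned above (the parameter names are invented for the example):

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise test-case selection.

    params: dict of parameter name -> list of possible values.
    Returns a list of full combinations (dicts) that together cover
    every pair of values across any two parameters.
    """
    names = list(params)
    # Every value-pair that must appear in at least one test case.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    # All full combinations are candidates; we will keep only a few.
    candidates = [dict(zip(names, combo)) for combo in product(*params.values())]
    tests = []
    while uncovered:
        # Greedily pick the candidate covering the most still-uncovered pairs.
        def gain(case):
            return sum(
                ((a, case[a]), (b, case[b])) in uncovered
                for a, b in combinations(names, 2)
            )
        best = max(candidates, key=gain)
        tests.append(best)
        uncovered -= {
            ((a, best[a]), (b, best[b])) for a, b in combinations(names, 2)
        }
    return tests

# Hypothetical example: 2 browsers x 2 operating systems x 2 languages
# is 8 full combinations, but far fewer cases cover every pair.
suite = all_pairs({
    "browser": ["chrome", "firefox"],
    "os": ["win", "mac"],
    "lang": ["en", "fr"],
})
```

A greedy selector like this is not optimal – commercial and open-source pairwise tools do better – but it shows the core trade: a small loss of combination coverage for a large cut in test count.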
And of course there are techniques for slicing and dicing the test work: prioritizing it, and assigning it to different testers to prevent (or guarantee!) overlap. This is also when people tend to learn tools, and how they are used in the workplace.
Once we get past the process, there is the long, hard road of application – of learning to do the work well.
Another colleague of mine, Dwayne Green, holds a one-hour session with the testers in his company each week. The team might spend the hour discussing a chapter of a book, running through a Pluralsight course, working through some Black Box Software Testing course material, or, yes, running a test exercise. That might be a bug hunt on a new product, something like parkcalc, or a theoretical exercise about the pursuit of value.
Here’s what I mean by that: find the “best” hotel in a specific area on a specific date using hotels.com or booking.com. You’ll notice that what is “best” depends on the customer: is it a business traveler? A family? How do you measure the value of an included breakfast, a heated pool, an airport shuttle? A lot of that value comes out in a back-and-forth between the tester, customer, and programmer. If you’d like a specific exercise, I haven’t booked my hotel yet for QualityJam in April in Atlanta. Pick one person to play me and another to play the tester, and explore the interview process — mindfully.
Good testers challenge each other with these sorts of exercises as a habit. Some teams develop formal systems, like a once-a-week meeting; others rely on informal methods, like desk conversations, emailing links to articles, and after-work games and challenges.
Everything At Once
It’s tempting to look at my list as three levels, like a video game. That’s not what I’m suggesting. Instead, we work on all these levels at once. As a consultant, sometimes I need a reminder that the problem is bigger than I realize – a reminder of my own ignorance!
The important thing here is to develop a plan to develop yourself and your team. Start the professional-development snowball, point it downhill, and every now and again, give it a little push.
It’s hard to cover all of skill development in one article, but I hope today gave us a start, a taste, of how skill development differs from cranking out the test work.
Looking for more information on Exploratory Testing? Here are a couple other great blog posts: Is all Testing Exploratory Testing? and The Qualitative & Intangible Benefits of Exploratory Testing