
In Rapid Software Testing, Nothing is Obvious


Believe none of what you see, and none of what you test. Bring your strongest magnifying glass to this post to make sure you don't miss a thing.


[Image: the text-message "test" from the author's 10-year-old son]


I found the timing of this text from my 10-year-old son amusing, as less than 24 hours earlier, I'd completed Michael Bolton's intensive "Rapid Software Testing" (RST) course, which he recently gave at Tricentis' HQ in Vienna, Austria. I left the final day of that inspiring course wondering, "Maybe I should be a tester."

Armed with three full days of exercises, lectures, heuristics, analyses, and questions that challenged our entire class's idea of what software testing is and isn't, I chuckled at the adorable test my son drew up for me.

I quickly typed out "Top row, 4th column. BAM!" He replied:

"But there were two."

I then instantly spotted the second "bug" that I'd overlooked the first time, and asked myself, "What did I actually learn if I'd failed to apply any of it at the first opportunity that came my way?" Looking back over the 17 pages of notes I'd taken over the RST course, I was able to give myself a bit of a break when I noticed that one of the last things I'd jotted down was:

We can, and should, become better testers every day.

Bolton spent the week sharing insights that help us make this a feasible, and rewarding, endeavor. Here's a recap of some of the course's most powerful points.

"Everything is obvious when you know something already. Nothing is obvious when you know nothing."

When Bolton told us this at the very beginning of Day One of our class, I had an admittedly "Yeah, so?" reaction. I assumed this statement meant that the benefit of "knowing" something about a product would make everything obvious to a powerful software tester. But by the end of the course, especially after failing to spot the "obvious" bugs my son planted into his text, it hit me that almost the entire three-day RST course served as proof of the importance of these two double-edged lines.

If testers quickly judge a piece of software as "simple," or nearly identical to an app they've tested or used before, they're inadvertently allowing past experiences to influence their decisions about what to test, and what not to test. By relying on what they think they know, whether that assumption comes from personal experience, or what they've been told or even promised, testers run a serious risk of limiting the scope of their exploration and letting bugs remain in a product.

When faced with the above test from my son, I fell, quite spectacularly, into every trap that Bolton had urged us to steer clear of:

  1. I pre-judged the software's creator, telling myself, "He's only 10, and, being my son, I kinda know everything about him. This will be easy."
  2. I only looked for one bug...because he said there was only one bug.
  3. When I found a bug in less than two seconds, I reached a premature definition of "done," with more than enough time left to test, as I'd been given no deadline.

My poor testing was a blinding example of the complete fallacy of "everything is obvious when you know something already." Everything I thought I knew suckered me into failing this simple exercise, and by being a bit more skeptical, curious, and investigative, as great testers are, I would've greatly increased my odds of passing this test.
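The second trap, stopping at the first bug because I was told there was only one, is easy to reproduce in code. Here's a purely hypothetical sketch (the word list and the planted misspellings are invented; nothing here comes from the course) contrasting a checker that declares "done" at the first find with one that keeps looking:

```python
# Invented example: scanning a short text for unknown words, two ways.
KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps"}

def find_first_bug(words):
    """Stops at the first problem found -- a premature 'done'."""
    for i, word in enumerate(words):
        if word not in KNOWN_WORDS:
            return [i]
    return []

def find_all_bugs(words):
    """Keeps testing after the first find."""
    return [i for i, word in enumerate(words) if word not in KNOWN_WORDS]

text = ["the", "quikc", "brown", "foks", "jumps"]  # two planted bugs
print(find_first_bug(text))  # [1] -- misses the second bug entirely
print(find_all_bugs(text))   # [1, 3] -- finds both
```

The two functions cost almost the same to write; the difference is purely the assumption about when to stop.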

Confusion Can Feel Uncomfortable, But It Can Also Be A Powerful Tool

It's safe to say there were numerous times during our RST course that confusion filled the room. Bolton asked us to test painfully simple-looking apps with vague instructions like, "This app should give you the answer '4' when you type in '2 + 2'...you have 10 minutes...go." We were asked to make a mind map for all the qualities of a wine glass. We were even told to play an infuriatingly vague dice game where we had to guess the correct pattern and follow a set of rules that we were "welcome to break," but then told to follow when we attempted to break them.
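The course's apps aren't public, so as a purely hypothetical stand-in, here's the kind of thing a lone "2 + 2 should give 4" check can hide. The `add` function and its planted bug are invented for illustration:

```python
# Invented example: an "adder" that passes the one stated check but is broken.
def add(a, b):
    # Planted bug: only small operands are handled correctly.
    if a <= 2 and b <= 2:
        return a + b
    return a + b - 1  # subtly wrong everywhere else

# The only check we were told to make -- it passes, so are we "done"?
assert add(2, 2) == 4

# A little exploration beyond the stated case:
cases = [(0, 0), (2, 3), (100, 200), (-1, 1)]
failures = [(a, b) for a, b in cases if add(a, b) != a + b]
print(failures)  # → [(2, 3), (100, 200)]
```

The stated check alone would never surface those failures; only wandering past it does.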

At the start of each one of these exercises, I was thrilled to see I wasn't the only one scanning the room for the comfort of finding other completely confused faces. But as Bolton described, it's this confusion that results in incredibly valuable approaches to solving incredibly difficult challenges: finding problems, or even what only might be problems, buried deep within the software in front of us. Or, in some cases, these problems may not be hiding at all, but are invisible behind a tester's preconceived ideas of where they're "supposed" to look for them.

Our initial responses to the confusing and uncomfortable exercises given on Day 1 were often to stare unproductively at our screens, hoping something would come to us. But by Days 2 and 3, when these exercises had grown even more confusing and complex, this silent uneasiness had vanished. We eagerly embraced that confusion by asking great questions to learn more about the application and its users. We sought out the ideas of others in the room, and we formed groups made up of numerous skill sets and backgrounds. And, perhaps most importantly, we allowed ourselves to wander, with purpose, into areas of an application we hadn't thought to explore on our own earlier in the week.

Successful Stumbling vs. Intentional Probing

Bolton and James Bach (RST's original developer) teach RST as a "mental discipline which applies to anything," and part of that discipline is a clear understanding that strict, unbreakable rules are far from what results in great software testing. "So much of testing is finding things by accident," said Bolton.

This statement, like each of those highlighted above, was brilliantly revealed in multiple exercises throughout the course. I accidentally double-clicked a button that only needed a single click...and it broke. My partner's finger slipped, hitting a 4 instead of the 3 we were supposed to be testing...and we got an error. Our class accidentally re-tested a function that worked fine the first time...and it failed the second time. We repeatedly discovered problems through what Bolton described as "successful stumbling," where performing only "intentional probing" would've let the problems slide right into production.

In closing, my fear in trying to condense the highlights of Bach and Bolton's Rapid Software Testing course into a single blog post was that I would leave readers asking, "Is that it? Three highlights from over 24 hours of instruction?" But then I found one last piece of advice Bolton reiterated throughout our time together:

A tester must seek the underlying complexity that lies behind the illusions of simplicity. A tester's job is to gather evidence and draw inferences, not to believe what he/she is told.

Everything above is only one person, giving one opinion. So, ask questions, and be skeptical, because nothing should be obvious yet.



Published at DZone with permission of
