Imagine you have expertise that could benefit another division in your company — a few days' work could save a few million dollars. Except the HR software you use doesn't allow your team to charge-back the other division, so you can't do the work.
Of course not. That's crazy. You'd do the work anyway and collect the bonus, right?
Except that does happen, exactly like that, all the time. Each little compromise we make to adjust to our tools has a cost — and those can add up to millions.
It shows up in testing, too. Let's talk about how that plays out with tools that automate checking and manage test cases, and what to do about it.
You'd think that each time a tool can't cover a particular check, the tester would first do the check by hand, then add that risk to a list. Periodically, the team would pull the list of risks not covered by automation off the shelf, dust it off, sort it by risk exposure, and invest some time and effort into reducing those risks.
Sadly, they don't. In my experience, there is no list. Instead, the scope of risks people see becomes limited to the risks the tool is capable of addressing. It is as if the tool were forcing the brain into what Nobel Prize winner Daniel Kahneman calls "System 1" thinking, where What You See Is All There Is, or WYSIATI. When you're driving a car down a long stretch of road, WYSIATI means you miss the oncoming deer, because you aren't looking out for it.
In software, it means bugs slip through.
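For what it's worth, the risk list described above need not be heavyweight. Here is a minimal sketch in Python of what such a register might look like; the risk entries, field names, and 1-to-5 scales are all illustrative assumptions, not any particular tool's schema:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One risk that automation does not cover."""
    description: str
    likelihood: int  # 1 (rare) to 5 (frequent) -- assumed scale
    impact: int      # 1 (trivial) to 5 (severe) -- assumed scale

    @property
    def exposure(self) -> int:
        # A common simple model: exposure = likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical entries for illustration only.
uncovered = [
    Risk("Layout breaks on narrow screens", likelihood=3, impact=2),
    Risk("Charge-back between divisions fails", likelihood=2, impact=5),
    Risk("Stale cache served after deploy", likelihood=4, impact=3),
]

# Pull the list off the shelf: highest exposure first.
for risk in sorted(uncovered, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:2d}  {risk.description}")
```

A spreadsheet would do just as well; the point is that the list exists somewhere outside the tool, so the risks the tool can't see don't disappear.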
Testers who have been around will be familiar with the inattentional blindness test; if you aren't, try watching the video. In testing, we typically talk about how running the same script over and over again causes the brain to miss anything the script doesn't make explicit. Yet there is another risk: a hidden pull in some test case management systems.
The Pull of Scripting
Most modern test case management tools are sort of a Swiss army chainsaw: they claim to do everything. And perhaps they can. Yet hidden in the tool there is probably a dominant philosophy, and that philosophy shapes everything from the names of the buttons ("object repository," "test case suite") to the workflow itself.
Legacy tools, tools born in the age of the explicit test case, will have a certain workflow. They'll expect you to do things a certain way. Sure, you can take the "test case" and call it a charter — but there will still be fields to fill out with steps to follow.
This has impacted me personally, even on teams I joined specifically to introduce a style of testing with more exploring and less following directions. Even when we were consciously pulling the steps of a business process out and putting them in a different knowledge system, I still felt the pull of the scripting side. I would "wake up" in the middle of a planning session to find myself typing steps in increasing detail, when what I really needed to do was leave creativity and autonomy to the person doing the work, who likely would not be me.
Tools impact our thinking and can lock us into patterns.
There are two lessons to learn from this. First, spend some time finding out the limits of your tools, and make sure you address them. Second, and perhaps more importantly: if tools limit your thinking, you are better off picking the least limiting ones. Missing the dancing bear because you're human is one thing.
But missing it because you chose to wear blinders?
That's just silly.