One Simple Trick to Make the Business Interested in Your Acceptance Tests!
Here's the secret: the "business guys" are just like you. They simply don’t want to invest time in activities that don’t seem essential for your business to run.
Many books on effective writing have been published, yet instead of looking for one of them, you arrived here.
You chose an article that promises you “one simple trick” and a “5-minute read” instead of hours of research! Why? How did that happen? You’re not lazy, you’re not ignorant, and you definitely care about your business and your organization enough to look for ways to improve them! Could it be that you simply prefer to read something short and concise before you invest your time in further research? Especially when the return on your investment seems neither obvious nor quick at first glance?
So here’s the secret, believe it or not: “the business guys” are just like you. They’re not lazy, they’re not ignorant and, just like you, they’ve got enough on their plate already. They simply don’t want to invest time in activities that don’t seem useful for them to do and essential for your business to run.
Think about it: having to sift through long and complicated test plans, convoluted test scenarios, and execution reports of half-failing, flaky tests that are difficult to relate to the business does not sound like a good use of anyone’s time! So what’s the simple trick I promised to get “The Business” interested? Make your acceptance tests follow my “Pirate Rule” and you’re done:
Useful acceptance tests are Accessible, Relevant, Reliable. ARR! How do you do that?! That’s another question.
The Big Picture
Acceptance tests are relevant when their goal is clear and their business value obvious.
Think about how you learn. It usually helps to start with the big picture first and then break it up into manageable pieces.
The same holds true for how you communicate the scope, coverage, and results of your acceptance tests to the business.
Imagine a Product Owner who receives a flat list of hundreds or thousands of test scenarios with the question, “Is that what you wanted for the next release?” Hell no, who in their right mind would ever want to receive that list? I’ll agree to anything, just make it go away! (Sadly, a true story.)
Or even worse, “here are the test scenarios, 7% of which are failing. Do you still want us to deploy to prod tonight?” What 7%? ‘Some database integration tests’? Yeah, whatever, it doesn’t seem like anything important and it’s just 7%, right? (Again, a true story.)
“The whole is other than the sum of its parts.” — Kurt Koffka
“More data” does not mean “more information” and the way you present information to the business matters.
We already know that asking a business person to review a long flat list of test scenarios (maybe even conveniently exported as an Excel spreadsheet) is not the best way to go.
That’s why tools like Cucumber allow you to introduce another level of abstraction: features.
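For instance, a Cucumber feature file groups related scenarios under a single named feature (the feature and scenario below are illustrative, not taken from any real system):

```gherkin
Feature: Making a purchase

  Scenario: Customer checks out a single item
    Given Jane has added a wireless mouse to her basket
    When she checks out using her saved credit card
    Then she should receive an order confirmation
```

The feature title already communicates more to a business reader than any individual scenario in the list below it.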
Thinking in terms of features the system provides rather than individual scenarios makes it easier to reason about the system. However, it’s still too low-level to be business-friendly.
Consider an e-commerce system, for example.
Such a system might have tons of features: some of them related to finding the right product and making a purchase, some responsible for the supply chain management, billing, shipping, returns, marketing campaigns, etc.
Do you see what I just did there? When I gave you the executive summary of the features the system provides, I grouped them by business capabilities they enable:
- Finding products
- Making a purchase
- Managing supply chain
- Managing product returns
- Marketing campaigns
I found that introducing this level of abstraction enables much easier communication with the business. It helps you talk about the system you’re building using vocabulary from the business domain and focus primarily on the capabilities a business needs to achieve their goals. Of course, how those capabilities will be provided (features) is also important, but the initial focus should be on why (business goals), and what (business capabilities).
Working with this level of abstraction also helps business people make decisions; consider the difference between “7% of tests are failing and we should fix them before we release” and “there are problems with ‘shipping’ and we should fix them before we release.” Which do you think will be more interesting to the business and more likely to get their attention?
Pro tip: If you’re using JUnit with Serenity BDD to drive your tests and generate living documentation, adding business capabilities to your test execution report is actually pretty easy:
- Place the JUnit test classes under packages corresponding to the business capability name, such as 'managing_supply_chain', inside a top-level package such as 'com.my_organisation.my_system.features'.
- Tell Serenity where to look for those packages ('serenity.properties'):
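As a sketch, the relevant entry in 'serenity.properties' points Serenity at the root package of your capability packages (verify the exact property name against the version of Serenity you’re using):

```properties
# serenity.properties
# Root package under which the capability packages
# (e.g. managing_supply_chain) live
serenity.test.root=com.my_organisation.my_system.features
```

With this in place, Serenity derives the capability hierarchy from the package structure when it aggregates the report.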
Serenity BDD maps packages of features to business capabilities
Failing for the Right Reasons

Acceptance tests are reliable if they fail only when the observable behaviour of the system has changed.
Presenting acceptance tests in the context of business capabilities they exercise is critical to making them relevant and interesting to the business. However, if you want to make them truly useful, they also need to be reliable.
When business people see that your tests are “flaky” and give inconsistent results, their trust is lost and it’s going to be difficult to rebuild it.
It doesn’t matter if it’s because of the test data, unavailable or poorly managed environments, five hundred people making changes to the system while you’re running your tests against it, or whatever else. All of those come across as excuses if the only information you give the business is that sometimes the tests pass and sometimes they fail.
Ideally, the system under test should be isolated, the test data and environments should be under your control and the tests should only ever fail if the observable behaviour of the system has changed. Sometimes, however, especially when working with legacy systems, this may not be the case.
In those situations, it’s important to clearly communicate the reasons for those intermittent failures. Again, ideally in the context of the business capabilities that can no longer be verified.
Consider the difference between “the tests always fail because the environments are rubbish” and “we can’t verify if ‘shipping’ works; the tests fail because the system is experiencing performance issues.”
Which do you think is more likely to attract attention to the importance of fixing the underlying performance problem?
Pro tip: Serenity BDD uses the idea of “semantic exceptions” to make it easier to report these sorts of problems. Semantic exceptions help highlight “compromised” tests that fail because of underlying environmental issues or unmet preconditions. This helps to differentiate tests that should be improved from those that fail for the right reasons.
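To my knowledge, Serenity ships a 'TestCompromisedException' for exactly this purpose (check the current Serenity docs for the class and package). The classification idea itself can be sketched in plain Java, without the library, to show how a reporter might tell “behaviour changed” apart from “couldn’t verify”:

```java
// Sketch of the "semantic exception" idea, modelled without Serenity.
// The exception and class names here are illustrative, not Serenity's API.

// Thrown when a precondition (environment, test data) is unmet,
// so the test outcome says nothing about the system's behaviour.
class TestCompromisedException extends RuntimeException {
    TestCompromisedException(String message) {
        super(message);
    }
}

public class ShippingAcceptanceCheck {

    // FAILED means the observable behaviour changed;
    // COMPROMISED means an environmental issue prevented verification.
    enum Outcome { PASSED, FAILED, COMPROMISED }

    static Outcome runShippingCheck(boolean environmentHealthy, boolean shippingWorks) {
        try {
            if (!environmentHealthy) {
                // Report as compromised rather than failed
                throw new TestCompromisedException(
                    "Cannot verify 'shipping': the environment is experiencing performance issues");
            }
            return shippingWorks ? Outcome.PASSED : Outcome.FAILED;
        } catch (TestCompromisedException e) {
            return Outcome.COMPROMISED;
        }
    }
}
```

A report built on this distinction can then say “we can’t verify ‘shipping’” instead of lumping everything under a single failure count.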
Leaving Spreadsheets Behind
Acceptance tests are accessible if anyone interested in them can easily access their most up-to-date version.
One more thing to consider is how accessible and up-to-date your test scenarios and reports are. There’s no point in them being reliable and relevant if the business can’t find them, or if what they find does not reflect the current state of the system.
This problematic situation usually occurs when acceptance tests are not executed as part of a delivery pipeline, but rather on an ad-hoc basis (this happens more often than you’d think, especially when performance problems of poorly structured test suites negatively affect the execution time).
In those circumstances, I’ve seen test execution reports generated as Excel spreadsheets and put on some network drive, where they can sit forgotten, or shared via email, so they can see the end of their days accompanied by other spam…
A much better way is to execute the acceptance tests as part of your delivery pipeline and publish the test reports together with other project artifacts, so they can all be easily accessed in one place.
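As a sketch, a CI step might run the acceptance tests on every push and publish the Serenity report alongside the build artifacts. This example uses GitHub Actions and the Serenity Maven plugin’s default report location; the step names and paths are illustrative, so adjust them for your own build:

```yaml
# Illustrative pipeline fragment: run acceptance tests, then
# publish the aggregated Serenity report as a build artifact.
- name: Run acceptance tests
  run: mvn verify serenity:aggregate

- name: Publish Serenity report
  uses: actions/upload-artifact@v4
  with:
    name: serenity-report
    path: target/site/serenity
```

Publishing the report on every pipeline run guarantees that what the business sees always reflects the latest state of the system, with no spreadsheets involved.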
Published at DZone with permission of Jan Molak, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.