User Journey: A Way to Prioritize Your Test Strategy
A typical user takes a rather circuitous journey before they finally purchase your product. Understanding this journey can help your Agile/Scrum testing teams.
In the current age of delivering highly complex digital applications, in a competitive market of high-expectation end users, maintaining quality can seem like a daunting task. Furthermore, Agile cycles are shrinking to meet competitive deadlines.
A typical product owner needs to balance three Sprint investments (frankly, on a daily basis): innovation, tech debt and bugs, and testing. All three need to be optimized. In the case of testing, that means considering efficient parallel execution, the ongoing availability of devices and browsers, reliable scripting and execution (avoiding false negatives), and an efficient reporting suite, to name a few.
One topic to consider is the prioritization of test executions. The traditional approach is to look at atomic test cases per platform (device or browser) and execute all related tests on that platform. One customer I talked to recently described four main platforms on which 90% of tests are executed, plus seven more that share the rest of the executions.
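In practice, that platform-first approach often reduces to ranking platforms by their share of executions and running the full suite on the top few. The sketch below illustrates the idea; the platform names and usage shares are hypothetical, not the customer's actual data.

```python
# Hypothetical platform-first test selection. Platform names and usage
# shares (in percent) are invented for illustration only.
platform_share = {
    "iPhone 14 / iOS 17": 40,
    "Galaxy S23 / Android 14": 25,
    "Chrome / Windows 11": 15,
    "Safari / macOS": 10,
    "Edge / Windows 11": 4,
    "Firefox / Linux": 3,
    "iPad / iPadOS": 3,
}

def top_platforms(shares, coverage_target=90):
    """Pick the smallest set of platforms covering the target share of executions."""
    selected, covered = [], 0
    for platform, share in sorted(shares.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= coverage_target:
            break
        selected.append(platform)
        covered += share
    return selected, covered

platforms, covered = top_platforms(platform_share)
print(f"Run the full suite on {len(platforms)} platforms (~{covered}% of executions):")
for platform in platforms:
    print(f"  - {platform}")
```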
A slightly different approach to prioritization and reporting is to take the perspective of user journeys rather than singular user flows. For example, as an Amazon consumer (assume for a minute I'm researching a fairly expensive item), I would:
- Probably hear about this product from a friend and check it out on the mobile app, maybe adding it to my cart.
- Come home, take another look at it, compare it to other products, and read the reviews. Maybe then buy it. That's likely to happen on a desktop browser.
- Track the shipment via email.
- Let's say I got the product, I'm not happy with it, and I'd like to return it. I'd probably initiate the return and print the shipping label from my desktop browser.
- Again, track the return's progress via the app and email.
A typical customer journeys across multiple digital channels.
To summarize, you could think of two or three journeys here: search for the product, buy the product, and return the product. These journeys could happen on a mobile app, a desktop browser, and perhaps a tablet. But not every journey happens on every device, or on just one: unless it's a cheap product, I probably won't buy it in the app, and if I'm out and about I'm unlikely to fire up a desktop browser to look up a product based on a friend's recommendation.
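Here is a minimal sketch of this journey-first view, based on the example above: each journey is a set of flows, mapped only to the channels where it realistically happens. The journey names, flows, and channel assignments below are illustrative assumptions, not measured data.

```python
# Illustrative only: journeys mapped to the channels where they realistically occur.
journeys = {
    "search for product": {
        "flows": ["product search", "view product page", "read reviews", "add to cart"],
        "channels": ["mobile app"],
    },
    "buy product": {
        "flows": ["compare products", "read reviews", "checkout", "payment"],
        "channels": ["desktop browser", "tablet"],
    },
    "return product": {
        "flows": ["initiate return", "print shipping label", "track return"],
        "channels": ["desktop browser", "mobile app", "email"],
    },
}

# Build a journey x channel test matrix instead of running every flow everywhere.
test_matrix = [
    (journey, flow, channel)
    for journey, spec in journeys.items()
    for channel in spec["channels"]
    for flow in spec["flows"]
]

for journey, flow, channel in test_matrix:
    print(f"{journey:20} | {flow:22} | {channel}")
```

The point is not the data structure itself, but that the matrix stays small: each flow only runs on the channels where real users actually exercise it.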
Take the example below from the insurance world. Here again, customers go through phases in different channels and on different screens.
Insurance customer journey across digital channels. Source: Remarkgroup.com
When you consider user journeys, they also represent a measure of marketing and business success: how many users could buy the product? How many users were able to contact customer support and get the help they needed? The business doesn't care which platform you tested on, as long as it matches what most users actually use.
To summarize, we recommend using analytics to define these user journeys on the relevant devices and browsers, and prioritizing your testing and reporting to align with them. Trying to test every user flow on every device is unrealistic and will bury the insight you are looking for.
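One way to act on that recommendation is sketched below, with invented analytics numbers: weight each journey/channel combination by its observed share of traffic, and schedule the combinations that together cover most real usage first.

```python
# Sketch only: traffic shares per (journey, channel), in percent, would come
# from your analytics tool; the numbers here are invented for illustration.
journey_traffic = {
    ("search for product", "mobile app"): 35,
    ("buy product", "desktop browser"): 30,
    ("buy product", "tablet"): 5,
    ("return product", "desktop browser"): 15,
    ("return product", "mobile app"): 10,
    ("return product", "email"): 5,
}

def prioritize(traffic, coverage_target=90):
    """Rank (journey, channel) pairs by traffic share and keep enough to hit the target."""
    ranked = sorted(traffic.items(), key=lambda kv: kv[1], reverse=True)
    plan, covered = [], 0
    for (journey, channel), share in ranked:
        plan.append((journey, channel, share))
        covered += share
        if covered >= coverage_target:
            break
    return plan, covered

plan, covered = prioritize(journey_traffic)
for journey, channel, share in plan:
    print(f"High priority: {journey} on {channel} ({share}% of traffic)")
print(f"Planned coverage: {covered}% of observed user traffic")
```

Everything below the cut-off still gets tested, just at a lower priority, and the report reads in the business's terms: journeys covered, not test cases per device.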
Make sure to check out this post on different test approaches (BDD, TDD/ATDD) and this post on RTDD; both are very relevant to this discussion.
Published at DZone with permission of Amir Rozenberg, DZone MVB. See the original article here.