Measuring Customer Satisfaction After Release
Great! You've released a new app. But how do you know if people are using it, how they're using it, or if they like it? Read on to find out.
A substantial amount of time, money, and effort goes into software product development. Assuming the application isn't on a continuous delivery cycle (meaning the build is ready for release at any stage of development), teams need to do everything in their power to get the initial release cycle rolling. This means generating user stories, constructing the bare bones of the program, testing each new function and feature as it's added, organizing user acceptance testing, and eventually unveiling the initial release.
The day that the very first iteration goes to market is certainly cause for celebration, but not relaxation. In fact, the work is just getting started. In the old days of waterfall development, bulky releases clambered their way to a grandiose debut, at which point a sense of accomplishment would start to settle in. But in order to keep pace with the swiftly evolving expectations of customers – not to mention the tireless efforts of hackers to expose faults in the code – a greater level of agility is demanded of organizations. From the moment the first build is released, teams have to get back to work.
The first order of business? Measuring customer satisfaction.
Where to Begin
There's a lot more that goes into customer satisfaction than running the sales numbers. In fact, it's even more complicated than determining the overall "quality" of the solution. According to TechTarget contributor Robin F. Goldsmith, customer satisfaction and quality aren't always synonymous.
"Indeed customer satisfaction should be a result of delivering a quality product, but satisfaction can be influenced by many things and is not the same as quality," Goldsmith wrote. "Moreover, customers routinely are satisfied by poor quality and not satisfied by high quality."
In other words, the first lesson of customer satisfaction can be boiled down to a trite, but no less truthful, platitude: The customer is always right, even when the customer is wrong. As such, it's important that, before actual development even begins, all parties that will be involved in the product's creation are also included in its inception.
Specifically, developers, testers, and designers need to be on the same page regarding which user stories matter most to the customer. These can range in complexity from the fairly basic to the extraordinarily elaborate. Thus, ensuring customer satisfaction after release starts with strong user story mapping, long before the first line of code is written.
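To make the idea concrete, a story map can be boiled down to ranking stories by the value the customer assigned against the complexity the team estimated. The following is a minimal sketch, assuming a hypothetical `UserStory` structure and a simple value-first, complexity-second ordering (the field names and scoring scale are illustrative, not from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    customer_value: int  # 1 (nice to have) .. 5 (critical), agreed on with the customer
    complexity: int      # 1 (fairly basic) .. 5 (extraordinarily elaborate)

def prioritize(stories):
    """Order stories so the highest-value, lowest-complexity work comes first."""
    return sorted(stories, key=lambda s: (-s.customer_value, s.complexity))

backlog = [
    UserStory("Export report as PDF", customer_value=2, complexity=4),
    UserStory("Log in with email", customer_value=5, complexity=1),
    UserStory("Search order history", customer_value=4, complexity=3),
]

for story in prioritize(backlog):
    print(story.title)
```

Any real backlog tool will weigh more factors (risk, dependencies, deadlines), but the point stands: the ordering only works if developers, testers, and designers agree on the `customer_value` numbers up front.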
Post-Release: User Acceptance Testing
Ideally, user acceptance testing should occur prior to the initial product release. Sometimes referred to as beta testing, UAT is by definition how QA management teams assess customer satisfaction with a product, according to TechTarget. However, it's arguably even more important that UAT continues after the initial release. While developers can learn a lot about a deliverable's reception through online ratings and reviews, as well as customer satisfaction survey tools, the most insightful feedback will come from deliberate test case execution.
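Survey tools typically reduce feedback to a single number. A common convention (a CSAT score) is the percentage of respondents who rate their satisfaction 4 or 5 on a 1–5 scale; the sketch below assumes that convention, and the sample ratings are made up:

```python
def csat_score(responses):
    """CSAT: percentage of respondents rating 4 or 5 on a 1-5 satisfaction scale."""
    if not responses:
        raise ValueError("no survey responses")
    satisfied = sum(1 for r in responses if r >= 4)
    return 100.0 * satisfied / len(responses)

# Hypothetical post-release survey results.
ratings = [5, 4, 3, 5, 2, 4, 4, 1, 5, 4]
print(f"CSAT: {csat_score(ratings):.0f}%")  # 7 of 10 rated 4 or 5 -> 70%
```

A number like this is useful for spotting trends between releases, but as noted above it says nothing about *why* customers are satisfied or not, which is what test case execution uncovers.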
This is because in most agile development models, the time between builds is often as short as one or two weeks. This means that in addition to the unit and regression tests that need to be run as an application is improved upon, UAT test cases should continue to be administered – not just to business stakeholders, but also to customers.
Optimizing UAT in an Agile Environment
So the question then becomes, what's the best way to manage UAT on a rolling basis? The answer, according to TechTarget contributors Ravindra Kambhampati and Srinivas Yeluripaty, really comes down to process:
"An application being developed in an agile mode leads to reduced and frequent cycles of testing, which in turn mandates UAT testers to develop skills of optimization testing techniques, automation and work in cohesion with the development and QA teams," the authors wrote.
Successfully executing UAT in an agile setting is an accomplishment unto itself. Organizations must first lay the cultural groundwork for agile software development by reshaping their production hierarchies. Unlike a relay race, in which the baton is handed off from one runner to the next, developers, testers, and designers run alongside one another during each sprint. This requires teamwork, a shared sense of accountability, organization, and a willingness to learn.
Once these practices are in place, UAT testers must put them into action with the help of test management tooling that features real-time tracking of test metrics, as well as the ability to create and execute test cases. This will help QA verify, on an ongoing basis, that the application meets the specific parameters outlined in the user stories.
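The core metric such tooling tracks is simple: of the test cases executed in the current cycle, how many passed? A minimal sketch, assuming a hypothetical mapping of test case names to statuses (the case names and status labels are illustrative):

```python
from collections import Counter

def pass_rate(results):
    """Share of executed UAT test cases that passed, ignoring not-yet-run cases."""
    counts = Counter(results.values())
    executed = counts["pass"] + counts["fail"]
    return counts["pass"] / executed if executed else 0.0

# Hypothetical status of each test case after the latest sprint's UAT cycle.
uat_results = {
    "TC-101 login with valid credentials": "pass",
    "TC-102 password reset email": "fail",
    "TC-103 checkout with saved card": "pass",
    "TC-104 export monthly report": "not_run",
}
print(f"UAT pass rate: {pass_rate(uat_results):.0%}")  # 2 of 3 executed cases passed
```

Watching this number per sprint, alongside which user stories the failing cases trace back to, is what turns UAT from a one-off gate into the ongoing verification described above.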
In many ways, customer satisfaction is more about what happens after the initial release than the events leading up to it. After all, the main benefit of agile is that it gives organizations the ability to respond quickly to feedback. As long as you have the mindset, processes, and tools in place to solicit this feedback, track it, and adjust accordingly, you're in a good position to measure customer satisfaction between sprints.
Published at DZone with permission of Francis Adanza. See the original article here.