
Measuring Customer Satisfaction After Release

Great! You've released a new app. But how do you know if people are using it, how they're using it, or if they like it? Read on to find out.

By Francis Adanza · Aug. 30, 16 · Opinion


A substantial amount of time, money, and effort goes into software product development. Assuming the application isn't on a continuous delivery cycle (meaning the build is ready for release at any stage of development), teams need to do everything in their power to get the initial release cycle rolling. This means generating user stories, constructing the bare bones of the program, testing each new function and feature as it's added, organizing user acceptance testing, and eventually unveiling the initial release.

The day that the very first iteration goes to market is certainly cause for celebration, but not relaxation. In fact, the work is just getting started. In the old days of waterfall development, bulky releases clambered their way to a grandiose debut, at which point a sense of accomplishment would start to settle in. But in order to keep pace with the swiftly evolving expectations of customers – not to mention the tireless efforts of hackers to expose faults in the code – a greater level of agility is demanded of organizations. From the moment that the first build is released, teams have to get back to work.

The first order of business? Measuring customer satisfaction. 

Where to Begin

There's a lot more that goes into customer satisfaction than running the sales numbers. In fact, it's even more complicated than determining the overall "quality" of the solution. According to TechTarget contributor Robin F. Goldsmith, customer satisfaction and quality aren't always synonymous.  

"Indeed customer satisfaction should be a result of delivering a quality product, but satisfaction can be influenced by many things and is not the same as quality," Goldsmith wrote. "Moreover, customers routinely are satisfied by poor quality and not satisfied by high quality."

In other words, the first lesson of customer satisfaction can be boiled down to a trite, but no less truthful, platitude: the customer is always right, even when the customer is wrong. As such, it's important that, as a preemptive measure, all parties who will be involved in the product's creation are included in its inception, before actual development even begins.

Specifically, developers, testers, and designers need to be on the same page regarding which user stories matter most to the customer. These can range in complexity from the fairly basic to the extraordinarily elaborate. Thus, ensuring customer satisfaction after release starts with strong user story mapping, long before the first line of code is ever written.
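
To make that concrete, here is a minimal sketch of what the output of user story mapping might look like in code form. The activities, stories, priorities, and acceptance criteria below are purely illustrative; they are not drawn from any particular product or tool.

from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    priority: int  # 1 = matters most to the customer
    acceptance_criteria: list[str] = field(default_factory=list)

@dataclass
class Activity:
    name: str
    stories: list[Story] = field(default_factory=list)

# Illustrative backlog: each activity holds stories ranked by customer priority.
backlog = [
    Activity("Sign up", [
        Story("Register with email", 1, ["Confirmation email arrives within a minute"]),
        Story("Register with single sign-on", 3, ["Existing account links without a new password"]),
    ]),
    Activity("Checkout", [
        Story("Pay by card", 1, ["Declined cards show a clear error message"]),
    ]),
]

# The first release slice takes every activity's highest-priority stories.
release_one = [story for activity in backlog for story in activity.stories if story.priority == 1]
print([story.title for story in release_one])  # ['Register with email', 'Pay by card']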

Post-Release: User Acceptance Testing

Ideally, user acceptance testing should occur prior to the initial product release. Sometimes referred to as beta testing, UAT is by definition how QA management teams assess customer satisfaction with a product, according to TechTarget. However, it's arguably even more important that UAT continues after the initial release. While developers can learn a lot about a deliverable's reception through online ratings and reviews, as well as customer satisfaction survey tools, the most insightful feedback will come from deliberate test case execution.
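
As one illustration of the survey side of that feedback, the sketch below computes a CSAT score – the conventional percentage of respondents answering 4 or 5 on a 1-to-5 scale – from raw survey responses. The response data is made up for the example, and no particular survey tool is assumed.

def csat(responses: list[int]) -> float:
    """Percentage of respondents scoring 4 or 5 on a 1-to-5 satisfaction scale."""
    if not responses:
        return 0.0
    satisfied = sum(1 for score in responses if score >= 4)
    return 100.0 * satisfied / len(responses)

# Made-up post-release survey responses.
post_release_survey = [5, 4, 3, 5, 2, 4, 4, 5, 1, 4]
print(f"CSAT: {csat(post_release_survey):.0f}%")  # CSAT: 70%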

This is because in most agile development models, the time between builds is often just one or two weeks. So in addition to the unit and regression tests that need to be run as an application is improved upon, UAT test cases should continue to be executed – not just with business stakeholders, but also with customers.
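
A hedged sketch of what such a rolling UAT test case might look like is shown below, written against pytest. The checkout flow, the CheckoutClient stub, and the "uat" marker are all hypothetical stand-ins for whatever the real application and test-management setup provide; in practice the test would drive the deployed build rather than an in-memory stub.

import pytest
from dataclasses import dataclass

# Hypothetical stand-ins for the real application under test.
@dataclass
class Receipt:
    status: str
    email_sent: bool

class CheckoutClient:
    def add_to_cart(self, sku: str, quantity: int) -> dict:
        return {"sku": sku, "quantity": quantity}

    def pay(self, order: dict, card: str) -> Receipt:
        return Receipt(status="confirmed", email_sent=True)

@pytest.fixture
def checkout_client() -> CheckoutClient:
    return CheckoutClient()

@pytest.mark.uat  # custom marker so the suite can be re-run each sprint with: pytest -m uat
def test_customer_can_complete_checkout(checkout_client):
    """Mirrors the user story: as a shopper, I can pay by card and receive a receipt."""
    order = checkout_client.add_to_cart("sku-123", quantity=1)
    receipt = checkout_client.pay(order, card="4242424242424242")
    assert receipt.status == "confirmed"
    assert receipt.email_sent is True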

Optimizing UAT in an Agile Environment

So the question then becomes, what's the best way to manage UAT on a rolling basis? The answer, according to TechTarget contributors Ravindra Kambhampati and Srinivas Yeluripaty, really comes down to process:

"An application being developed in an agile mode leads to reduced and frequent cycles of testing, which in turn mandates UAT testers to develop skills of optimization testing techniques, automation and work in cohesion with the development and QA teams," the authors wrote. 

Meeting this requirement for successful execution of UAT in an agile setting is an accomplishment unto itself. Organizations must first lay the cultural groundwork for agile software development by reshaping their production hierarchies. Unlike a relay race, in which the baton is handed off from one runner to the next, developers, testers, and designers run alongside one another during each sprint. This requires teamwork, a shared sense of accountability, organization, and a willingness to learn.

Once these foundations are in place, UAT testers must put them into action with the help of test management tooling that features real-time tracking of test metrics, as well as the ability to create and execute test cases. This helps QA continually verify that the application meets the specific parameters outlined in the user stories.
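
The sketch below illustrates the kind of metric such tooling tracks: per-story pass rates aggregated from raw test-run records. The record format and story IDs are invented for the example; a real test-management tool would expose this through its own API and dashboards.

from collections import defaultdict

# Made-up test-run records; a real test-management tool would supply these.
runs = [
    {"sprint": 12, "story": "US-101", "case": "checkout-card", "passed": True},
    {"sprint": 12, "story": "US-101", "case": "checkout-declined", "passed": False},
    {"sprint": 12, "story": "US-204", "case": "signup-email", "passed": True},
]

# Aggregate pass rate per user story for the current sprint.
by_story = defaultdict(lambda: {"passed": 0, "total": 0})
for run in runs:
    by_story[run["story"]]["total"] += 1
    by_story[run["story"]]["passed"] += int(run["passed"])

for story, counts in sorted(by_story.items()):
    rate = 100.0 * counts["passed"] / counts["total"]
    print(f"{story}: {counts['passed']}/{counts['total']} passed ({rate:.0f}%)")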

In many ways, customer satisfaction is more about what happens after the initial release than the events leading up to it. After all, the main benefit of agile is that it gives organizations the ability to respond quickly to feedback. As long as you have the mindset, the processes, and the tools in place to solicit this feedback, track it, and then adjust accordingly, you're in a good position to measure customer satisfaction between sprints.


Published at DZone with permission of Francis Adanza. See the original article here.

Opinions expressed by DZone contributors are their own.
