Walking Through an AgilityHealth Survey
Zone Leader John Vester's team takes part in an online assessment survey, and he shares his thoughts on the process.
This past week, my agile team participated in an AgilityHealth survey. For those who are not aware, AgilityHealth is an online assessment tool, created by Agile Transformation, geared to gauge the performance and health of agile teams. This was the second time my team participated in the AgilityHealth event, which spans about three hours.
How AgilityHealth Works
The AgilityHealth process begins with the agile team assembling in a dedicated meeting space. An overview of the process is given and the team decides what ground rules they will follow for the event. In our case, we made it a rule to use a token (a rubber band ball) in order to be able to address the group. We also included a rule to keep on our timeline so that we didn't exceed the allotted three hours.
The team then reviewed the five sections for the survey. In our case, we were using the TeamHealth survey, which included categories of Foundation, Clarity, Performance, Leadership and Culture. Each section contained sub-categories and there were multiple survey questions within each category. The questions were scored on a scale of 1 to 10 and included a summary to represent a high score (10) and a low score (1). Using desserts from The Cheesecake Factory as an example, the summaries may resemble what is displayed below:
High Score - Hands-down, this is the best cheesecake that I've ever experienced in my life.
Low Score - The cheesecake lacks a quality taste and is not something I will ever try again.
Each section also includes fields for open-ended comments, and the survey closes with some additional free-form comment sections.
Once the survey information is submitted and processed, the results are mapped onto a radar chart.
If prior AgilityHealth results exist, they can be mapped onto the same radar chart. From there, the team talks through the results, including a review of the comments. Prior takeaways, if they exist, are revisited. Finally, new takeaways are added from the current survey, and these can be tasked out and discussed during future team meetings.
What We Learned
As I've noted in prior posts, my team has been in the Performing agile phase for quite some time now. Because talented engineers are in high demand, we have lost a couple of team members since our last survey was completed. So, I was interested in seeing the results of our AgilityHealth survey, given that we have been operating with less staff than the other agile teams in our Software Engineering group.
When compared to our AgilityHealth results last Fall, our team continued to score well in the Culture, Foundation, and Performance areas. As we've gotten acclimated to our new Product Owner, we saw a noticeable improvement in the area of Clarity. The Leadership segment of our results was mixed: positive because of the excellent management structure supporting the team, but neutral due to the size of the team and individuals maintaining multiple roles. Overall, the results supported the achievements our team has been able to accomplish, which was a comforting validation.
Surveys in General
A few years ago, I worked with a project manager (named Kevin) who had an interesting take on survey design. When building a survey that asked users to provide a ranked answer, Kevin always opted for an even set of numbers with limited choices. When possible, Kevin would select only four or six options.
I agreed with Kevin's thought process. Think about when you've completed a survey for a product or service and the question provides answers on a scale from 1 to 10. Is there really a standard way to determine what is a 2 vs. a 3, or a 7 vs. an 8? However, if you take Kevin's approach and limit the responses to six numbers, it becomes easier for respondents to answer against a consistent standard:
6 = best ever
5 = great
4 = just above average
3 = less than average
2 = bad
1 = worst ever
I know it might be tempting to add a seventh item to allow for "average." But, as Kevin often pointed out, offering a middle-of-the-road answer gives the respondent a quick and easy way out. Without the middle option, you make the respondent pause and consider whether the answer is just above (or just below) average for the given question.
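As a sketch, Kevin's even six-point scale can be expressed as a simple lookup. This is a hypothetical Python illustration, not part of the AgilityHealth tooling; the labels mirror the list above:

```python
# Hypothetical encoding of the even, six-point scale described above.
# Not part of AgilityHealth -- an illustration of the "no middle option" idea.
SCALE = {
    6: "best ever",
    5: "great",
    4: "just above average",
    3: "less than average",
    2: "bad",
    1: "worst ever",
}

def label_for(score: int) -> str:
    """Return the label for a score, rejecting anything off the scale."""
    if score not in SCALE:
        raise ValueError(f"score must be between 1 and 6, got {score}")
    return SCALE[score]

def team_average(scores: list[int]) -> float:
    """Average a team's responses after validating each one.

    With no neutral midpoint, a genuinely split team averages between
    3 and 4 rather than everyone hiding behind a middle option.
    """
    for score in scores:
        label_for(score)  # raises on out-of-range responses
    return sum(scores) / len(scores)
```

Note that the absence of a key for 3.5 is the whole point: every individual response must commit to "just above average" or "less than average," and only the aggregate can land in between.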
I think the concept of AgilityHealth is great. Regardless of what phase an agile team is currently in, there is always room to improve or something to learn by participating in this type of event. My only suggestion to the team at Agile Transformation is to consider reducing their survey options from ten possible answers down to six. In doing so, I believe it becomes easier for team members to provide a score based upon a standardized system.
Have a really great day!
Opinions expressed by DZone contributors are their own.