Retrospectives, Part 1: In Your Own Sweet Way
So, I'm not going to list someone else's how-tos, telling you to blindly follow techniques acclaimed as best practices. Instead, I'd like to focus on some heuristic essentials a team needs to be self-sufficient with their retrospectives. Once the essentials are in place, the how-tos are not a problem; they come effortlessly, as your team intuitively knows what to do. That being said, I will still mention some of the techniques, questioning their practical value in a team-specific context.
Heuristic refers to “...experience-based techniques for problem-solving, learning and discovery. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgement, or common sense” (a quote from Wikipedia). Let me reiterate that “experience-based” does not refer to the experience of external facilitators, but to the hands-on experience of a team as they try, discover, fail, learn, and move forward. Maybe I'm a bit biased, as we've never hired an external consultant or facilitator at Targetprocess. It's always been our own trial and error, common sense, and intuitive judgment.
I'm a fierce proponent of the heuristic approach to anything, agile retrospectives included, because this approach is about what actually delivers. No facilitator will do the job for a team if their only goal is to facilitate good retrospectives. It's like performing surgery on an organ instead of finding the real, deeper cause of a disease: the problem manifests itself as a heart malfunction, but what it comes down to is excess cortisol caused by stress. Stress is the reason, not the heart.
Anyway, common sense and critical thinking are not the only universal best practices. Trial and error is a great best practice as well, though it doesn't apply to heart surgery (*black humor*).
Retrospectives are one of the best practices for any agile software development methodology with a team-centric approach. You look back, evaluate what's been done, see what could have been done better, and make decisions for the future. Basically, a retrospective meeting can be visualized as the climax of a feedback cycle in a series of loops.
When there's enough (or more than enough) feedback, it's high time for a retrospective. In Scrum, you can run an iteration-based or a release-based retrospective. If you do Kanban, a retrospective can be run when a new build is released, or on an as-needed basis. From what I've seen and read, retrospectives quite often fade out in Kanban; that's also been the case with our team.
At first, when we were doing XP, we ran retrospectives by the book, for each iteration and for each release. Then we switched to Kanban and experimented with retrospectives. Now, as we're about to deliver the completely new TP3, the dynamics have shifted. We've seen that the things that used to work for smaller releases don't work now that we're on our way to the new product. So we've called ourselves to a retrospective, although we hadn't done one in about 7 months. I'm using this as an example to make the point that there's no boilerplate rule of thumb for retrospectives. No one, least of all an outsider, can sense the dynamics of your team as well as you do.
The following visual is an anti-pattern for agile retrospectives. If it happens this way in your team, it's a sign of a serious disease. I'll cover possible reasons and cures in the next part of the series.
At a retrospective, you need to create a context for discussion quickly. That's where visuals come in handy. We use screens, wall boards, and colored stickers; certain colors may stand for critical, mild, or urgent issues to address, etc. But there's no need to make too big a deal out of it. Obsessing over sticker colors will not compensate for a lack of team spirit, so first things first.
In fact, the context may already be in place, e.g. when a team comes to a retrospective already aware of the problems they should discuss. All the same, visuals will display the picture, and people's brains will be busy drawing conclusions about the events, not holding in memory all the events their decisions are based on.
Visuals at a retrospective (or post-retrospective) support the following 3 activities:
1. Discover Problems
Historical data needs to be reviewed at a retrospective. The data can tell you that everything is going great, or that you've got some problems. The trick is to spend as little time as possible on retrieving the data and to focus more fully on the actual problem solving.
Take a look at the cumulative flow chart below; I've pulled it from our history. It shows that there was a bottleneck at the beginning of December.
As we tracked down the bottleneck, we were able to identify its cause: a rather complex user story. One developer spent a month implementing it. During this month we did several releases, and everything went smoothly. Then the user story was QAed, and all the acceptance tests passed. So we decided to merge the story into the main code line.
Unfortunately, after the merge quite a few bugs were found in the build. It took more than a week to fix them, and during this time we were unable to release anything, since the merge was already done and rolling back would have been quite hard as well.
The lesson learned was to put more effort into testing complex user stories. This particular story affected many places in the application, and the usual smoke testing was not enough. So we decided to introduce a new class of service (something like a “technically complex story”) that would require more in-depth testing and verification before the merge.
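Under the hood, a cumulative flow chart is just a per-day count of items in each state. Here's a minimal Python sketch of that counting, assuming a hypothetical log of state-change events (a tool like Targetprocess records these transitions for you; the item IDs, states, and dates below are made up for illustration):

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical state-change log: (item_id, state, date entered).
events = [
    (1, "open",        date(2011, 12, 1)),
    (1, "in progress", date(2011, 12, 3)),
    (2, "open",        date(2011, 12, 1)),
    (2, "in progress", date(2011, 12, 2)),
    (1, "done",        date(2011, 12, 8)),
]

def state_on(item_events, day):
    """Latest state an item entered on or before the given day."""
    past = [(d, s) for _, s, d in item_events if d <= day]
    return max(past)[1] if past else None

def cumulative_flow(events, start, end):
    """Daily count of items per state -- the bands of the chart."""
    items = {i for i, _, _ in events}
    day, rows = start, []
    while day <= end:
        counts = Counter()
        for item in items:
            s = state_on([e for e in events if e[0] == item], day)
            if s:
                counts[s] += 1
        rows.append((day, dict(counts)))
        day += timedelta(days=1)
    return rows

flow = cumulative_flow(events, date(2011, 12, 1), date(2011, 12, 3))
# A band ("in progress") that keeps widening while "done" stays flat
# is the visual signature of a bottleneck.
```
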
The cumulative flow diagram works well for identifying bottlenecks. We've got another chart, the timeline, which zooms in on the details of a user story's life cycle.
This chart answers a number of questions, such as:
- For how many days has this user story been in a particular state?
- Were there any delays?
- Was there any rework?
- Who was responsible for the user story?
- When were bugs and tasks added and closed?
- How much time was spent each day?
- Were there any impediments?
So, this user story was in the Open state for 25 days (that is, in the backlog). Then it jumped right into the In Progress state. Two developers (Alla and Vladimir) started working on it (so it was pair programming). They'd been working for 3 days when the story was moved into the Re-open state. This is quite surprising; most likely they had to switch to something else (not good). Then they got back to it and spent 15 days working on the story. That's way too long; most likely there were switches there as well, so this should be investigated. Starting from Oct 18 the progress was very good: development went smoothly, the tester found several bugs, and they were fixed in 2-3 days. Finally, the user story was released to production with no more delays.
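The per-state durations behind such a timeline are easy to derive from a story's transition history. A small Python sketch, using a hypothetical transition log loosely modeled on the story above (the exact dates are made up):

```python
from datetime import date

# Hypothetical transition history for one user story.
transitions = [
    ("open",        date(2011, 9, 20)),
    ("in progress", date(2011, 10, 15)),
    ("re-open",     date(2011, 10, 18)),  # the suspicious switch
    ("in progress", date(2011, 10, 20)),
    ("done",        date(2011, 11, 4)),
]

def days_per_state(transitions):
    """Total days the story spent in each state."""
    totals = {}
    for (state, start), (_, end) in zip(transitions, transitions[1:]):
        totals[state] = totals.get(state, 0) + (end - start).days
    return totals

print(days_per_state(transitions))
# {'open': 25, 'in progress': 18, 're-open': 2}
```

A 25-day wait in the backlog, or 18 days of development with a re-open in the middle, are exactly the kinds of numbers worth bringing to a retrospective.
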
I've given two simple examples of how we use Targetprocess to support retrospectives. Another great visual we're using is the Targetprocess card rotter.
Historical data can be visualized in many, many ways (check this article for some inspiration).
You can also start from human moods and emotions to discover problems at a retrospective. Here's a very simple diagnostic chart, the happiness radar (by Paulo Caroli):
People made marks for the areas they're happy, neutral, or pessimistic about, and we can see that Process and Technology clearly lack happiness.
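Tallying a happiness radar is trivial to do on a wall board, but a distributed team might collect the marks digitally. A toy Python sketch (the votes, area names, and scoring weights are all made up) that ranks the areas from least to most happy:

```python
# Hypothetical votes: each team member marks an area as
# "happy", "neutral" or "sad".
votes = {
    "people":     ["happy", "happy", "neutral", "happy"],
    "process":    ["sad", "neutral", "sad", "sad"],
    "technology": ["neutral", "sad", "sad", "neutral"],
}

def happiness_score(marks):
    """Crude score: +1 per happy mark, 0 per neutral, -1 per sad."""
    weight = {"happy": 1, "neutral": 0, "sad": -1}
    return sum(weight[m] for m in marks)

# Areas sorted from least to most happy -- discuss the bottom first.
ranking = sorted(votes, key=lambda a: happiness_score(votes[a]))
print(ranking)  # ['process', 'technology', 'people']
```
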
2. Solve Problems
Once the problems are identified, you need to think about solutions. Mind maps work great for visualizing problem solving. In fact, we draw a mind map subconsciously whenever we discuss something and sketch our thoughts on a piece of paper. This simple yet powerful technique can be used at retrospectives as well, and that's what we've been doing. Mind maps can be drawn on a board, on a screen, or just on a piece of paper.
Check out this mind map. It was used to think through the problems and activities that influence the speed of development.
Looking at this sketch, you can conclude that only two things improve velocity directly: fast feedback and experienced developers (while there are many sources of waste, such as unplanned work, interruptions, multi-tasking, rework, high coupling, and technical debt).
Visualization puts the issue of speed on a plate and breaks it down into smaller pieces:
- How do we deal with customer requests and reduce unplanned work?
- What should we do with the urgent bugs?
- How can we do more training?
- How do we break work into smaller batches?
- What should we do about noise and interruptions?
If you ask questions like these at a retrospective meeting, you can expect many good ideas. If you just ask, “How can we work faster?”, the answer might be silence and confusion.
In the previous example, the mind map is built as a network and inspires goal-oriented questions. The 5 Whys root cause analysis looks into the reasons, so the map is sequential, going from one “why” to the next:
I picked up this image from Sandy Mamoli, as the problem they'd been working on at their retrospective is quite common, and the answers to the 5 whys are typical.
Now, in line with the happiness radar above, here's how they visualized possible solutions, using the “What will increase happiness?” chart with its “keep it”, “more of”, and “less of” sections:
3. Follow Through on the Action Items
You've identified the problems, pondered over them, and come up with some action items after a retrospective. I've searched the web through and through, and to my surprise, it seems that very few teams use a Kanban board as a visual tool for tracking their retrospective action items. It appears there's a reason for that. With Kanban, you need an opening state and a final state; it suits straightforward one-time actions. Retrospectives, by contrast, often reveal the need for recurring actions for which no single person is accountable, but the whole team. These are collective ownership items.
For instance, you see that inconsistencies in user stories are discovered only at the final testing phase. You bring this issue to a retrospective, and you decide that a user story launch meeting should be held before the start, with the inconsistencies taken care of at this very meeting (e.g. the specs discussed and approved). So the action items here would be to include writing functional specs as a task for each user story, and to check whether all the stories currently in progress have specs.
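A follow-through check like that last one can even be scripted against your project data. A minimal Python sketch, with purely hypothetical story IDs, states, and task names, that lists the in-progress stories still missing a functional spec task:

```python
# Hypothetical follow-through check: which in-progress stories
# still lack a functional spec task?
stories = [
    {"id": "US-101", "state": "in progress", "tasks": ["spec", "code"]},
    {"id": "US-102", "state": "in progress", "tasks": ["code"]},
    {"id": "US-103", "state": "open",        "tasks": []},
]

missing_spec = [s["id"] for s in stories
                if s["state"] == "in progress" and "spec" not in s["tasks"]]
print(missing_spec)  # ['US-102']
```
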
Back to the problem of over-promising and under-delivering featured above (a very common problem, by the way). Now you've decided to set limits on your work in progress. Someone needs to write mash-ups so the limits can be tracked easily on the Kanban board. This is a one-time actionable item.
You've also decided that urgent bugs deserve more attention. Someone should think over the process for handling such bugs.
Here's a very basic Kanban board with these tasks:
On the Kanban board below, you clearly see your WIP limits, and a red light turns on any time your WIP rules are violated. There's no need to keep the limits and the details in memory; the Kanban board sends a visual signal about the problem, a helpful follow-through for your retrospective.
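The red-light rule itself amounts to very little logic. A minimal Python sketch of a WIP-limit check (the column names, limits, and cards are made up; our actual signal is a Targetprocess mash-up, not this script):

```python
# Agreed per-column WIP limits (hypothetical).
wip_limits = {"in progress": 3, "testing": 2}

# Current board state: column -> cards (hypothetical).
board = {
    "in progress": ["story-12", "story-15", "bug-7", "story-18"],
    "testing":     ["story-9"],
}

def violations(board, limits):
    """Columns whose card count exceeds the agreed WIP limit."""
    return {col: len(cards) for col, cards in board.items()
            if col in limits and len(cards) > limits[col]}

print(violations(board, wip_limits))  # {'in progress': 4}
```
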
Next, about the bugs. They are now tagged “urgent” and “sup”, and they should be tracked closely and comfortably, i.e. visually. In the snapshot below, the red cards stand for the “urgent” bugs and the salmon cards for the “sup” bugs (“sup” means “support”). This custom visualization was done with one of our mash-ups.
There's also the team load mash-up, which shows how much work, and which work, is in progress for every team member: another good follow-through technique for retrospective action items. Blue stands for user stories, red stands for bugs, and team members are in the pics (including Voltaire :)
Continue to Retrospectives, Part 2: In a Sentimental Mood
Published at DZone with permission of Olga Kouzina, DZone MVB. See the original article here.