I love it when other people write a blog post at the same time I am struggling with the problem they describe. Today’s timely post comes from Michael Bolton on his DevelopSense blog. In “The Undefinition of Done”, Michael talks about the fact that people typically have different definitions:
One issue, as I’ve pointed out before, is that people have preset notions about what “done” means to them. When I’m writing an article for publication, I put a bunch of effort into several drafts. When I’m finally happy with it, I declare that I’m done and submit it to the editors… Of course, I’m not really done. The article goes through a round of technical editing, and a round of copy editing. Inevitably, someone finds problems that have escaped my notice. They send the article back (typically when I’m busy with something else, but that’s hardly their fault). I fix the problems. And I submit the article for publication. Then I’m done. Done-done. The article is ready for publication.
In software development, “done” is definitely a confusing term. People looking at “done” from a testing perspective typically state that a feature is done when it has passed its tests. In the waterfall development process, a feature is only done when the entire project gets tested. That is not a useful definition because a waterfall project typically takes months to complete, without any feedback to the developers or the business owners.
To avoid the feedback problem in the waterfall process, people tend to move to agile processes. Many styles of agile have the concept of a sprint, a fixed duration of time in which development tasks are supposed to be completed. Let’s assume sprints are defined as two weeks long. The goal of the development team is to create a list of tasks that they will complete within those two weeks. For that team, the definition of done is important. If the developers need to be “complete” within two weeks, a clear definition of done is needed. Developers tend to think of “done” as the point when the task has been fully developed and unit tested.
The problem with this definition of done is that the task is not really complete. If we follow the example in Michael’s article, the task will still have issues, defects that have to be corrected. When you think about it, done should mean that a task is fully developed, unit tested, and passing QA.
One problem arises from this defect-fixing process. Agile depends on completed tasks to determine the team’s velocity, the amount of work completed in a sprint. With defect fixing delaying the “done” state, a task could be started in one sprint and not completed until two sprints later. This can cause the team’s velocity to fluctuate heavily between sprints. Velocity is supposed to be used to determine how much work a team can complete in a sprint. I have seen cases where the estimated work is based on the average velocity of the past two sprints. Other people have a more complicated process for estimating the work in a sprint. In either case, the delay in completion can cause issues with planning work for the sprint. To keep things simpler, I have also seen the average velocity calculated over the entire project in order to smooth out the fluctuations.
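The two estimation approaches mentioned above, averaging the last couple of sprints versus averaging the whole project, can be sketched in a few lines of Python. The sprint numbers here are hypothetical, purely for illustration:

```python
# Hypothetical story points completed per sprint; note the fluctuation
# caused by tasks slipping across sprint boundaries.
completed = [21, 8, 30, 12, 25, 18]

def recent_velocity(points, window=2):
    """Average velocity over the last `window` sprints."""
    return sum(points[-window:]) / window

def project_velocity(points):
    """Average velocity over the entire project, smoothing fluctuations."""
    return sum(points) / len(points)

print(recent_velocity(completed))   # 21.5
print(project_velocity(completed))  # 19.0
```

With volatile data like this, the two-sprint average swings with each new sprint, while the whole-project average stays steadier, which is exactly why some teams prefer it for planning.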
In reality, you just need your calculations to be consistent and everyone needs to agree to the same definition of done.
What do you do to deal with the varying definitions of done? How does this change your calculation of team velocity?