90 Sprints for Capital Markets – Part 4 of 4
In this long-awaited series finale, we wrap up by dispelling some of the myths around Agile estimation, prediction, and time management.
In the final part of my series (see Part 1, Part 2 and Part 3), I will take a closer look at one of the most elusive issues: estimation and tracking of time. Concrete numbers are often expected, but do we really need them at all costs and in all areas?
Myth #14: “Business Stakeholders Require Estimates in Hours”
We’ve all had these conversations: “Please give me an estimate for X in hours.” It is the default way of thinking. In my opinion, however, attempting to give estimates in hours can be harmful to the development team. When talking about time, people tend to overestimate “just in case.” And that estimate, being perceived as a commitment (if it really were a reliable commitment, it would be called a “commitment,” not an “estimate”), puts unnecessary pressure on the team. A time estimate also depends on who is implementing a feature, which makes time estimates of little use. Developers cannot provide estimates in hours for the same reason that business users cannot describe the value of a feature in dollars. However, just as the Product Owner can say that A is more important or critical than B, the Team can say that B is “larger” than A.
When do we need an estimate and do we really need it in hours?
- When we need to compare the size and complexity of a feature (its cost). For this, an absolute value is not required. Any relative size description will let a Product Owner decide whether a feature is worth the effort compared to other features.
- When we need to predict the time when a feature is “done” (for us, this means tested and deployed to the UAT environment). This is easy: “At the end of the Sprint.” For Epics, the team should be able to predict if it can be done in one Sprint, or if it will require two or three.
- When we need to plan a Sprint. Using numeric values for estimates helps a lot here. However, the Team should set some limits on large Stories. For example, we know that taking more than two 13-story-point items into a Sprint will cause trouble.
For Story Point estimates, we use the Fibonacci scale. Each value is a range, not the exact value, thus it really reflects the meaning of “estimate,” involving a factor of uncertainty. A “5” means that it is somewhere between “3” and “8.” The larger the Story, the less certain we are about the size. Even if blind arithmetic operations are pointless on estimates (dividing an “8” won’t always give you a “3” and a “5”), you can still have a burndown chart and calculate velocity, focus factor and other KPIs on the historical Sprint reports. It’s tempting to calculate [man-hours in Sprint] / [average Velocity] * [avg developer’s rate per hour] to convert story-points to dollars. However, this will produce misleading results.
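To make the arithmetic concrete, here is a minimal sketch of computing average velocity from historical Sprint reports, followed by the tempting story-point-to-dollar conversion the text warns against. All figures (team size, rates, completed points) are invented for illustration:

```python
# Historical Sprint reports (illustrative numbers, not real data)
sprint_reports = [
    {"sprint": 1, "completed_points": 34},
    {"sprint": 2, "completed_points": 29},
    {"sprint": 3, "completed_points": 33},
]

# Average velocity in Story Points per Sprint
avg_velocity = sum(r["completed_points"] for r in sprint_reports) / len(sprint_reports)
print(f"Average velocity: {avg_velocity:.1f} SP/sprint")  # 32.0

# The tempting but misleading conversion to dollars:
man_hours_per_sprint = 400   # assumption: 5 devs * 10 days * 8 hours
rate_per_hour = 100          # assumption: average developer rate
dollars_per_story_point = man_hours_per_sprint / avg_velocity * rate_per_hour
# This treats a relative, uncertain measure as an absolute cost -- which is
# exactly why the result is misleading.
```

The calculation runs, but as the article argues, a Story Point is a range with built-in uncertainty, so attaching a dollar figure to it gives false precision.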
Myth #15: “You Cannot Predict a Project Budget with Agile”
I learned the Rational Unified Process, which seemed like the perfect solution to every problem – at least in theory. Having a perfectly designed system with a detailed plan and budget before writing the first line of code is a dream scenario. Then you just need to convert the designs to software, following the steps from the Gantt chart – and you’ll meet the budget. Then changes come. Changes in technology, environment, requirements, regulations – and they catch you in the middle of the implementation phase. Changes always come at a cost, and the budget plan will change a lot. Going back to design will cost even more.
Frankly, I’d say that Agile gives you more control over the budget. After the first few weeks, a working part of the software should be available. It’s easier to assess its value than to read pages of design documents, especially for a non-technical person. Full transparency of the process gives management deep insight into the current priorities and deliverables. Most importantly, Agile is designed for change. Whenever the business environment or requirements change, it’s a good moment to introduce the required modifications into the system. At the same time, less important “nice to have” features can be cut at any time for savings.
Agile Budgeting or even Lean Budgets is a very complex topic and might be hard to achieve. However, I’m sure that through practicing Agile development, delivery and management, the process will be appreciated even in the traditional budgeting model.
Myth #16: “You Need to Control the Time Spent on Each Task”
You need to ask what the real purpose of this exercise is. Does counting the hours spent on work actually measure the progress of the project? Or is it just the default “command and control” habit? The only known way to put any control on the process? I don’t believe that it’s possible to really measure the time spent on individual tasks including testing, post-test fixes, improvements etc. Remember that each feature “lives” as it is being developed. And even if you measured the hours – what would you do with this knowledge?
In the end, the overall progress of a project can be measured using some simple metrics usually provided by issue tracking tools.
- Version Burndown Chart (or Burnup Chart) – the number of User Stories (or the sum of Story Points) that are done and delivered to the UAT environment, compared with the items in the backlog. This answers the question “Where are we?” and allows you to extrapolate the trend to get an idea of the finish date.
- Time To Market – the number of days that Stories spend from being picked up by the team to the production release. Analyse the histogram, not just the average.
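Both metrics above can be computed from issue-tracker exports in a few lines. Here is a hedged sketch with invented numbers that also shows why the histogram matters more than the average for Time To Market:

```python
import math
from collections import Counter

# Burnup extrapolation (illustrative figures)
done_per_sprint = [8, 10, 9, 11]   # Story Points delivered to UAT each Sprint
backlog_total = 120                # total SP in the backlog
avg_per_sprint = sum(done_per_sprint) / len(done_per_sprint)   # 9.5
remaining = backlog_total - sum(done_per_sprint)               # 82
sprints_left = math.ceil(remaining / avg_per_sprint)           # 9 more Sprints

# Time To Market: days from pickup to production release (illustrative)
ttm_days = [4, 6, 6, 30, 5, 7, 6]
avg_ttm = sum(ttm_days) / len(ttm_days)          # ~9.1 days -- skewed by one outlier
weekly_buckets = Counter(d // 7 for d in ttm_days)
# {0: 5, 1: 1, 4: 1}: most Stories ship within a week; one outlier inflates the mean
```

The average suggests nine days, while the histogram shows the team normally ships in under a week with a single slow Story – the kind of signal an average alone hides.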
Full transparency of the Agile process combined with estimation in Story Points and decent issue tracking give you a proper foundation for putting detailed and useful metrics on the project and process. And this is much more interesting than just having a Burndown Chart and measuring Velocity. Here are some metrics we’ve used in our process:
- Quality of Sprint entry – the percentage of Stories picked to the Sprint that met the Definition of Ready. Ideally: 100%.
- Sprint stability – the percentage of in-Sprint additions/removals (in Story points). Ideally: 0%.
- Estimate accuracy – the percentage of Story Points completed vs. committed. Ideally: 100%. More than 100% implies overestimation, and that’s not good either.
- Tech-debt backlog growth – the size of the technical-debt backlog in Story Points compared to the last Sprint. Ideally: 0% or a negative value.
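These four metrics reduce to simple ratios over a Sprint’s bookkeeping. A minimal sketch – the field names and figures below are assumptions for illustration, not part of any particular tracker’s schema:

```python
# One Sprint's bookkeeping (invented figures)
sprint = {
    "stories_picked": 10,
    "stories_meeting_dor": 9,        # Stories that met the Definition of Ready
    "points_committed": 40,
    "points_completed": 36,
    "points_added_or_removed": 5,    # in-Sprint scope churn, in Story Points
    "tech_debt_sp": 21,              # tech-debt backlog now
    "tech_debt_sp_last_sprint": 20,  # tech-debt backlog a Sprint ago
}

quality_of_entry = 100 * sprint["stories_meeting_dor"] / sprint["stories_picked"]
sprint_stability = 100 * sprint["points_added_or_removed"] / sprint["points_committed"]
estimate_accuracy = 100 * sprint["points_completed"] / sprint["points_committed"]
debt_growth = (100 * (sprint["tech_debt_sp"] - sprint["tech_debt_sp_last_sprint"])
               / sprint["tech_debt_sp_last_sprint"])

print(f"Quality of entry:  {quality_of_entry:.0f}%")   # ideal: 100%
print(f"Sprint stability:  {sprint_stability:.1f}%")   # ideal: 0%
print(f"Estimate accuracy: {estimate_accuracy:.0f}%")  # ideal: 100%
print(f"Tech-debt growth:  {debt_growth:+.1f}%")       # ideal: <= 0%
```

Tracked Sprint over Sprint, these ratios show whether the process is becoming more stable and predictable, which is the point the next paragraph makes about Velocity.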
Any ups and downs in average Velocity (measured in Story Points) need to be compared with Team capacity (vacation, training etc.) and the above metrics. The ultimate goal is to render the process stable and predictable. Velocity is not the same as productivity.
However, there is something that we do measure in hours. It is Waste. Team members log every hour wasted on unnecessary efforts or when they wait for something. The whole waste log is categorized and then reviewed during every Sprint Retrospective together with the Product Owner. This provides a lot of information that can be summarized, plotted and shown to the management in order to support an improvement plan and investments with expected savings. This works best when multiple teams struggle with similar issues. Here are the categories we use – and some examples:
- Process Waste – manual work that can be automated, poor performance of tools, wasteful and useless meetings.
- Wait Time – excessive waiting for answers, waiting to resolve access or technical issues.
- Technical over-processing – working with legacy code, manual configuration of environments.
- Unplanned disruption – production issue support, unplanned request for assistance of external teams.
- Functional over-engineering – over-engineering features of low value, implementing functionality abandoned after delivery, insisting on custom solutions rather than reusing common patterns (mostly UI).
- Rework – excessive rework due to starting with incomplete requirements, throw-away work due to in-sprint change, repeat work across sprints, a lot of defect correction, “debt servicing”.
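A waste log like the one described is easy to summarise per category ahead of the Retrospective. A minimal sketch, with entries and hour figures invented for illustration:

```python
from collections import defaultdict

# Waste log entries: (category, hours, description) -- all illustrative
waste_log = [
    ("Wait Time", 3.0, "waiting for UAT environment access"),
    ("Process Waste", 2.5, "manual regression run that could be automated"),
    ("Wait Time", 1.5, "waiting for answers on requirements"),
    ("Rework", 4.0, "redo after in-Sprint requirement change"),
]

# Sum wasted hours per category
hours_by_category = defaultdict(float)
for category, hours, _desc in waste_log:
    hours_by_category[category] += hours

# Print categories, largest waste first -- input for the Retrospective
for category, hours in sorted(hours_by_category.items(), key=lambda kv: -kv[1]):
    print(f"{category:15s} {hours:5.1f} h")
```

Summed across several Sprints (and several teams), the same aggregation supports the management-facing summaries and improvement plans mentioned above.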
This tool allows us to identify the most critical issues with the process, environment, tooling (or lack of tooling), quality, etc., and to establish each problem’s scale. Sometimes the only improvement needed is to agree on a step in the process; sometimes it’s enough to hold a discussion and come to an agreement with management; but it may turn out that investing in automation and tools will be worth it.
Ultimately, make sure you’re “using Agile” rather than “doing Agile”!
Published at DZone with permission of Piotr Gwiazda. See the original article here.