Assumptions, Risks, and Dependencies in User Stories
Don't gamble with your project. Learn where these three often hidden elements may be in your user stories and how they can derail your project.
One of my clients discovered that a modest change to an existing application had an unexpected consequence: no policies could be written in the largest state for an entire market. An emergency fix was weeks out, and the interim solution was to fall back on a manual process…all because of one assumption in a requirements document. This was the first (but not the last) horror story that I heard when training teams to improve software quality at a major financial institution several years ago. No one was likely to forget that particular problem, but I still kept finding similar issues two years later.
Assumptions, risks, and dependencies appear in requirements documentation, use cases, and user stories, and the terms are sometimes used interchangeably. How well do your teams understand them? What are the consequences, strengths, and weaknesses of each? How do you use them correctly in software development?
An assumption in requirements is an assertion on which a given requirement or set of requirements depends. Examples include:
- Assumption: The test database will be available on 9/1/2019
- Assumption: The SME will join the team as scheduled on 6/1/2019
- Assumption: Localization files for supported locales will be completed in time for integration testing on 11/15/2019
Assumptions have been a problem in requirements documents for…well, forever. Assumptions allow a business analyst to document something without commitment. It’s a responsibility-free statement, and it is almost unique to software development. Imagine, if you will, applying the concept of an assumption to a legal defense: “Your honor, I assumed that my accuser was going to draw a gun.” Or to financial planning: “My retirement plan assumes a 20% raise every year.” Or to a marriage: “I assumed you would be OK with this carpet color” (or this car, TV, or dog).
In requirements documentation, assumptions are sneaky little time bombs that tend to blow up at inopportune times. An assumption is phrased as a declarative statement, without regard for the reality or practicality of its elements. You can literally assume anything. There are three fundamental problems with assumptions:
- An assumption is not quantified in terms of likelihood. It is nakedly stated as a fact.
- An assumption is not quantified in terms of impact. “What happens if this isn’t true?” is not part of the structure of an assumption.
- An assumption has no required or inherent follow-up. In my experience, assumptions are frequently placed in requirements documents and then forgotten about. It is “assumed” that someone has added a tickler in a project plan to check on the status of an assumption, but that frequently doesn’t happen.
Assumptions are an artifact left over from an earlier and less rigorous requirements era. Remember, prior to Ivar Jacobson’s methodology that introduced the world to Use Cases, there was effectively no formal process for documenting requirements, and certainly no consistent strategy for identifying, analyzing, and documenting them, much less sizing and prioritizing them. Assumptions were a way of attempting to formalize connections to external conditions (systems, tasks, states, etc.) that would later be called dependencies.
The term dependency in software development has its origin in compiler optimization, in which functional units such as statements, blocks, and functions interrelate to create a final executable. Later the term was applied to project management tasks, specifically with regard to their ordering.
A dependency is simply a connection between two items (often tasks or teams, but can also apply to components, modules, or other software structures). A dependency asserts that the dependent (client) state cannot change (start or finish) until the dependency (precedent) reaches a certain state. There are four types:
- Start/Start – item A can’t start until item B starts. Cars in a motorcade can’t begin moving until the lead car begins to move. In fact, each car in a motorcade can’t move until the car immediately in front of it moves.
- Start/Finish – item A can’t start until item B finishes. The second stage of a rocket can’t fire until the previous stage has finished.
- Finish/Start – item A can’t finish until item B starts. Less common in software development, but an example would be an evening security guard who can’t end their shift until the guard on the next shift clocks in and begins theirs.
- Finish/Finish – item A can’t finish until item B finishes. My accountant can’t finish our income tax preparation until we have finished assembling all requisite data from the year.
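The four dependency types above can be reduced to a single question: does the dependent item’s gated transition wait on B starting or on B finishing? A minimal sketch (the enum and function names are illustrative, not from any standard library):

```python
from enum import Enum

class DependencyType(Enum):
    START_START = "start/start"      # A can't start until B starts
    START_FINISH = "start/finish"    # A can't start until B finishes
    FINISH_START = "finish/start"    # A can't finish until B starts
    FINISH_FINISH = "finish/finish"  # A can't finish until B finishes

def gate_open(dep: DependencyType, b_started: bool, b_finished: bool) -> bool:
    """Whether A's gated transition (its start or its finish) may proceed,
    given the current state of the precedent item B."""
    waits_on_b_finishing = dep in (DependencyType.START_FINISH,
                                   DependencyType.FINISH_FINISH)
    return b_finished if waits_on_b_finishing else b_started
```

For example, the second rocket stage (Start/Finish) stays gated while the first stage is still burning: `gate_open(DependencyType.START_FINISH, b_started=True, b_finished=False)` is `False`.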
All dependencies are a type of risk, which we will examine shortly. But for the moment, consider that since a dependency prevents one element (team, task, or component) from progressing, it represents a potential Bad Thing. Dependencies have similar problems to assumptions, though to a lesser degree. Unless some rigor in their use is imposed, they can be identified and then ignored, and are often not quantified in terms of likelihood and impact.
Generically, a risk is something Bad that might happen (in this context we add “…that affects the timely and correct implementation of software”). More formally (at least in Project Management circles), a risk is a potential negative condition that can be quantified in two dimensions:
- How likely is this condition to occur?
- What is the impact should this condition occur?
There are a number of scales that might be applied here, but let’s use a scale from 1 to 5 for each dimension. The total quantifiable risk is the result when the two quantities are multiplied, resulting in values from 1 to 25. Hence, a risk such as, “California may change its reporting requirements for middle market commercial property coverage” might be quantified as a 4 for likelihood and a 5 for impact, resulting in a total risk value of 20. A risk such as, “Our Java development toolset is going to drop built-in support for our database” might be quantified as a 5 for likelihood but a 1 for impact (because open-source solutions exist), resulting in a total risk value of 5. The various risks that affect projects can, therefore, be placed on a relative scale, in which the first risk (20) is clearly higher than the second risk (5).
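The scoring above is simple enough to automate in a risk register. A minimal sketch using the two examples from the text (the register entries and scores are illustrative):

```python
# Hypothetical risk register: (description, likelihood 1-5, impact 1-5).
risks = [
    ("California may change reporting requirements for middle-market property", 4, 5),
    ("Java toolset may drop built-in database support", 5, 1),
]

def risk_score(likelihood: int, impact: int) -> int:
    """Total quantified risk is the product of the two scores: 1 to 25."""
    return likelihood * impact

# Rank highest-risk items first for prioritization.
ranked = sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)
for desc, likelihood, impact in ranked:
    print(f"{risk_score(likelihood, impact):2d}  {desc}")
```

Running this prints the California risk first (score 20) ahead of the toolset risk (score 5), matching the relative ordering described above.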
It is important to note at this point that the risks you identify should affect the project, not merely the company. “A major customer may drop our account because of price competition” primarily affects the company, unless software changes are being made at the insistence of that customer. Cataclysmic events affecting the company can ultimately affect the project in one form or another, and in practice often do, but they don’t necessarily. Be clear about what the identified risk affects.
Note also that we don’t use a scale that begins with 0, for two reasons: first, if a condition can’t happen (likelihood 0) or has no identifiable effect (impact 0), it isn’t a risk worth identifying; second, multiplying by 0 produces a total of 0 anyway. So focus on conditions that have at least a minimal chance of occurring and some slight impact.
It’s also useful to note how well this approach (originating in Project Management) dovetails with Agile. Identifying and quantifying risk gives Agile teams the ability to prioritize tasks. Some people insist that risk is a potentially negative external condition that can affect a project, but internal risks need to be identified and quantified as well. For example: “We will need to access a RESTful Web Service, and have no experience with such services” is an internal problem that a team might face, and could definitely affect the ability of the team to produce an effective solution. It makes sense in this case to document the situation as a risk, quantify it (possibly through a technical spike), and then plan for ways to mitigate any negative effects.
Which brings up the second beneficial aspect of using the Project Management approach to risk: it requires the creation of a plan to mitigate any negative impact on the project. In the case of the Web Service risk identified previously, it can be mitigated through technical training or access to skilled internal consultants or resources. This has an implication that isn’t immediately obvious: risks that can’t be mitigated are effectively not risks.
What does this mean? Flood insurance is a clear example of a mitigation plan to address a risk to low-lying property such as buildings or other assets. But we can’t create a mitigation plan to address the inevitable fact that the sun will eventually expand and consume the earth, at least with our current technology. In my career, I have seen companies merge and be forced to convert enormous software systems in order to function together, and there was nothing that could be done (at the time) to lessen that pain. At the very least, we need to understand what is possible with the resources available, and be realistic about how expensive risk mitigation can be.
Frankly, it’s amazing how many people in software development fail to understand this concept. Many years ago I was trained as an EMT, and the instructor tried to impress us with the need to prioritize our ability to help injured people (in emergency medicine this is called triage). Battlefield medics frequently have to decide between spending time and resources on an injured soldier that will likely die of their injuries and spending them on a soldier that has a chance of survival. It’s a harsh calculation that would feel very personal if you were the soldier in question, but from the standpoint of effective use of resources and maximizing survival rate, it’s the only strategy that makes sense. And the same thing is true of resources devoted to software development. Distinguishing what is real, concrete, and practical from what is theoretical, abstract or impractical is fundamental to effective software development.
Project management as practiced in companies with which I’ve worked specifies that a mitigation plan be considered and implemented for identified risks. This can be tweaked in various ways (such as not requiring a mitigation plan for any risk with a likelihood score of 1, for example). Since dependencies are a form of risk, it stands to reason that they should be included in any mitigation plan.
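That policy can be sketched as a simple filter over a register that mixes ordinary risks with dependencies treated as risks. The register entries and the likelihood cutoff below are illustrative assumptions; the text only suggests exempting likelihood-1 risks from a formal plan:

```python
# Hypothetical register: ordinary risks plus dependencies recorded as risks.
register = [
    {"desc": "State reporting rules may change",               "likelihood": 4, "impact": 5},
    {"desc": "Dependency: test database ready by integration", "likelihood": 2, "impact": 4},
    {"desc": "Toolset drops built-in database support",        "likelihood": 1, "impact": 2},
]

def needs_plan(item: dict, min_likelihood: int = 2) -> bool:
    # Require a mitigation plan unless likelihood falls below the cutoff;
    # min_likelihood=2 encodes the "skip likelihood-1 risks" tweak above.
    return item["likelihood"] >= min_likelihood

plans_required = [item["desc"] for item in register if needs_plan(item)]
```

With these sample scores, the first two entries (including the dependency) would require mitigation plans, and the likelihood-1 item would be exempt.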
- Train your teams to avoid the use of assumptions in user stories, use cases or any requirements document. Excise existing assumptions from such documents by examining each one in detail, then either a) removing the assumption unilaterally, or b) converting the assumption into a risk or dependency.
- Use dependencies specifically to document the order of events, steps or stages (not components), and be clear about how the term is being used. Treat all dependencies as risks.
- Use risk to quantify likelihood and impact, using whatever scale is appropriate for your organization. Use the term and scale consistently across teams and projects.
- Establish a mitigation plan for (at least) high-priority risks.
- Remove any identified risks that have no possibility of mitigation, most effectively by escalating them to any architectural planning group that oversees long-term development practices.
Opinions expressed by DZone contributors are their own.