You might hear me mention at some point that, as a Scrum Master, I don't really care about the technical details that make a product backlog item done. I only really care about some form of measurable daily progress and the fact that these backlog items actually get done.
So, does that mean that the method for building said software
is unimportant? If all you care about is Scrum and that which Scrum
defines for you, then yes, it's unimportant. If you care about actually
making working software (which is the real goal), then it's not just
important, it's vital.
Ask yourself this: why do you always hear about test-driven development, pair programming, code coverage, and other software engineering techniques when talking about Scrum if Scrum, as a framework, doesn't define these things at all? Because they solve
problems that Scrum teams will likely run into. When a team builds
software with traditional project management, there are certain problems
that tend to hide themselves (or at least make themselves easy to
ignore). Scrum brings these things right up to the front and forces
these problems to be solved. Most of the time these problems are
technical in nature and should therefore be resolved by the Scrum Team
(not the Scrum Master).
This is one of the reasons why eXtreme Programming (or XP) is often paired with Scrum. Many of the techniques found in eXtreme Programming tend to solve problems that a Scrum Team is likely to run into.
Let's talk about just one of these problems and its solution for now.
When a team first starts developing in short iterations
(which Scrum has you do), they will probably fall into a very common
trap. They will finish writing the code during the first part of the
sprint with the assumption that the latter half will be spent testing.
Here's the thing: this could actually work if you didn't find any bugs in testing, but you will. So in reality, you'll have the latter half of the sprint to find bugs, fix them, retest them, and fix any new bugs that the fixes introduced. It's a confounding problem because individual bugs can't be anticipated, which makes estimation a nightmare. What we need is some way to normalize the time we spend on quality in the software, so we spend less time chasing bugs.
Let's try this again: say I'm a developer in the middle of a sprint and I pull a backlog item from the sprint backlog, but instead of immediately writing code, I take a second to chat with a tester about how we'd like to test this item when it's done. While we don't know what the code will look like yet, we do know what the finished code should do, so we create a test case together and I go about writing code to make that test case pass. Once I'm done writing my code, I know it'll pass the test we wrote, but I pass it off to the tester just to get another set of eyes on it. He confirms what I already know: the backlog item is Done, and we're that much closer to finishing up the sprint.
In both cases, we still tested, but by writing the test up front, we gain some insight right away into how the code should behave.
Notice that I'm not telling you what tests to run, only that you should define the tests before development. The tests themselves may be anything from an actual unit test written before the code to something as simple as a conversation of "I'm going to do X, Y, and Z when I test it" before you sit down to make X, Y, and Z possible in the product. The more detailed the test plan is up front, the less likely you are to run into last-minute surprises, and that makes everyone's life easier.
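To make the unit-test-first end of that spectrum concrete, here's a minimal sketch. The backlog item and the `apply_discount` function are hypothetical examples of my own, not from any real product; the point is only the ordering: the test case is agreed on and written down first, and the implementation exists to make it pass.

```python
import unittest

# Hypothetical backlog item: "As a shopper, I can see a percentage
# discount applied to an item's price."
# The tests below were written first; the function came afterward,
# written specifically to make them pass.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount,
    rounded to cents. Written after the tests were agreed on."""
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # The "conversation with the tester" captured as an assertion.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

if __name__ == "__main__":
    unittest.main()
```

The same idea scales down to a checklist scribbled on a sticky note; what matters is that the definition of "passes" exists before the code does.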