The path of a requirement in a large organization is often foreign to those more familiar with small or medium companies. In smaller arenas, developers and testers help define requirements. Everyone has a clear view of what's being built because they had a hand in defining and refining the ideas. Do you have a question about what this feature should be or what this report should have? Go ask Mike or Sue. It was their idea.
Enterprise scale is different. When the program budget is half a billion dollars and there are a few thousand developers and testers involved, requirements are shifted to a specialized team. Often an entire division is tasked with seeking out and documenting requirements. These requirements are compressed and converted into documents that can be shared with teams of developers to implement, and teams of testers to verify. Each team can specialize and do its own job at peak efficiency. Unfortunately for most enterprises, this model doesn't work well.
It turns out that requirements are difficult to capture in documents. Many companies and teams have tried to use various types of documents, spreadsheets, and other tools. Many dollars have been spent trying to capture this lightning in a bottle, but so far everyone has been frustrated. The best result anyone has achieved is a sad acceptance that all requirements are bad, and a plan to rebuild most of the features two or three times until the customers are happy. Or until the customers are worn down enough to accept what's been produced.
We can do better, but it requires a different point of view. Let's start with Tony Brill's excellent battleship example.
The game of Battleship was once a staple of American homes. Kids put up a small divider so their opponents couldn't see their boards, then arranged their fleets of ships. They then took turns guessing (or "shooting at") coordinates to locate the "enemy" fleet. Once you get a hit, you can zero in your fire until your opponent cries out "You sank my battleship!" I spent more than a few hours trying to best my brother and friends, but there's an excellent analogy to our software efforts still hidden in this game.
There are two ways to play this game. You can play to be efficient or you can play to be effective.
The first, and most efficient, way to play this game is to not wait on your opponent. Come up with a strategy and "fire" all your shots. Place every peg you have, then find out if you placed them in the right spots. This strategy minimizes the amount of time spent playing the game. It's very efficient, just not very effective.
The second strategy is more effective, but can be seen as wasteful. It's not remotely efficient. Place your shot, and then ask your opponent if you hit the mark. No? Then place your next shot somewhere else. Yes? Then focus all your resources in that area. You'll soon sink any battleship you locate with this strategy.
Why is the second strategy seen as inefficient? It takes more time and is labor intensive. You'll spend less time (and fewer salary dollars!) by simply getting all the work done in a single pass. There's a great deal of comfort for the scheduling manager who can see a gate marked on a calendar, and who knows that requirements will be "done" on that date. History tells us the "completed" requirements aren't going to be very effective, but they're done.
The second strategy isn't efficient, but it sure is effective! Those labor-intensive checkpoints slow us down, but the feedback is invaluable. Strategy two doesn't guarantee a win, but it gives you a fighting chance.
How does this relate to requirements?
It's much more efficient to batch up the entire division's work and get it all defined in a single pass, but it's not very effective. Your requirements team hasn't gotten any feedback. The technical teams haven't seen anything. Are the requirements written in a format they understand? Is vital context missing? Can they implement them in a way that's acceptable to the customer? Who knows… but we're making great time!
The second strategy is slower, but more effective. That is, until you measure all the rework the "efficient" strategy incurs. Then you'll find the "slower" approach both faster and more effective.
The second strategy focuses on a tighter feedback loop with smaller slices of work. Have the requirements team complete a small amount of work, then pass that work over to the technical teams. Do the developers understand it? Can the testers verify it? Find out. First have discussions, but then have them implement the first set of features. Bring the running code back to the requirements team. Was everyone speaking the same language when they talked about the report or the preferences pane? No? Then let's adjust that misunderstanding before moving forward. Let's give the requirements team a chance to get better at writing effective requirements before they spend months writing them!
(And we're ignoring the efficiencies found when the development team is only a few weeks behind the requirements team… that's pretty amazing as well.)
If you want to get really crazy with this idea, you'll include developers and testers on the requirement teams, but that's a topic for another day.
A client recently told me that requirements teams are like quarterbacks who run all over the (American) football field throwing passes. The QB thinks he's throwing great passes. He'll tell you how good the passes are if you ask him. But the person who can really judge a pass is the receiver. The QB might throw a great pass, but let's see if the receiver can catch it. That's the true test of the pass. The best pass in the world is useless if it can't be caught.
The same is true of requirements. The best requirements are those the team can implement and verify. Slowing down for that interactive verification sounds inefficient, but it's not. Taking the time to aim the gun before firing slows down the shot, but ensures the target is hit.