The Resistance Against Requirements Specifications
Martin Fowler re-posted this article from 2004:

> Tests are always going to be incomplete, so they always have to be backed up with other mechanisms. Being the twisted mind that I am, I actually see this as a plus. Since it’s clear that Specification By Example isn’t enough, it’s clear that you need to do more to ensure that everything is properly communicated. One of the most dangerous things about a traditional requirements specification is when people think that once they’ve written it they are done communicating.
- Documenting the behavior of software takes effort and is not free. Like other activities in software development, you have to set time aside for it; otherwise it will not get done.
- If you assign this activity to someone who already has other tasks (coding, testing, etc.), chances are they will treat it as a lower priority, unless you explicitly ask them to set aside their regular tasks and work solely on documentation.
- But even when you do that, those people may feel they are not getting “work” done. There is no visible output apart from a documentation file, and there is less visible progress on functionality released to customers, even though the quality may be better.
- It helps to assign a dedicated person to the requirements, so that the programmers and testers can get back to doing what they are best at. A technical writer or a business analyst can do the job faster and with higher quality.
- Unfortunately, this raises the cost of software development and may not work for everyone.
The last point is something that I find some people unwilling to accept. It is better to BOTH write good specifications AND write good tests, because together they will cover most gaps in your understanding of the requirements. You can write specifications in different formats to target different audiences (say, more graphical for customers and more structured for your programmers). You can write manual acceptance test cases, automated unit tests, performance tests and so on. They are all useful at the margins.
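Fowler’s point that tests are incomplete specifications can be illustrated with a small sketch. The function and the discount rules below are entirely hypothetical, just to make the idea concrete: each assertion documents one agreed example of behavior, but the examples alone do not define the general rule.

```python
# Hypothetical example: a tiered bulk-discount rule.
def bulk_discount(quantity: int) -> float:
    """Return the discount rate for an order of `quantity` items."""
    if quantity >= 100:
        return 0.10
    if quantity >= 10:
        return 0.05
    return 0.0

# Specification by example: each assertion pins down one concrete case
# that customer and programmer have agreed on.
assert bulk_discount(1) == 0.0     # small orders: no discount
assert bulk_discount(10) == 0.05   # 10 or more items: 5%
assert bulk_discount(100) == 0.10  # 100 or more items: 10%

# But the examples are silent on, say, a negative or zero quantity --
# the written specification still has to state the general rule.
```

The tests communicate the happy-path cases well, but anyone reading only them cannot tell what should happen at the edges, which is exactly why they need to be backed up by other mechanisms.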
But everything costs time, money and effort. If you cannot afford it, that is fine. You or your customers may have to live with a little less quality. In smaller teams, this is the reality. For example, a small team of 2 developers will not have the time (or money) to thoroughly test every aspect of their web application on all possible browsers (and browser versions) across different operating systems and hardware. But when you cut back, you have to accept the risk of bugs in the field. Some people find this too real to handle. Instead, they want to have (keep?) their cake and eat it too. They want to get rid of something without accepting that there is a cost to doing so.
In general, in software development, more money spent wisely on people, tools, infrastructure and processes = higher-quality software delivered at a faster pace. People are not always wise with money, and you can gain some efficiency through intelligence, hard work, perseverance and discipline, but only up to a point. If some reasonably smart software company has a hundred times the money you have, chances are high that you are going to lose.