I recently sparred gently on Twitter with Scott Ambler over an assertion that repeatedly renegotiated schedules were evidence of unethical behavior. Others on the thread equated the practice with lying.
(Full disclosure: Scott is a highly respected thought-leader in the Agile community and the Chief Methodologist at IBM Rational. And I’m jealous.)
Bernie Madoff Syndrome
When we witness a problem, we search for the cause. Often the trail seems to lead to the (mis)behavior of a few ‘bad apples’.
Bad-apple theory suggests the recent near-collapse of the world-wide financial system was caused by the misdeeds of a handful of Bernie Madoff types. The solution is to punish these people and discourage imitators.
The theory is attractive because it has a simple story line. It’s morally satisfying. Cause and effect are tied neatly together, letting us off the hook from the hard work of changing the system itself.
I’m not going to argue that lying and deceit don’t sometimes play a role, but we should be wary of attributing the success of some projects to the virtues of Agile, and the failure of others to individual vice.
Many bad apples?
To find other explanations, we begin by observing that the sliding schedule problem is not uncommon. Time and software are old enemies:
“More projects go awry for lack of calendar-time than all other reasons combined.” — Fred Brooks, The Mythical Man-Month.
While it’s possible that engineering management attracts a disproportionate share of incompetent and unethical people, it’s more likely that the fault is systemic. When a problem recurs regularly in spite of the best efforts of bright, resourceful people, we can assume it has deep roots.
Bazerman and Watkins argue that some problems recur because they fall into a kind of “sweet spot” for failure, where political interest, organizational dysfunction and cognitive limits align against them. Software blowups fit this model nicely.
Bad brains, and good ones
Dr. Frederick Frankenstein: Ah! Very good. Would you mind telling me whose brain I DID put in?
Igor: Abby someone.
Dr. Frederick Frankenstein: [pause, then] Abby someone. Abby who?
Igor: Abby… Normal.
Unfortunately, it’s not just abnormal brains that cause problems. Daniel Kahneman won a Nobel Prize in Economics for his research on biased decision-making under uncertainty. Its application in the financial markets helped others win more lucrative prizes, even without cheating.
More broadly, Kahneman and his followers studied a host of common heuristics and biases. One finding: people have a clear bias toward optimistic time estimates, known as the Planning Fallacy.
“The phenomenon is not limited to commercial mega-projects… and its occurrence does not depend on deliberate deceit or untested technologies.”
In fact, it’s been shown to hold even for simple household tasks. Stranger still, a person can be pessimistic about plans in general (“Software projects always run late”) and still be too optimistic in their own planning (“I think I can finish on time”).
Ironically, more detailed planning can actually make people more optimistic. The theory is that optimism derives from a mental image of success; more detail makes for a more compelling image.
Another common cure, having people estimate their own work, may also backfire. While this should increase commitment and incentives for accuracy, it goes against human nature: people tend to be optimistic regarding their own plans, but more realistic regarding the plans of others.
While the planning fallacy affects estimates generally, the heuristic called Anchoring and Adjustment undermines our attempts at revision. People often unconsciously start with a reference point (the ‘anchor’) and then ‘adjust’ from that to derive an estimate. The problem is that the adjustments are usually too small. The current scheduled completion date can be an anchor that contaminates subsequent efforts to create a viable schedule.
From Madoff to Gandhi
How powerful is this effect? In one study, people were asked to estimate how old Gandhi was when he died, but first, half were asked whether he was older than 9; the others were asked whether he was younger than 140. (Obviously his age at death fell between the two.) The first group’s estimates averaged 50; the second group’s averaged 67: a difference of 17 years due to completely irrelevant anchors.
Then there’s the Confirmation Trap. Given an assertion like “We can do that in six months,” we have a strong and harmful tendency to seek supporting evidence and stop when we find some. But the presence of supporting evidence doesn’t make a plan achievable. Plans can only be proven infeasible beforehand, never proven achievable. Searching for proof that a plan is infeasible is an uncommon project activity, to say the least.
How common are these problems? In the words of one researcher: “One of the most robust findings in the psychology of prediction is that people’s predictions tend to be optimistically biased.” If you think you’re immune, you’re suffering from a positive illusion bias. These not only harm our estimates, they lead us to think that everything will work out fine, undermining our motivation to do something before things get out of hand. Other biases have similar effects:
- Discounting the Future: We tend to avoid small costs in the present that would prevent large problems in the future.
- Status quo bias: We tend to avoid actions involving any clear harm, even when the positive benefits of the action outweigh the negatives greatly.
In short, human nature leads us astray. It leads us to underestimate the task at hand and to delay corrective action when needed. If that qualifies as unethical, then we’re all guilty at least some of the time:
“[W]hen we make mistakes, we shrug and say that we are human. As bats are batty and slugs are sluggish, our own species is synonymous with screwing up.” — Kathryn Schulz, Being Wrong: Adventures in the Margin of Error.
These biases partly explain the persistence of sliding schedules. If nothing else, they should make us think twice about hammering people who miss a deadline. I plan to follow this up with posts on other factors that may play a role, but I’d rather not say exactly when I’ll be done.
(Sources on cognitive limits include Schulz; Bazerman & Watkins; Kahneman et al.; and Gilovich et al. All are listed in the bibliography with links to their Amazon pages.
Mel Brooks’ Young Frankenstein was my source on problems arising from installing an abnormal brain in a giant, home-made creature.)