Israel Gat, director of Cutter Consortium's Agile Product & Project Management practice, shares his thoughts on how technical debt affects the performance of development teams, when it is (or is not) appropriate to reduce it, and how it can be used as a tool for investment decision-making.
What would be the best definition you could give to explain technical debt?
I love Jean-Louis Letouzey's definition: “The sum of remediation costs for all non-compliances.” Precise; concise; actionable. In terms of explaining technical debt as a term of art, I usually discuss doing the system right versus doing the right system. Technical debt is all about doing the system right. But, it does not tell you whether you built the right system.
In various workshops I use the “scratch your left ear” mini-exercise as a physical illustration of the technical debt metaphor. I ask participants to scratch their left ear, first with the left hand, then with the right hand. Having scratched their ears, I tell them: “the awkwardness you felt when switching hands is your technical debt.”
What limits, misunderstandings, or misuses of the metaphor have you frequently encountered among the dev teams you have met?
The relationship between technical debt and bugs often takes some time to sink in. Uninitiated folks tend to think of it as “double dipping,” particularly when they see technical debt stories alongside bug-fixing stories in the Agile backlog. The best way I have found to explain the difference is to present technical debt as a leading indicator and bugs as a trailing indicator.
Another relationship that can be tricky is between technical debt and productivity. I see the two as counterbalancing each other. If my productivity is terrific but the technical debt is horrible, I did not accomplish much. Likewise, if my technical debt is respectable but my productivity is low, again I did not accomplish much. To have meaningful accomplishments, I need to attain high productivity subject to an acceptable level of technical debt. The two (technical debt and productivity) are like twins.
What kind of symptoms does technical debt generate in code/teams/management? Could you give some examples based on real cases?
Different symptoms manifest themselves at different levels, as follows:
1. Code: The thing that scares me most is dealing with code for which the Error Feedback Ratio(1) is higher than roughly 0.3. When technical debt accrues to the point where the Error Feedback Ratio is in this vicinity, the product team can easily lose control of the code. The team has, for all practical purposes, lost the ability to do any new function/feature work, as everyone is fully “mortgaged” to urgent bug fixing. This is not a pretty picture.
2. Teams: A major concern of various product teams is that the team will be managed in a heavy-handed manner based on their technical debt data. While this is quite a natural response, my recommendation to such teams is to come to a mutual agreement with the business on how code quality will be assessed, or measured, or both. As far as I am concerned, if you prefer bug removal statistics to technical debt, so be it. Or, if you prefer a two-tier code inspection (i.e., first by colleagues, then by superiors) as the basis for assessing quality, that is well and good in my book. But you must have some well-defined quality metric(s) as a boundary object between the business and development. Boundary objects based on technical debt levels are usually very versatile.
3. Management: I often witness an OMG reaction the first time I present the technical debt results. For example, uninitiated management might find it hard to believe that their level of technical debt exceeds the cost of developing the software. I often have to preempt an overly normative reaction by stressing “it is what it is,” emphasizing that the important thing is how we will reduce the technical debt over the coming six months (or some such period), not what the absolute level of technical debt is.
At Cutter Consortium, you provide companies with Technical Due Diligence services. What methodology do you use?
In Cutter Technical Due Diligence engagements, we use technical debt analysis as the means to validate the qualitative data we get through interviews with the CEO, CTO and CMO. If the story we heard is inconsistent with the “story” the code tells, we advise the venture capitalist to check further before making the investment decision.
The classic example here is the level of code duplication. A high level of code duplication invariably reflects a lack of alignment between biz and dev. The capacity to deliver what the business requires is lacking, so folks in the trenches hastily resort to code duplication. In other words, the product team takes on technical debt in order to meet delivery deadlines.
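A crude way to surface this symptom is to hash fixed-size windows of normalized source lines and count repeats. The sketch below is a minimal illustration, not how a commercial duplication analyzer works; the window size and whitespace normalization are arbitrary choices:

```python
from collections import Counter

def duplicated_windows(lines, window=5):
    """Count how many `window`-line chunks appear more than once.

    Collapses whitespace so trivially reformatted copies still match.
    """
    normalized = [" ".join(line.split()) for line in lines]
    chunks = Counter(
        tuple(normalized[i:i + window])
        for i in range(len(normalized) - window + 1)
    )
    return sum(n for n in chunks.values() if n > 1)

# Toy example: the same 5 lines pasted twice register as a duplicate.
block = ["a = 1", "b = 2", "c = a + b", "print(c)", "return c"]
print(duplicated_windows(block + ["# ...", "# ..."] + block))  # prints 2
```

Real tools add token-level normalization (so renamed variables still match) and ignore chunks below a significance threshold, but the principle is the same: repeated chunks are the signal.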
When biz/dev alignment is poor, we often experience a vicious cycle: the development manager, being pressured by the business, takes some technical debt, but typically can’t find the time to pay it back. Hence, the overall level of technical debt grows, impeding velocity. As a result the business further increases its pressure, and the development manager takes even more debt. It is an extremely difficult cycle to break.
In various engagements in which we witnessed this kind of vicious cycle, we advised the venture capitalist to be ultra cautious in his/her investment decision. Metaphorically speaking, the start-up lives on credit it might not be able to pay back.
What questions should we ask ourselves before deciding whether (or not) to fight code quality defects aggressively?
The primary question to ask is economic: will the investment in improving quality be superior to alternative forms of investment? For example, you could improve your brand through better quality; or you could take the money slated for technical debt reduction and use it to improve your brand by investing in a good marketing attribution system.
The key to successful reduction of technical debt in large-scale code bases is selectivity. If your unit test coverage on 10M lines of code is zero, it is unlikely you will be able to “stop the world” until you have developed comprehensive unit test coverage for all this code. Rather, you should be thoughtful about your strategy for improving test coverage over time. For example, you could put in place a policy mandating that whenever you fix a bug, the corresponding unit test is developed. By adhering to such a policy, little by little you will see your coverage grow.
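In practice, the policy means every fix commit carries a regression test that pins the old failure down. The function and the bug below are invented purely for illustration:

```python
def safe_average(values):
    """Average of a list of numbers.

    Hypothetical bug fix: the original version divided by len(values)
    unconditionally and crashed on an empty list; the guard below is the fix.
    """
    if not values:
        return 0.0
    return sum(values) / len(values)

# Regression tests committed in the same change as the fix,
# per the "every bug fix gets a unit test" policy:
def test_empty_list_no_longer_crashes():
    assert safe_average([]) == 0.0

def test_normal_case_still_works():
    assert safe_average([2, 4]) == 3.0

test_empty_list_no_longer_crashes()
test_normal_case_still_works()
```

Each such test is cheap on its own, but over hundreds of fixes the suite grows exactly where the code has historically been fragile, which is where coverage pays off most.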
In parallel with reducing technical debt, you need to be thoughtful about preventing technical debt, particularly prior to releasing code. For example, if you are using an Agile method, much of the effectiveness of your testing depends on “fix early and often.” If the resultant level of technical debt is still high toward the release deadline, you need to ask whether you are really being Agile or merely “playing Agile.”
(1) The Error Feedback Ratio is the number of new bugs (unintentionally) injected while fixing bugs, divided by the number of bugs fixed. For example, an Error Feedback Ratio of 0.3 indicates that I injected 3 new bugs for every 10 bugs I fixed (3/10 = 0.3).
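The footnote's formula is trivial to compute once a bug tracker links injected bugs back to the fixes that caused them; the function below is a minimal sketch of that arithmetic:

```python
def error_feedback_ratio(bugs_fixed: int, bugs_injected: int) -> float:
    """New bugs unintentionally injected, divided by bugs fixed.

    Per the interview, values around 0.3 or higher suggest the team
    is losing control of the code.
    """
    if bugs_fixed == 0:
        raise ValueError("no fixes recorded; the ratio is undefined")
    return bugs_injected / bugs_fixed

# The footnote's example: 3 new bugs traced back to 10 fixes.
print(error_feedback_ratio(bugs_fixed=10, bugs_injected=3))  # prints 0.3
```

The hard part is not the division but the attribution: deciding, via root-cause analysis in the bug tracker, which new bugs were actually injected by earlier fixes.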
Author's note: This interview was previously posted on TechDeb.org, a collaborative benchmark dashboard dedicated to technical debt and software quality.