The Legacy Code Retreat
Last Saturday, I was a coach at the Legacy Code Retreat Milan 2013 organized by @gabrielelana, along with @filippo and @andreafrancia. Here's a recap of the goals of the event and of the experience.
Why?
A Code Retreat is a day dedicated to deliberate practice, where programmers gather and work in pair programming on a single problem, one iteration at a time. Each iteration consists of a coding (or design) session of 45 minutes, followed by a retrospective held by the whole group.
The focus of Code Retreats is to improve a particular skill in a setting with no pressure and no deadlines; all code is deleted at the end of each iteration, to drive home the message that what matters is not the end result (you finish in the same place where you started) but the path taken and the practice gained along the way.
A Legacy Code Retreat is a particular kind of Code Retreat where legacy code is provided in the form of a source code repository containing a version of the Trivial Pursuit game (available for multiple programming languages). In our implementation the goal was different for each iteration:
- 1st iteration: produce a massive end-to-end automated test for the code, called Golden Master. This is strongly suggested by the existing material on the Legacy Code Retreat: you can only change code that is covered by automated tests.
- 2nd iteration: make it easy to add a new category of questions.
- 3rd iteration: add unit tests for the roll() function checking its output and final state.
- 4th iteration: find all the code smells, and fix 3 of them.
- 5th iteration: remove all duplication.
- 6th iteration: make the introduction of different penalty rules a one-line change (an Open/Closed Principle kata).
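To give an idea of the sixth goal, one possible shape for it is the strategy pattern: each penalty rule becomes a small object registered in a single list, so introducing a new rule is literally one line. This is a hypothetical sketch in Python (the class and attribute names are mine, not the kata's actual code):

```python
# Hypothetical Open/Closed sketch: each penalty rule is a small object
# with an applies() guard and an apply() effect. Adding a new rule is
# a one-line change to PENALTY_RULES; existing code stays untouched.

class SkipTurnPenalty:
    def applies(self, player):
        return player.in_penalty_box

    def apply(self, player):
        player.skips_turn = True


class LoseCoinPenalty:
    def applies(self, player):
        return player.wrong_answers >= 3

    def apply(self, player):
        player.coins = max(0, player.coins - 1)


# Introducing a different penalty rule becomes a one-line change here:
PENALTY_RULES = [SkipTurnPenalty(), LoseCoinPenalty()]


def apply_penalties(player):
    for rule in PENALTY_RULES:
        if rule.applies(player):
            rule.apply(player)
```

The point of the exercise is getting the legacy code into a shape where that list (or its equivalent) is the only thing that changes.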
Here is the presentation by Gabriele containing the introduction to the day and the goals.
The golden master
The first iteration is special, and serves as a setup for the rest of the day; the result of this iteration is not (at least conceptually) thrown away, but kept and run as an end-to-end test to catch any regression.
The chicken-and-egg problem of legacy code is always the same: the code is difficult to test because of unclear dependencies and global state, but you can't change it to simplify the job until you have some tests that ensure you're not breaking anything. Testing and refactoring each require the other.
In fact, the goal of the first iteration is to produce such an end-to-end test without modifying the code in the repository: you could say "just add new source files", or "only use safe refactorings" for the languages whose tools support them. The code itself has no particular global state (such as database tables), except for the fact that it calls random() functions to generate the results of dice throws.
This means executing the code multiple times always gives different results:
"Chet is the current player","They have rolled a 6","Chet's new location is 6",...
"Chet is the current player","They have rolled a 4","Chet's new location is 4",...
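In Python, for example, the nondeterminism comes from the global random number generator, and seeding it before a run makes the sequence of rolls repeatable. A minimal illustration (the `roll` function stands in for the kata's dice throw, it is not the actual code):

```python
import random


def roll():
    # Stand-in for the kata's dice throw: a value from 1 to 5.
    return random.randrange(5) + 1


# Seeding the global generator before each run makes it repeatable:
random.seed(42)
first_run = [roll() for _ in range(5)]

random.seed(42)
second_run = [roll() for _ in range(5)]

# The two runs produce the identical sequence of rolls.
assert first_run == second_run
```

The same idea works in any language that exposes a seed for its pseudo-random number generator.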
The challenge of producing the golden master is to initialize the global state of the random number generator so that executions of the game can be repeated. Once you have 100 runs with different initializations, you can store their output and repeat the same process after each change to find any difference in the output of the game. Any difference signals that refactoring has broken the legacy code, and `git reset` or your favorite undo must be used.
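The whole capture-and-compare cycle can be sketched as follows. This is a hypothetical Python outline: `run_game` stands in for the kata's game loop, and in the real exercise its body would be the untouched legacy code whose console output you capture.

```python
import io
import random
from contextlib import redirect_stdout


def run_game(seed):
    """Stand-in for the kata's game loop: seeds the generator and
    captures everything the game prints for that seed."""
    random.seed(seed)
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        # In the real kata this would be the unmodified game code.
        for _ in range(3):
            print("They have rolled a", random.randrange(5) + 1)
    return buffer.getvalue()


# Capture phase: record the output of 100 seeded runs once, before
# touching the code (in practice, stored on disk).
golden_master = {seed: run_game(seed) for seed in range(100)}

# Verification phase: after any refactoring, replay the same seeds
# and compare. Any difference means the behavior has changed.
for seed, expected in golden_master.items():
    assert run_game(seed) == expected
```

In practice the captured outputs are written to files and committed, so the comparison can be rerun after every refactoring step.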
To keep iterations independent, at the end of the first one we provided a golden-master branch on the repository to all pairs. In this way, even pairs who didn't reach the goal (which is normal given the short time frame) or only had a rough solution could use a tuned test case for the rest of the day; having multiple golden masters also allows pairs to change at each iteration, and participants to switch programming languages.
How did it go?
The event was sold out and all 30 people showed up (fortunately an even number, which makes pair programming easier). As coaches, we moved between pairs, trying not to interrupt the ones that were focused while offering help to the stuck pairs and to the ones wandering off the objective.
The final retrospective brought out several good points:
- Good format: each iteration is almost independent.
- Clearly defined goals.
- Variety of languages and people.
- Location and food (Talent Garden in Milan and breakfast offered by XPeppers).
And several bad points too, to resolve in the next editions:
- No theoretical introduction on how to work with legacy code.
- Difficulties in using Extract Class, compared with Extract Method and Extract Field, which are more local changes.
- Difficulties in introducing unit-level tests.