Test-Driven Emergent Design vs. Analysis
- 1. Write a failing end-to-end test.
- 1.1 Write failing unit test 1; make it pass.
- …
- 1.N Write failing unit test N; make it pass.
- 2. Make the end-to-end test pass, usually by changing the configuration of the object graph. Then restart from the next feature.
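One inner red-green iteration from the cycle above can be sketched in a few lines; the `ShoppingCart` class and its test are hypothetical examples of mine, not taken from any particular project:

```python
import unittest

# Step 1.1: a failing unit test, written before the production code exists.
class ShoppingCartTest(unittest.TestCase):
    def test_total_sums_the_item_prices(self):
        cart = ShoppingCart()
        cart.add(price=10)
        cart.add(price=5)
        self.assertEqual(15, cart.total())

# Step 1.2: the simplest production code that makes the test pass.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)
```

The production code does only what the test demands; anything more would be speculation that the next failing test should drive out instead.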
All phases are intertwined with constant refactoring, at the small scale (for the new classes) and at the large scale when the architecture does not support the new feature.
By the way, I learned from a talk by Matteo Vaccari a metaphor for the emergent design that TDD produces: improvising with a musical instrument. Kent Beck and other champions can do it well for almost any problem, but it has taken them many years to build this capacity.
Continuing with the metaphor, I would add that many of us probably know only a limited number of songs and chords to play. Part of our deliberate practice is to learn new chords, new instruments, and to exercise the ones we already know so as to execute them perfectly when required (I will now stop the musical references as I'm not a musician).
In fact, TDD is the fastest method I know to build an application you *already know* how to build. For example, in my case it is a web application with a chosen framework X: after you have built a few other applications, you already have a set of choices that you can borrow from one project to the next.
- What I already know when starting such an application are the possible layers (or whatever you call the horizontal divisions) and modules (the vertical divisions); which libraries we can use and for what purpose (ORMs like Doctrine); patterns to rely on like Ports and Adapters, Facade, and Data Mapper.
- There are also many design choices that I discover with TDD instead: domain logic (which is actually very important in many cases), the organization of the object graph within a single layer, and which pattern fits a scenario (Repository, Value Object, Strategy, Double Dispatch).
But I'm probably not capable of building a whole application by emergent design alone: it would be a lot more difficult. Actually, I suspect the main issue is the speed of the approach, as in those cases I found myself having to refactor aggressively to escape dead ends.
Good old analysis
What is the alternative to totally emergent design? Analysis performed before coding. The problem with Big Design Up Front? McConnell writes:
In ten years the pendulum has swung from ‘design everything’ to ‘design nothing.’ But the alternative to BDUF [Big Design Up Front] isn’t no design up front, it’s a Little Design Up Front (LDUF) or Enough Design Up Front (ENUF).
while Kent Beck (quoting McConnell) writes:
The alternative to designing before implementing is designing after implementing. Some design up-front is necessary, but just enough to get the initial implementation.
And Uncle Bob:
There should be no doubt that BDUF is harmful. It makes no sense at all for designers and architects to spend month after month spinning system designs based on a daisy-chain of untested hypotheses. To paraphrase John Gall: Complex systems designed from scratch never work.
However, there are architectural issues that need to be resolved up front. There are design decisions that must be made early. It is possible to code yourself into a very nasty cul-de-sac that you might avoid with a little forethought.
Notice the emphasis on size here. Size matters! ‘B’ is bad, but ‘L’ is good. Indeed, LDUF is absolutely essential.
Indeed, the goal would still be to not write a single line of code unless a test calls for it. However, it's possible to explore design choices at a higher level of abstraction; the choices explored on a board or on paper can be refuted by code and by what we learn about the problem, but let's make these assumptions:
- Decisions are not binding, and can be overturned.
- Many different solutions for a problem are explored on the board. It's rare that none of them will fit in the code.
- The board-code-board cycle is sufficiently short. This avoids a long design session that has to be trashed after ten lines of code: the suggestion I heard at the XPUG in Milan was to timebox the board sessions.
Then I think a timeboxed, non-prescriptive design session held before jumping into code for several hours can add more value than the time it costs; especially when the problem or the domain is unfamiliar, as any of us can build the usual framework CRUD application with our hands behind our backs.
What are the most effective techniques for non-coding design? A whiteboard is one of the most common physical tools when more than two developers are involved. But what other techniques and tools should we try?
First of all, an (informal) specification phase is included in XP and other Agile methods: user stories are written down, and iteration and release planning take place before coding. These are phases that work on the problem space; nevertheless they are a prerequisite. Let's explore the techniques for exploring the solution space instead.
Top-down decomposition? That's a joke. In my experience, control flow is most efficiently discovered by coding, while decomposing a procedure into smaller ones does little to improve modularity (as Parnas observed).
CRC cards are a tool for quickly modelling objects and their relationships. After experimenting with them at the local XPUG, I discovered that I was already using their process without physically using the cards (assigning responsibilities to objects I had in mind or had drawn on paper). CRC cards are not as precise as unit tests or any other form of code, but they are faster to use, as any new pattern can be explored quickly by writing down a few phrases on paper. Moreover, you grow attached to code more easily than to a design on paper, due to the time it took to build it.
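Once a design survives the board, a CRC card can be transcribed almost verbatim into a class skeleton. A hypothetical card for an `Order` object (the names and collaborators are mine, invented for illustration) might become:

```python
from collections import namedtuple

# Hypothetical collaborator: one line of an order.
LineItem = namedtuple("LineItem", ["price", "quantity"])

class Order:
    """CRC card transcribed into a class skeleton (hypothetical example).

    Responsibilities:
      - know its line items and compute the order total
      - tell whether a warehouse can ship it
    Collaborators:
      - LineItem, Warehouse
    """

    def __init__(self, line_items):
        self._line_items = list(line_items)

    def total(self):
        return sum(item.price * item.quantity for item in self._line_items)

    def can_be_shipped_by(self, warehouse):
        # Warehouse is a collaborator role: anything exposing has_stock(item).
        return all(warehouse.has_stock(item) for item in self._line_items)
```

The responsibilities become methods and the collaborators become parameters; the card itself survives as the docstring, cheap to rewrite when the design changes.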
By the way, there is an entire genre of books (unfortunately disappearing) about object-oriented analysis and design, like Wirfs-Brock's volumes on Responsibility-Driven Design, Evans's DDD book, and West's Object Thinking.
The feedback of code
This discussion of other design methods doesn't mean that we discover nothing by translating design hypotheses into code.
Verifying the feasibility of a design that looks good on paper is the first step; there are class members that we may have overlooked but that turn out to be necessary. The design you can expect from a whiteboard session is rough, and there are many decisions still to take while test-driving:
- method names and exact signatures;
- minor objects;
- new objects to extract, in particular Value Objects;
- refactoring of object references from fields to parameters, and vice versa;
- forms of duplication to eliminate that only become evident once the design is developed into code.
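The Value Object case is a typical one: two values that keep travelling together through signatures ask to be extracted. A hypothetical sketch of such an extraction (a minimal `Money` class, my example rather than one from a real project):

```python
class Money:
    """Hypothetical Value Object, extracted once an amount and its
    currency kept appearing together in method signatures."""

    def __init__(self, amount, currency):
        self._amount = amount
        self._currency = currency

    def add(self, other):
        # Operations return new instances: Value Objects stay immutable.
        if self._currency != other._currency:
            raise ValueError("cannot add different currencies")
        return Money(self._amount + other._amount, self._currency)

    # Value Objects compare by value, not by identity.
    def __eq__(self, other):
        return (isinstance(other, Money)
                and self._amount == other._amount
                and self._currency == other._currency)

    def __hash__(self):
        return hash((self._amount, self._currency))
```

None of this detail is visible from the whiteboard; it only emerges when the tests force the question of what equality and addition should mean for the new type.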
I learned my main workflow from Beck's TDD by Example: get to something that works (in this case, with a whiteboard-based design) and then refactor; this second step is really what makes a group of objects shine with respect to the original. Another option is to refactor first, and introduce the new functionality aided by the refactoring: this choice requires a design to refactor towards.
If you do not know which objects you want, you cannot write a test. Even for writing the first end-to-end test, or a few new unit tests, you have to know what to instantiate and what to call.
Tests let you experiment with solutions, so iterating over possible whiteboard designs seems good (and Uberto tells me he does it with mocks: I can see how that reduces noise as much as possible and lets the test describe the interaction between objects while cutting away fixtures and other concerns).
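Exploring a candidate design with mocks might look like this; a minimal sketch using Python's `unittest.mock`, where the `Checkout` and gateway roles are hypothetical names of mine:

```python
from unittest.mock import Mock

class Checkout:
    """Outside object under design; its collaborator exists
    only as a role (a mock), not as a real implementation."""

    def __init__(self, gateway):
        self._gateway = gateway

    def pay(self, amount):
        return self._gateway.charge(amount)

# The test describes only the interaction: no fixtures, no real gateway.
gateway = Mock()
gateway.charge.return_value = "receipt-1"
checkout = Checkout(gateway)
assert checkout.pay(100) == "receipt-1"
gateway.charge.assert_called_once_with(100)
```

If the whiteboard design is wrong, only this cheap interaction sketch is thrown away, not a full implementation with its fixtures.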
I have not yet reached the pure outside-in approach, where each new unit test drives out an object that fulfills the role of an existing mock (essentially starting with the outermost objects and implementing all the way down).
I would like to try pure emergent design, but I can only do it at a small scale (a small application, or a layer of a single application); given my abilities, I find it more efficient to do a little analysis and experimentation to define a big picture first.
But it's a big picture: a 10,000-meter view where you can see the boundaries of cities and the rivers, but not many details. Moreover, it rests on lots of scaffolding: once it is in place and I learn more about the problem, I can move many responsibilities around. Refactoring breakthroughs, as described by Evans, happen as you refine the design from the original analysis towards less duplication and more explicit concepts.
Opinions expressed by DZone contributors are their own.