Principles for Creating Maintainable and Evolvable Tests
having [automated] unit/integration/functional/… tests is great, but it is too easy for them to become a hindrance, making any change to the system painful and slow – up to the point where you throw them away. how do you avoid this curse of rigid tests that are too brittle, too intertwined, too coupled to the implementation details? surely following the principles of clean code not only for production code but also for tests will help, but is it enough? no, it is not. based on a discussion on our recent course with kent beck, i think the following three principles are important for having decoupled, easy-to-evolve tests:
- tests tell a story
- true unit tests + decoupled higher-level integration tests (-> mike cohn’s layers of the test automation pyramid)
- more functional composition of the processing
( disclaimer: all good ideas here come from kent beck and my course-mates. all misconceptions are genuinely mine. )
1. tests tell a story
if you think about your tests as telling a story – single test methods telling simple stories about small features, whole test classes about particular larger-scale features, and the whole test suite about your software – then you end up with better, more decoupled, and easier-to-understand tests. this implies that when somebody reads a test, s/he feels as if reading a story.
this actually corresponds nicely to what gojko adzic teaches about acceptance tests, i.e. that they should be a “living documentation” of the system for the use of the business users, and that this is by far their greatest benefit (and not, as is usually believed, that you get a set of regression tests or that you can show that the system works).
why should you write your tests as stories?
- it forces you to concentrate on telling what the code is supposed to do as opposed to how it should do it. your tests will therefore be more decoupled from the implementation and thus more maintainable, and more likely to discover a defect in the way a requirement is implemented.
- the story-telling approach forces you to abstract from unimportant details, for example by creating an abstraction layer between the test and the low-level details of what’s being tested (the layer may consist of something as simple as a helper method or something more elaborate).
- the tests will be easier to understand. as we all know, code is read many more times than it is written, so understandability is very important. if your tests are easy to read and grasp, they will serve as very good documentation of the code under test. future generations of programmers working on it will love you.
- if you find it difficult to write the test in a story-like manner then something is likely wrong with your api and you should change it.
how should you write tests to read as stories?
- not: should_return_20_if_sk_or_ba – this tells me nothing: what is 20? what are sk and ba? (for the curious: airlines, namely sas and british airways)
- yes: should_give_discount_for_preferred_airlines – this tells me what and why is done
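to make this concrete, a story-like test might look as follows. this is a minimal sketch: the PricingService class, its api, and the 20% discount figure are invented for illustration, not taken from any real code.

```java
// illustrative sketch only: PricingService and its api are hypothetical,
// invented to show story-like test naming, not taken from the article.
public class PreferredAirlineDiscountTest {

    // a minimal fake of the class under test
    static class PricingService {
        private final java.util.Set<String> preferred;
        PricingService(java.util.Set<String> preferredAirlines) { this.preferred = preferredAirlines; }
        int priceFor(String airline, int basePrice) {
            // preferred airlines get a (hypothetical) 20% discount
            return preferred.contains(airline) ? basePrice * 80 / 100 : basePrice;
        }
    }

    public static void main(String[] args) {
        shouldGiveDiscountForPreferredAirlines();
        shouldChargeFullPriceForOtherAirlines();
        System.out.println("ok");
    }

    // the test name tells what is done and why, not magic numbers and codes
    static void shouldGiveDiscountForPreferredAirlines() {
        PricingService pricing = new PricingService(java.util.Set.of("SAS", "BA"));
        if (pricing.priceFor("SAS", 100) != 80) throw new AssertionError("expected 20% discount");
    }

    static void shouldChargeFullPriceForOtherAirlines() {
        PricingService pricing = new PricingService(java.util.Set.of("SAS", "BA"));
        if (pricing.priceFor("LH", 100) != 100) throw new AssertionError("expected full price");
    }
}
```

notice that a reader learns the business rule from the method names alone, without decoding airline codes or magic numbers.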
proper level of abstraction – as mentioned above, to get the true benefit out of your tests and make them ready to live long, you need to keep them on the proper level of abstraction
- move unimportant details away from the test, into helper methods, objects (such as an ObjectMother), or setup. don’t obscure the main logic of the test. (of course, if you need extensive setup or if you have a lot of low-level code in your test, something is rather wrong with your test approach, the design of the tested code, or both. fix it first.)
- for example, some people claim that having a loop in a test is too low-level – if you need it, then perhaps your api is insufficient; its users might need to loop too, so why not just provide a suitable method that does it for you?
- by the way, nobody says that it is easy to write tests on the right level of abstraction. but it pays off.
a (rather contrived) example:
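in the same spirit, here is an illustrative sketch (all names invented): the loop lives in the production api, so the test itself stays on the story level of “an order’s total is the sum of its lines.”

```java
// contrived sketch with invented names: the point is that the loop lives
// in the production api (totalPriceOf), not in the test.
import java.util.List;

public class OrderTotals {

    static class OrderLine {
        final String item;
        final int price;
        OrderLine(String item, int price) { this.item = item; this.price = price; }
    }

    // the api provides the aggregate, so no test (or client) has to loop over lines itself
    static int totalPriceOf(List<OrderLine> lines) {
        int total = 0;
        for (OrderLine line : lines) total += line.price;
        return total;
    }

    public static void main(String[] args) {
        // story-level test: "an order's total is the sum of its lines"
        List<OrderLine> order = List.of(new OrderLine("book", 30), new OrderLine("pen", 5));
        if (totalPriceOf(order) != 35) throw new AssertionError("expected total 35");
        System.out.println("ok");
    }
}
```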
- minimal coupling to implementation details – the more your test code resembles your implementation code, the less useful it is. the whole point of tests is doing the same thing differently (and usually much more simply) than your implementation, so that you are more likely to catch bugs in it. copying & pasting code from the implementation and adjusting it slightly is therefore a really bad thing to do. if you can’t do things more simply (are you sure you can’t?), try to do them at least differently.
- mostly public api usage – to keep your tests as decoupled and maintainable as possible, without compromising the evolvability of your code, it seems reasonable to me to stick to the public api of the tested class as much as possible. if you want to check some low-level implementation details to be sure you’ve done them right, create another test case for them, so that when the implementation changes you can just throw it away, while the test exercising the “story” implemented by the code will be able to live happily on. (see never mix public and private unit tests!)
- testcases per fixture (a separate test class for each of the different setup needs) – i hope you already know that it is normal to have multiple test cases for one business class. actually junit forces you into it, e.g. by requiring a parametrized test’s data to be on the class level. if you group tests that require the same setup, you can move the setup code to the @before method and thus keep the tests themselves much simpler and easier to read. of course, it is always a little difficult to find the right balance between the number of test cases, fixtures, and test methods.
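the idea can be sketched without junit, with an explicit setUp() call playing the role of a @before method (all names below are invented for illustration):

```java
// sketch of "testcase per fixture" without junit: each test class owns one
// shared setup; setUp() plays the role of a @Before method. names invented.
public class FixturePerClassSketch {

    // fixture 1: tests that need a logged-in user share this setup
    static class LoggedInUserTests {
        String currentUser;

        void setUp() { currentUser = "alice"; }   // shared fixture for this class

        void shouldKnowWhoIsLoggedIn() {
            if (!"alice".equals(currentUser)) throw new AssertionError();
        }
    }

    // fixture 2: tests for the anonymous case get their own, different setup
    static class AnonymousUserTests {
        String currentUser;

        void setUp() { currentUser = null; }

        void shouldHaveNoUser() {
            if (currentUser != null) throw new AssertionError();
        }
    }

    public static void main(String[] args) {
        LoggedInUserTests t1 = new LoggedInUserTests();
        t1.setUp();                               // runs before each test method
        t1.shouldKnowWhoIsLoggedIn();

        AnonymousUserTests t2 = new AnonymousUserTests();
        t2.setUp();
        t2.shouldHaveNoUser();
        System.out.println("ok");
    }
}
```

because each class has exactly one fixture, the test methods themselves carry no setup noise and read as short stories.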
(sidenote: the xunit patterns book also lists tests as documentation as one of the main goals of testing.)
2. true unit tests + decoupled higher-level integration tests (-> mike cohn’s layers of the test automation pyramid)
junit perhaps isn’t a good example for discussing this – it is very special because it uses itself to test its behavior, and the tests must fail even if there is a defect in the test framework – but we can still perhaps learn something from it. what surprised me a lot is that it has a high share of integration tests*, around 35%. the recipe for long-lived, evolvable tests seems to be to write:
- true unit tests , i.e. tests that only check one class and don’t depend on its collaboration with other classes. such a test is not affected by changes to those collaborations (that are covered by the integration tests) and if the tested class itself changes, the test is likely to either keep up with the change – or, if it is a large-scale change, you just throw it away (likely together with the class being tested) and write a new test for the new design. (i don’t claim this is easier to implement and it also requires a particular way of coding and structuring the software [see #3 below] but it certainly seems to be a way to go.)
- decoupled integration tests that check the collaboration of multiple objects – not necessarily the whole application or a subsystem, just about any piece of definable functionality. again, the integration test should be telling a story about what the module does – and this story itself is unlikely to change even if the way the group of objects internally implements it evolves. it is thus crucial to test on the right level, where you aren’t bothered too much by concrete implementation details yet you aren’t too far from the code you want to test.
- kent mentioned a nice example of how they refactored the way junit 4.5 manages and executes individual phases of testing by replacing nested method calls with command objects (which made it possible to introduce @rule) – thanks to the integration test being on the level “i give some input to the execution subsystem and expect a particular output”, they still worked. if they were on a lower level and depended on the fact that there was a series of nested method calls, the refactoring would be much harder.
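the same survival property can be sketched in miniature (all names invented, not junit’s actual code): the integration test below pins down only “input in, output out”, so it keeps passing after nested method calls are replaced by command objects.

```java
// sketch with invented names, not JUnit's real internals: phases of an
// execution pipeline modelled as command objects. the "integration test"
// in main() checks only input -> output, not the call structure, so it
// would also have passed when the phases were nested method calls.
import java.util.List;

public class ExecutionPipelineSketch {

    // each phase as a command object (the refactored style)
    interface Step { String run(String input); }

    static String execute(String input, List<Step> steps) {
        String result = input;
        for (Step step : steps) result = step.run(result);
        return result;
    }

    public static void main(String[] args) {
        List<Step> steps = List.<Step>of(
            s -> s.trim(),            // a "setup"-like phase
            s -> s.toUpperCase()      // the main phase
        );
        // integration test on the story level: give input, expect output
        if (!execute("  hello ", steps).equals("HELLO")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

a test written against the individual phase methods instead would have broken the moment the calls were turned into objects.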
*) what is an integration test? according to one possible definition, a unit test is a test where, seeing the failure message, you can immediately pinpoint the piece of code or even the line where the problem originated. contrary to that, if an integration test fails, you usually can’t say why and have to dig into it or perhaps debug it a little.
3. more functional composition of the processing (i.e. kill the mocks!)
do you need a mocking framework to write tests? then you might be doing it in a suboptimal way. (disclaimer: there surely are different yet equally good approaches to nearly anything.) when kent was explaining the way he composes his programs, he drew a picture similar to this one:
what is interesting about it? it is not the typical network of objects where one object does something and calls another one to *continue* with the processing or to do a part of it. it is composed of two types of objects: workers that receive an input and produce an output, and integrators that delegate the work to individual workers and pass it from one to another with minimal logic of their own. the workers are very functional and thus easy to test with true unit tests (and, as said, if a different implementation is required, you may just throw the worker away with its test and create a new one), while the integrators should be as simple as possible (so the likelihood of a defect is smaller) and are covered by integration tests. (everything fits nicely together, doesn’t it?)
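a tiny sketch of this split (all names invented): the workers are pure input → output functions that need no mocks at all, and the integrator only wires them together, so an input → output check covers it.

```java
// sketch of the worker/integrator split (all names invented): workers are
// pure input -> output functions; the integrator only passes work along.
public class WorkersAndIntegrators {

    // worker 1: pure function, trivially unit-testable without mocks
    static String normalize(String raw) { return raw.trim().toLowerCase(); }

    // worker 2: also pure
    static String greet(String name) { return "hello, " + name + "!"; }

    // integrator: no logic of its own beyond passing work between workers
    static String greetingFor(String rawName) { return greet(normalize(rawName)); }

    public static void main(String[] args) {
        // true unit tests of the workers, no mocking framework needed
        if (!normalize("  Bob ").equals("bob")) throw new AssertionError();
        if (!greet("bob").equals("hello, bob!")) throw new AssertionError();
        // integration test of the integrator: just the story, input -> output
        if (!greetingFor("  Bob ").equals("hello, bob!")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

if normalize ever needs a different implementation, its test is thrown away with it; the integrator’s input → output test keeps passing unchanged.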
given that most bigger business software systems live for quite a number of years, it’s essential to write tests in such a way that they enhance, and not limit, the evolvability of the system. it is not easy, but we must make efforts towards it.
to succeed we must structure our code and tests in a particular way and approach our test methods, test classes, and test suites as telling stories about what the code under test does.
- uncle bob: manual mocking: resisting the invasion of dots and parentheses – uncle bob explains why he usually uses hand-coded stubs, talks about the difference between testing choreography and testing behavior, and when a mocking framework is really necessary
Opinions expressed by DZone contributors are their own.