Testing the Untestable and Other Anti-Patterns
The productive path to establishing and maintaining effective test automation is not easy. In this post, explore well-intentioned yet harmful anti-patterns.
Books on bad programming habits take up a fraction of the shelf space dedicated to best practices. We know what good habits are, or we pay convincing lip service to them, but we lack the discipline to prevent falling into bad habits. Especially when writing test code, it is easy for good intentions to turn into bad habits, which will be the focus of this article. But first, let's get the definitions right.
An anti-pattern isn't simply the absence of any structured approach, which would amount to no preparation, no plan, no automated tests, just hacking the shortest line from brainwave to source code. That chaotic approach is better described as non-pattern programming. Anti-patterns are still patterns, just unproductive ones, and the accepted definition makes two demands. First, the approach must be structured and repeatable, even when it is counter-productive. Second, a more effective, documented, and proven solution to the same problem must be available.
Many (in)famous anti-patterns consistently flout one or more good practices. Spaghetti code and the god object testify to someone's ignorance of, or disdain for, the principles of loose coupling and cohesion, respectively. Education can fix that. More dangerous, however, are the folks who never fell out of love with over-engineering since the day they read the Gang of Four, because doing too much of a good thing is the anti-pattern that rules them all. It's much harder to fight, because it doesn't feel like you're doing anything wrong.
Drawn To Complexity Like Moths to a Flame – With Similar Results
In the same way that you can overdose on almost anything that is beneficial in small doses, you can overdo any programming best practice. I don't know of many universally great practices, only suitable and less suitable solutions to the programming challenge at hand. It always depends. Yet developers remain drawn to complexity like moths to a flame, with similar results.
The usefulness of the SOLID design principles doesn't follow an upward curve where more is always better; it has a sweet spot, after which it's time to stop. Extreme dedication to the single responsibility principle gives you an explosion of specialized boilerplate classes that do next to nothing, leaving you clueless as to how they work together. The open/closed principle makes sense if you're maintaining a public API, but for a work-in-progress, it's better to augment existing classes than to create an overly complex inheritance tree. Dependency inversion? Of course, but you don't need an interface if there will only ever be one private implementation, and you don't always need Spring Boot to create an instance of it.
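A minimal sketch of that last point, with invented names: an interface with a single implementation that no other code will ever swap out, when one concrete class instantiated with a plain constructor would express the same thing with less ceremony.

// Dependency inversion overdone: an interface nobody else will implement.
public interface GreetingService {
    String greet(String name);
}

@Service
public class DefaultGreetingService implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// The simpler alternative: one concrete class, no container required.
public class Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}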
The extreme fringes on opposite sides of the political spectrum have more in common with each other than they have with the reasonable middle. Likewise, no pattern at all gives you the same unreadable mess as well-intentioned over-engineering, only a lot quicker and cheaper. Cohesion gone mad gives you FizzBuzzEnterpriseEdition while too little of it gives you the god object.
Let's turn, then, to test code and the anti-patterns that can turn those efforts into an expensive sinkhole. A lack of clarity about the purpose of testing is already a massive anti-pattern before a single line of code is written. It's expressed in the misguided notion that any testing is better than no testing. It isn't. Playing foosball is better than ineffectual testing, because the former is at least good for team morale and doesn't lull you into a false sense of security. You must be clear on why you write tests in the first place.
First Anti-Pattern: Unaware of the Why
Well, the purpose of writing tests is to bring Sonar coverage up to 80%, isn't it? I'm not being entirely sarcastic. Have you never inherited a large base of untested legacy code that has been working fine for years but is languishing at an embarrassing 15% test coverage? Now suddenly the powers that be decide to tighten the quality metrics. You can't deploy unless coverage is raised by 65 percentage points, so the team spends several iterations writing unit tests like mad. It's a perverse incentive, but it happens all the time: catch-up testing. Here are three reasons that hopefully make more sense.
First, tests should validate specifications. They verify what the code is supposed to do, which comes down to producing the output that the stakeholders asked for. A developer who isn't clear on the requirements can only write a test that confirms what the code already does, gleaned from inspecting the source. Extremely uncritical developers will write a test confirming that two times four equals ten, because that's what the (buggy) code returns. This is what can happen when you rush to improve coverage on an inherited code base and don't take the time to fully understand the what and the why.
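To make that concrete, here is a hypothetical example (the Calculator class is invented) of a test that codifies the bug instead of the specification:

class CalculatorTest {

    @Test
    void multiplies_two_numbers() {
        // The spec says 2 * 4 == 8, but the buggy implementation returns 10.
        // This test pins the bug in place instead of exposing it.
        Assertions.assertThat(new Calculator().multiply(2, 4)).isEqualTo(10);
    }
}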
Secondly, tests must facilitate clean coding; never obstruct it. Only clean code keeps maintenance costs down, gets new team members quickly up to speed, and mitigates the risk of introducing bugs. Developing a clean codebase is a highly iterative process where new insights lead to improvements. That means constant refactoring. As the software grows, it’s fine to change your mind about implementation details, but you can only improve code comfortably that way if you minimize the risk that your changes break existing functionality. Good unit tests warn you immediately when you introduce a regression, but not if they’re slow or incomplete.
Thirdly, tests can serve as a source of documentation for the development team. No matter how clean your code, complex business logic is rarely self-explanatory if all you have is the code. Descriptive scenarios with meaningful data and illustrative assertions show the relevant input permutations far more clearly than any wiki can. And they're always up to date.
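For example, a scenario like this (a hypothetical sketch; the domain and threshold are invented) documents a business rule at a glance:

class ShippingPolicyTest {

    @Test
    void orders_over_100_euros_ship_for_free() {
        // Meaningful data: just over the free-shipping threshold.
        var order = new Order(List.of(new Item("headphones", 101)));
        Assertions.assertThat(order.shippingCostEuros()).isZero();
    }
}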
Second Anti-Pattern: London School Orthodoxy
I thank Vladimir Khorikov for pointing out the distinction between the London and the classical school of unit testing. I used to be a Londoner, but now I'm convinced that unit tests should primarily target public APIs. Only that way can you optimize the encapsulated innards without constantly having to update the tests. Test suites that get in the way of refactoring are often tightly coupled to implementation details.
As long as you can get sufficient execution speed and coverage, I find no compelling reason for a rigid one-to-one mapping between source classes and corresponding test classes. Such an approach forces you to emulate every external dependency’s behavior with a mocking framework. This is expensive to set up and positively soul-crushing if the classes under test have very little salient business logic. A case in point:
@RestController
public class FriendsController {

    @Autowired
    FriendService friendService;

    @Autowired
    FriendMapper friendMapper;

    @GetMapping("/api/v1/friends")
    public List<FriendDto> getAll() {
        return friendMapper.map(friendService.retrieveAll());
    }
}
This common Controller/Service layered architecture makes perfect sense: cohesion and loose coupling are taken care of. The Controller maps the network requests, (de)serializes input/output, and handles authorization. It delegates to the Service layer, which is where all the exciting business logic normally takes place. CRUD operations are performed through an abstraction over the database layer, injected into the service layer.
Not much seems to go on in this simple example, but that's because the framework does the heavy lifting. If you leave Spring out of the equation, there is precious little to test, especially when you add advanced features like caching and repositories generated from interfaces. Boilerplate and configuration do not need unit tests. And yet I keep seeing things like this:
@ExtendWith(MockitoExtension.class)
public class FriendsControllerTest {

    @Mock FriendService friendService;
    @Mock FriendMapper friendMapper;
    @InjectMocks FriendsController controller;

    @Test
    void retrieve_friends() {
        // arrange
        var friendEntities = List.of(new Friend("Jenny"));
        var friendDtos = List.of(new FriendDto("Jenny"));
        Mockito.doReturn(friendEntities).when(friendService).retrieveAll();
        Mockito.doReturn(friendDtos).when(friendMapper).map(friendEntities);
        // act
        var friends = controller.getAll();
        // assert
        Assertions.assertThat(friends).hasSize(1);
        Assertions.assertThat(friends.get(0).getName()).isEqualTo("Jenny");
    }
}
A test like this fails on three counts: it is too concerned with implementation details to validate the specifications; it is too simplistic to have any documentary merit; and, being tightly bound to the implementation, it certainly does not facilitate refactoring. Even a "Hello, World!"-level example like this takes four lines of mock setup. Add more dependencies with multiple interactions, and 90% of the code (and your time!) is taken up by tedious mock setup.
What matters most is that Spring is configured with the right settings. Only a component test that spins up the environment can verify that. If you include a test database, it can cover all three classes without any mocking, unless you need to connect to an independent service outside the component under test.
@SpringBootTest
@AutoConfigureMockMvc
class FriendsControllerTest {

    @Autowired
    MockMvc mockMvc;

    @Test
    @WithAnonymousUser
    void get_friends() throws Exception {
        mockMvc.perform(get("/api/v1/friends"))
               .andExpect(status().isOk())
               .andExpect(content().json("[{\"name\": \"Jenny\"}]"));
    }
}
Third Anti-Pattern: Trying to Test the Untestable
The third anti-pattern I want to discuss rears its head when you try to write tests for complex business functionality without refactoring the source. Say we have a 1500-line monster class with one deceptively simple public method. Give it your outstanding debt and last year's salary slips, and it tells you how much you're worth.
public int getMaxLoanAmountEuros(List<SalaryReport> last12SalarySlips, List<Debt> outstandingDebt);
It’s different from the previous example in two important ways:
- The code-under-test centers on business logic and has high cyclomatic complexity, requiring many scenarios to cover all the relevant outcomes.
- The code is already there and testing is late to the party, meaning that by definition you can't work test-driven. That in itself is an anti-pattern: we're writing tests as an afterthought, only to validate what the code does.
The code may be very clean, with all complexity delegated to multiple short private methods and sub-classes to keep things readable. Sorry, but if there are no unit tests, it's more likely to be a big ball of mud. Either way, we can't reduce the essential complexity of the business case. The only way to reach full coverage of all the code behind this single public method is by inventing scenarios with different input (the salary and debt information), as the sketch below illustrates.
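Here is a minimal, classical-style sketch of such scenario-driven tests. The class name, constructors, and expected figures are all invented for illustration; only the public method signature comes from the example above.

class LoanCalculatorTest {

    LoanCalculator calculator = new LoanCalculator();

    @ParameterizedTest
    @CsvSource({
        // monthly salary, outstanding debt, expected max loan (euros)
        "3000,     0, 150000",
        "3000, 50000, 100000",
        "0,        0,      0"
    })
    void calculates_max_loan(int salary, int debt, int expectedMaxLoan) {
        var slips = Collections.nCopies(12, new SalaryReport(salary));
        var debts = debt == 0 ? List.<Debt>of() : List.of(new Debt(debt));

        Assertions.assertThat(calculator.getMaxLoanAmountEuros(slips, debts))
                  .isEqualTo(expectedMaxLoan);
    }
}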
I have never been able to do that comfortably without serious refactoring: delegating complex, isolated portions to their own classes and writing dedicated tests per class. If you're terrified of breaking things, changing the access level of private methods to default (package-private) scope feels safer: a test in the same package can then target those methods directly. But it's a controversial strategy, best avoided. You're breaking encapsulation and wading knee-deep in implementation details, which makes future refactoring even harder.
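As a sketch of that refactoring (names invented), a cohesive chunk of the monster class gets a public seam of its own, testable without touching access modifiers:

// Extracted from the monster class: a small, separately testable collaborator.
public class DebtAggregator {

    // Sums the monthly burden of all outstanding debts.
    // Debt::monthlyPaymentEuros is an assumed accessor on the Debt class.
    public int monthlyBurdenEuros(List<Debt> debts) {
        return debts.stream().mapToInt(Debt::monthlyPaymentEuros).sum();
    }
}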
You have to proceed carefully when tampering with untested production code, but there is no good alternative short of a full rewrite. Writing thorough unit tests for messy code that you inherited without the privilege to refactor is painful and ineffectual, because untested code is often too cumbersome to test. Such untestable code is by definition bad code, and we should not settle for bad code. The best way to avoid that situation is to be aware of the valid reasons why we test in the first place. Then you will never write tests as an afterthought.
Green Checkbox Addiction
The productive path to establishing and maintaining effective test automation is not easy, but at least the good habits make more sense than the well-intentioned yet harmful anti-patterns I pointed out here. I leave you with a funny one, which you might call green checkbox addiction: the satisfaction of seeing an all-green suite, regardless of whether the tests make any sense. That's the false sense of security I mentioned earlier, which makes bad testing worse than no testing at all. It’s like the productivity geeks who create spurious tasks in their to-do lists for the dopamine spike they get when checking them off. Very human, and very unproductive.