Make Your Test Fail First
The first step of test-driven development can seem unnecessary, but learn here why it is always important to test the test.
The test-first development cycle means that first we write a failing test and prove that it fails by running it and seeing the red bar. Then we implement the code so that the test passes and we see the green bar. Finally, we refactor the code and the test for quality and maintainability.
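The cycle can be sketched in a few lines of Python. This is a minimal illustration, not code from the article: the `slugify` helper and its test are hypothetical, and in a real workflow each step would be a separate run of the test suite.

```python
import re

# Step 1 (red): write the test first. If you run this file before
# slugify exists, the call below raises NameError -- the red bar,
# proving the test can actually fail.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): write just enough code to turn the bar green.
def slugify(text):
    # lowercase, drop punctuation, join the remaining words with hyphens
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 3 (refactor): with the bar green, clean up the code and the
# test while re-running to keep it green.
test_slugify()
```

The order matters: commenting out `slugify` and re-running is exactly the "see it fail" step the article describes.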
That's the test-first development cycle. It's pretty simple and straightforward but sometimes people try to short-circuit that cycle. They think that the goal of doing test-driven development (TDD) is to get to the green bar, but this is actually not true. The goal, initially, of doing TDD is to turn the red bar green. The key distinction here is that before we get to the green bar we have to have the red bar.
Always starting with a failing test can seem like busywork when we're first learning how to do TDD. After all, we know the test is going to fail initially because we haven't yet written the code to make it pass. So what's the purpose of making the test fail and seeing it fail?
Think about making the test fail and seeing it fail as your test of the test. People often ask me: if your tests verify your code, then what verifies your tests? The answer is actually twofold. The test tests the code, and the code tests the test. This is very much like the kind of confirmation we get from double-entry bookkeeping, where each credit is offset by a debit. When code and tests agree on the same result, it's pretty likely that they are in alignment.
But there's another test of the test that we do when doing TDD. It's a very simple test but it's an important one that we don't want to overlook. We verify that the test can actually fail by making it fail first and then we prove that the test passes for the reason that we intend it to by implementing the behavior and seeing the test turn green.
Every once in a while I'll write a test that I expect to fail, but when I run it, it passes. Often, this means that the implementation I was about to create already exists in the system. That's good news because it means I don't have to implement that behavior myself. It could also mean that I wrote a bad test, and if that's the case I want to know it right up front. I don't want to be putting bad tests in my code, because a test that cannot fail is worse than no test at all. It's a lie. I don't like putting lies in my code.
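Here is a hedged sketch of what such a "lie" can look like in practice. The `broken_parse_port` function and both tests are invented for illustration; the point is that neither test can ever go red, so they pass even against an obviously wrong implementation.

```python
def broken_parse_port(value):
    # deliberately wrong implementation: always returns 0,
    # so any meaningful test of it should fail
    return 0

def test_tautology():
    result = broken_parse_port("8080")
    assert result == result  # always true, no matter what the code does

def test_no_assertion():
    broken_parse_port("8080")  # computes a result, then asserts nothing

# Both "tests" pass despite the broken implementation -- green bars
# that verify nothing.
test_tautology()
test_no_assertion()
```

Writing either test first and watching for a red bar (before `broken_parse_port` existed, or against a stub that returns the wrong value) would have exposed the problem immediately.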
So, always start by writing a failing test first and watch it fail. It may seem like busywork at first but the first time it surprises you and you get the green bar when you were expecting the red bar, you'll find that it was worth the effort.
I love watching Uncle Bob Martin write code, and fortunately he has several videos where he demonstrates doing TDD for various coding activities. One habit of his that I really love: before he clicks to run his tests, which he does quite frequently, he says what he expects to happen. He'll make some changes to the code, and then before he runs his tests he'll say, "I expect this to fail," or, "I expect this to pass." He sets an expectation for himself and whoever else is coding with him.
Let's face it, it only takes a moment to see a test fail and the benefit of knowing a test is not doing what we expect far outweighs the extra effort it takes.
Published at DZone with permission of David Bernstein, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.