Never Trust a Passing Test
One of the lessons you learn when practicing TDD is to never trust a passing test. If you haven’t seen the test fail, are you sure it can fail?
Getting used to the red-green-refactor cycle can be difficult. It's natural for a developer new to TDD to jump straight into writing the production code. Even if you've written the test first, the instinct is to keep typing until the production code is finished, too. However, running the test is vital. If you don't see the test fail, how do you know the test is valid? If you only ever see it pass, is it passing because of your changes or for some other reason?
For example, maybe the test itself is not correct. A mistake in the test setup could mean we’re actually exercising a different branch, one that has already been implemented. In this case, the test would already pass without writing new code. Only by running the test and seeing it unexpectedly pass can we know the test itself is wrong.
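As an illustration (the function and names here are hypothetical, and Python's unittest stands in for whatever test framework you use), a setup mistake can route the test down an already-implemented branch, so it passes before any new code is written:

```python
import unittest

# Hypothetical production code: only the "standard" tier is implemented so far.
def shipping_cost(tier):
    if tier == "standard":
        return 5
    return 0  # "express" is not implemented yet

class ShippingCostTest(unittest.TestCase):
    def test_express_shipping_costs_something(self):
        # Setup mistake: we meant to exercise the unimplemented "express"
        # branch, but a typo sends us down the already-working path...
        cost = shipping_cost("standard")  # should have been "express"
        # ...so the assertion happens to hold and the test passes,
        # even though no new production code exists.
        self.assertGreater(cost, 0)
```

Run before writing any production code, this test goes green immediately; that unexpected pass is the signal that the test itself is wrong.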
Alternatively, there could be an error in the assertions. Have you ever written assertTrue() instead of assertFalse() by mistake? These kinds of logic errors are very easy to make in tests, and the easiest way to defend against them is to ensure the test fails before you try to make it pass.
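A hypothetical sketch of how an inverted assertion lets a bug slip through: the test below passes against broken production code, and only running it red-first would have exposed the mix-up.

```python
import unittest

# Hypothetical production code containing a bug: the check is inverted,
# so only the empty string is accepted.
def is_valid_username(name):
    return name == ""

class UsernameTest(unittest.TestCase):
    def test_empty_username(self):
        # We meant assertFalse (an empty username should be rejected),
        # but typed assertTrue. The test passes against the buggy code,
        # so both the test bug and the production bug survive.
        self.assertTrue(is_valid_username(""))
```

Had the correct assertFalse() been run first against a stub, the unexpected result would have flagged the inverted logic straight away.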
Failing for the Right Reason
It’s not enough to see a test fail. This is another common beginner mistake with TDD: run the test, see a red bar, jump into writing production code. However, is the test failing for the right reason, or is it failing because there’s an error in the test setup? For example, a NullReferenceException may not be a valid failure. It may suggest that you need to enhance the test setup; maybe there’s a missing collaborator. However, if you currently have a function returning null and your intention with this increment is to stop returning null, then a NullReferenceException is a perfectly valid failure.
This is why determining whether a test is failing for the right reason can be hard: it depends on the production code change you intend to make. That requires not only knowledge of the code but also enough experience with TDD to have an instinct for the kind of change each cycle should introduce.
A tragically common occurrence is that we see the test fail, we write the production code, the test still fails. We’re pretty sure the production code is right. However, we were pretty sure the test was right, too. Eventually, we realize the test was wrong. What to do now? The obvious thing is to go fix the test. Woohoo! A green bar. Commit. Push.
Wait...did we just trust a passing test? After changing the test, we never actually saw the test fail. At this point, it’s vital to undo your production code changes and re-run the test. Either git stash them or comment them out. Make sure you run the modified test against the unmodified production code: that way you know the test can fail. If the test still passes, your test is still wrong.
TDD done well is a highly disciplined process. This can be hard for developers just learning it to appreciate. You’ll only internalize these steps once you’ve seen why they are vital (and not just read about it on the internets). Only by regularly practicing TDD will this discipline become second nature.
Published at DZone with permission of David Green, DZone MVB. See the original article here.