
Handling Noisy Unit Test Suites


Learn strategies for cleaning up a test suite full of unit tests that fail but have never been fixed for one reason or another.



Of the many reasons software developers complain about unit testing, dealing with noisy test suites is one of the biggest. And the longer a piece of software has been around, the noisier its suite gets. To clarify, by "noise" I mean tests that constantly fail, but you know (or think you know) it's OK anyway, so you just let them be. Or tests that sometimes fail and sometimes pass, but no one has ever bothered to figure them out or fix them. And then there are tests that are legitimately failing because the code has changed and the test needs to be updated. All of this noise is screaming out for our attention, but the catch-22 is that the more noise there is, the less likely we are to do anything meaningful about it.

But guess what? Somewhere in that noise of "failed but OK" tests are real problems that you wish you knew about. Think of it like using a spell checker. If you don't keep up with it, you'll get flagged on all kinds of things you don't care about (industry jargon, proper names, and so on) that aren't real spelling problems. But hiding somewhere in that mess are the embarrassing mistakes you actually made: silly misspelled words that you want out of there. And of course, there are tons of spelling errors out in the world, but unlike with your software, there's not a lot of inherent risk there, just a little embarrassment.

And yet, unit test suites are generally in that same state: lots of noisy results that we get used to seeing and ignoring, which unfortunately hide real results that we need to know about and understand. In many organizations, the solution is for someone to schedule a sprint to clean up the test suite every now and then, anywhere from a couple of months apart to a couple of years. A large amount of time is spent getting the suite as clean as humanly possible, but inevitably the problem comes right back, and more quickly than you'd expect. This creates a negative feedback loop: no one wants to clean up the tests because they think the tests will just be noisy again by the next time around.

The answer is to take a more systematic approach, one that eliminates tedious, useless cleanup sprints and keeps test suites from becoming noisy in the first place.

Minimizing Noisy Test Suites

To do so, it's important to understand what it means when a unit test fails. It boils down to three reasons, with simple fixes:

  1. The code is broken. So, fix the code. (This is ideally what clean test suites are telling you.)
  2. The code was changed properly and now the test is broken. So, fix the test to match the new code. (If your code is changing, you can expect that this is happening. A strong reason to work on tests when you’re working on the code.)
  3. The test is wrong and the code is fine. So, fix the test. (Or maybe remove it. But the key is – don't ignore the test.)

Now, you might be thinking: what if a ton of my test cases fit into that third category? How is this any help? Let's break that down.

The noise usually comes down to a few basic problems: bad tests, fragile tests, or poor assertions. Bad tests are tests that don't do their job properly: either they're testing more than they should, or they're depending on data that is inconsistent or subject to change based on external conditions.
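
As a hypothetical illustration of "testing more than you should" (the Receipt class and its methods are invented for this sketch, not from any real codebase), compare asserting on the actual requirement with asserting on an incidental detail:

```java
import java.util.List;

// Hypothetical example: a receipt whose only real contract is the total.
class Receipt {
    private final List<Double> items;
    Receipt(List<Double> items) { this.items = items; }
    double total() { return items.stream().mapToDouble(Double::doubleValue).sum(); }
    // Incidental detail: a display string that may change with any formatting tweak.
    String display() { return "TOTAL: $" + total(); }
}

public class ReceiptTest {
    public static void main(String[] args) {
        Receipt r = new Receipt(List.of(2.0, 3.0));
        // Good: assert on the requirement (the total is correct).
        if (r.total() != 5.0) throw new AssertionError("total");
        // Bad (noisy): asserting the exact display string couples the test
        // to formatting that has nothing to do with the requirement, e.g.:
        //   if (!r.display().equals("TOTAL: $5.0")) ...
        System.out.println("ok");
    }
}
```

The second assertion would start failing the moment someone changes the currency formatting, even though the code still meets its requirement.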

To minimize the noise, make sure that for each test that’s giving you problems (or better yet all your tests), you have a good answer to these two simple questions:

  1. What does it mean if the test passes?
  2. What does it mean if the test fails?

If, for any test, you don't have a reasonable answer to both of these questions, that test needs improvement.

Improving Fragile Tests

Fragile tests are those that are easy to break. Again, this is often a symptom of lazy assertions: just because something can be checked doesn't mean it should be checked. Each assertion should have real meaning pertaining to the requirement that the code under test fulfills. Common culprits include date/time-sensitive assertions, OS dependencies, filename/path dependencies, third-party software installations, partner APIs, etc. Make sure you're only asserting what you minimally need to in order to have a good test, and make sure that everything the test needs is part of your source control and build system.
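
For example, a date/time-sensitive test can usually be made deterministic by injecting a clock instead of reading the system time. This is a minimal sketch using the standard java.time.Clock; the License class is invented for illustration:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Hypothetical example: code that decides whether a license has expired.
class License {
    private final LocalDate expiry;
    License(LocalDate expiry) { this.expiry = expiry; }

    // Fragile version (don't do this): LocalDate.now() makes the test's
    // outcome depend on the day it happens to run:
    //   boolean isExpired() { return LocalDate.now().isAfter(expiry); }

    // Testable version: the caller injects the clock, so a test controls "today".
    boolean isExpired(Clock clock) { return LocalDate.now(clock).isAfter(expiry); }
}

public class LicenseTest {
    public static void main(String[] args) {
        License l = new License(LocalDate.of(2020, 1, 1));
        Clock before = Clock.fixed(Instant.parse("2019-12-31T00:00:00Z"), ZoneOffset.UTC);
        Clock after  = Clock.fixed(Instant.parse("2020-01-02T00:00:00Z"), ZoneOffset.UTC);
        if (l.isExpired(before)) throw new AssertionError("should not be expired yet");
        if (!l.isExpired(after)) throw new AssertionError("should be expired");
        System.out.println("ok");
    }
}
```

The same injection idea applies to filesystem paths and external services: pass the dependency in, and the test supplies a fixed, controlled value.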

Other bad assertions are those that are constantly in a failed state, but you don't mind releasing anyway ("Oh, thooose are OK, don't worry about it"), or those that are in a constantly changing state ("It was fine before, and yesterday it was failing, but today it's fine!"). If the code is in flux, it might be OK to have constantly changing results for a short time, but in the long term, it should be unacceptable. You need to understand why the test outcome keeps changing, or why you think it's OK for a test to fail and still release. Doing peer review on your unit tests, including the assertions, will go a long way toward fixing this problem permanently. (An extra benefit of peer review? It's much easier to survive in a compliance environment where tests are part of mandated oversight.)

Assessing broken tests is truly a great place to do most of your cleanup. I’d challenge you to look hard at tests that have been failing for months or even years. Ask yourself if they’re really adding value. Remember, you’re ignoring the results anyway, so honestly what good are they? Removing tests you ignore will free you to focus on tests that matter, and actually improve your overall quality. 

And so it becomes fairly simple (although it might take an initial investment of time). To clean up, simply observe the following best practices:

  • Run tests regularly so they don’t get out of date – work on them with the code.
  • Remove tests that always fail, or fix them.
  • Remove tests that constantly flip state (pass/fail) or tighten them up.
  • Peer review unit tests.
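
The second and third practices can be partially automated if you keep a pass/fail history per test. This is a minimal sketch (the test names, histories, and flip threshold are all made up for illustration) that classifies each test as always-failing, flaky, or stable:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: given each test's recent pass/fail history,
// flag always-failing tests (remove or fix them) and tests that keep
// flipping state (tighten them up).
public class NoiseReport {
    static String classify(List<Boolean> history) {
        long fails = history.stream().filter(passed -> !passed).count();
        int flips = 0;
        for (int i = 1; i < history.size(); i++)
            if (!history.get(i).equals(history.get(i - 1))) flips++;
        if (fails == history.size()) return "always-failing";
        if (flips >= 3) return "flaky";  // arbitrary threshold for this sketch
        return "stable";
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> runs = Map.of(
            "testTotals",   List.of(true, true, true, true),
            "testLegacyIo", List.of(false, false, false, false),
            "testUpload",   List.of(true, false, true, false));
        runs.forEach((name, h) -> System.out.println(name + ": " + classify(h)));
    }
}
```

A real version would read the histories from your CI system's result archive, but the classification logic is this simple.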

And of course, don’t forget to use automation to do the tedious work so that the time you do spend on writing tests is more productive, allowing you to create tests that are less noisy.

Using Test Automation

Taking advantage of automated software testing helps make unit testing less tedious. If you let automation handle the simple, tedious parts (which computers are good at), it frees you up to do the things that require actual human intelligence (which you are good at). For example, let automation create the first working pass of your xUnit test cases: simple code that is very tedious to write by hand. If you let a tool generate your getter/setter test methods automatically, you can save tons of time for other, more interesting things.
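
To give a feel for the boilerplate such a tool takes off your hands, here is a hand-rolled sketch (the Customer bean is hypothetical, and for simplicity it only handles String-typed properties) that round-trips every getter/setter pair via reflection:

```java
import java.lang.reflect.Method;

// Hypothetical bean with the usual getter/setter boilerplate.
class Customer {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class BeanCheck {
    // For each setX(String) / getX() pair, set a sample value and
    // assert that the getter returns it. A generation tool would emit
    // equivalent per-property test methods for you.
    static void roundTrip(Object bean) throws Exception {
        for (Method setter : bean.getClass().getMethods()) {
            if (!setter.getName().startsWith("set")
                    || setter.getParameterCount() != 1
                    || setter.getParameterTypes()[0] != String.class) continue;
            Method getter = bean.getClass()
                    .getMethod("get" + setter.getName().substring(3));
            setter.invoke(bean, "sample");
            if (!"sample".equals(getter.invoke(bean)))
                throw new AssertionError(setter.getName() + " round-trip failed");
        }
    }

    public static void main(String[] args) throws Exception {
        roundTrip(new Customer());
        System.out.println("ok");
    }
}
```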

When we get more sophisticated with test automation, tools can help even further with some of the trickier parts of unit testing, such as creating and configuring stubs and mocks. The more you take advantage of automation, the less time unit testing will take — and it will be a lot less boring as well. If you're using Java, take a look at our new Unit Test Assistant, integrated into Parasoft Jtest. It does all of these things, and a lot more, making unit testing not just easier, but far more enjoyable.
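
To make the stub idea concrete, here's a minimal hand-rolled stub (all names here are hypothetical, invented for the sketch); configuring this kind of replacement dependency is exactly what mocking tools automate:

```java
// Hypothetical dependency: slow, flaky, and network-bound in production.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

class Checkout {
    private final PaymentGateway gateway;
    Checkout(PaymentGateway gateway) { this.gateway = gateway; }
    String purchase(String account, double amount) {
        return gateway.charge(account, amount) ? "PAID" : "DECLINED";
    }
}

public class CheckoutTest {
    public static void main(String[] args) {
        // Stub: always approves. No network, fully deterministic,
        // so the test exercises only Checkout's own logic.
        PaymentGateway approving = (account, amount) -> true;
        Checkout checkout = new Checkout(approving);
        if (!"PAID".equals(checkout.purchase("acct-1", 9.99)))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```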



Published at DZone with permission of Arthur Hicken, DZone MVB. See the original article here.

