
When to Ignore Your Code Coverage

Having 100% code coverage doesn't make your code invulnerable, or even necessarily robust.


As a test engineer, I sometimes get confused looks when I don't aggressively push to improve code coverage.

Don’t get me wrong: code coverage is a useful tool. It forces people to review code paths they might have ignored or forgotten, and it provides an easily digestible metric for comparing the state of one application’s test suite to another’s.

However, I sometimes see an overreliance on code coverage. Test quality and code coverage are two different things. Code coverage, when overused as a measurement of test quality, can wind up harming test quality rather than helping it. Here are my two key indicators that code coverage tools may be damaging a testing strategy.

Too Much Time Is Spent on Cases That Don’t Matter

One good qualitative measure of test quality is how well high-impact workflows or integrations are tested. If you’re an online shop, it doesn’t really matter that the layout is correct if customers can’t do something essential like log in or purchase a product. Consequently, the purchasing workflow should be well tested to ensure the critical path works.

Code coverage requirements that are too high force people to spend time covering all workflows, but they don’t favor critical workflows over low-impact ones.

Testing is a zero-sum game. Time spent testing one thing is typically time not spent testing another. If developers are spending time testing edge cases just to satisfy code coverage, they are missing out on time that could be used to test more important cases.

The cases I most often see taking up time for little benefit are those testing code that should never be hit and code that intentionally throws errors. Sometimes these cases are worth testing, but deciding which ones requires some critical thinking.

If code throws an error or fails out in a situation the program should never be in, then chances are something else has already gone terribly wrong. In an ideal world, another test case somewhere should have caught that. I’d rather have that test case than the one verifying the error is an error.

Verifying that an error is thrown when terrible things are already happening is a bit like verifying that black smoke billows out of the car after the engine has caught fire. Sure, maybe black smoke is better than noxious green smoke, but it would have been better to stop the car from catching fire in the first place.
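As a hedged sketch of what I mean (the function and states here are hypothetical), compare a coverage-driven test of a defensive branch with a test of the upstream guard that should make that branch unreachable:

def validate_state(state):
    # Upstream guard: reject bad states before they propagate
    if state not in ("pending", "active"):
        raise ValueError("invalid state: " + state)
    return state

def apply_transition(state):
    if state == "pending":
        return "active"
    elif state == "active":
        return "closed"
    else:
        # Defensive branch; unreachable if validate_state ran first
        raise RuntimeError("unexpected state: " + state)

# Coverage-chasing test: exercise the "unreachable" branch just to
# color the last lines green. A higher-value test verifies the guard
# that keeps bad states out in the first place:
try:
    validate_state("on fire")
    assert False, "expected ValueError"
except ValueError:
    pass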

One caveat: this doesn’t mean all edge-case testing done for code coverage is a waste. Edge cases and invalid inputs are important to test. The point is to think critically about which of those last few percentage points of code coverage are really worth the effort.

Testing Is Declared Finished When Code Coverage Is High

Code coverage gives a nice number, and it feels good to have 100% code coverage. Sometimes, though, a high coverage percentage can create the illusion of test quality. Showing a manager or a product owner that there is “100% coverage” can earn beaming smiles and praise, but they’ll just be confused when issues continue to arise.

100% code coverage does not mean there is 100% test coverage. You can prove this even with simple code. Here is an example of broken code with 100% test coverage, reasonable-looking tests, and serious issues.

# divide should return the numerator divided by the
# denominator, except when the denominator is 0. In that
# case it should return the value of the limit as the
# denominator approaches 0.
def divide(numerator, denominator):
    if denominator == 0:
        return float("inf")
    elif numerator > denominator:
        return numerator / denominator
    else:
        return denominator / numerator

# Case 1
assert divide(1, 0) == float("inf")
# Case 2
assert divide(27, 6) == 4.5
# Case 3
assert divide(5, 5) == 1


You can see how a black-box tester might write these tests and think they were fine: the greater-than boundary condition is checked, the divide-by-zero edge case is checked, a result that doesn’t divide evenly is checked, and code coverage is 100%.

Clearly, though, even in this simple case, 100% code coverage doesn’t mean there’s 100% test coverage. We actually need at least two more cases: Case 4, which fails because of a code error, and possibly Case 5, which fails because of vague requirements:

# Case 4
assert divide(1, 2) == 0.5
# Case 5
assert divide(-1, 0) == float("-inf")


Just because one input in every path returns the expected result doesn’t mean every input in every path returns the expected result. If tests only aim for high coverage and don’t explore other interesting cases, they will surely miss issues. 100% code coverage cannot be interpreted to mean that testing is complete.

This is particularly important with integration points and UI tests. Reliance on third-party code makes your code even more unpredictable. Even without code changes, the software’s behavior might change over time because of those integrations, so merely validating that every code path works for one input can’t be sufficient testing.

What to Do Instead

So, what can you do to use code coverage as a valuable tool without becoming over-reliant on it?

First, look at the code coverage, but give extra weight to high-impact or risky areas of code. For integration points, complex code, or heavily shared code, put some effort into reviewing what the code coverage actually means. All the important code paths should be covered, but that shouldn’t be the only coverage.
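For instance, if you happen to use coverage.py, its API can scope a report to just the modules you consider high impact. A minimal sketch, with hypothetical module paths and a placeholder test runner:

import coverage

# Measure only the high-impact areas (paths are hypothetical)
cov = coverage.Coverage(include=["shop/checkout/*", "shop/payments/*"])
cov.start()

run_test_suite()  # placeholder for however you invoke your tests

cov.stop()
cov.save()
cov.report()  # prints per-file coverage for just these modules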

Keep in mind the example above where we varied the input within a previously covered test path. The more complex the code, the more likely that input variance will uncover issues.
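A quick sketch of that idea, assuming the divide function above is in scope and pytest is available: the parametrized test below varies inputs within paths the earlier asserts already covered, and it is the extra inputs, not the extra coverage, that expose the bug.

import pytest

@pytest.mark.parametrize("numerator, denominator, expected", [
    (27, 6, 4.5),            # path already covered above
    (5, 5, 1),               # path already covered above
    (1, 2, 0.5),             # same else branch as (5, 5), but exposes the bug
    (-1, 0, float("-inf")),  # forces the vague-requirement question
])
def test_divide(numerator, denominator, expected):
    assert divide(numerator, denominator) == expected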

If you need quantitative metrics, you can improve them by combining code coverage with other measurements. Reviewing the number of tests in each area, the issues found, and the false positives, among other things, will give you a better idea of how useful your test suite is than relying on code coverage alone.

Hopefully this helps you avoid some common pitfalls of using code coverage. While code coverage is a useful tool, using it in isolation can give a misleading view of your test coverage.
