Ten Reasons for Full Test Coverage (Some of Which Might Be New to You)

Yeah, I've got a bee in my bonnet about 100% test coverage. I know this is a contentious subject and I know the arguments against going for full coverage. And yes, coverage alone is not enough – a test that increases coverage without making any relevant assertions is not a test but a sham. It's bogus (with one exception; see below).

Ok, we've got that out of the way. There are some more things to get out of the way, though:

    • This is a post about test coverage, not about (unit) testing per se. I think the software development community is beyond the question of the usefulness of testing by now.

    • I am a Java programmer, with some experience in Scala and Perl. I take my examples and mention tools from a Java/Scala context, but the concepts should apply to other languages, too.

    • There may be some very obvious reasons for test coverage that I skip, like finding bugs in an early stage of development. I'd rather point out a few somewhat less obvious advantages of the approach.

    • And, yes, here is a spoiler: 100% coverage is often not realistic, but hey, we can give it a try anyway...

So let’s get going. Here are my ten reasons to attempt full coverage. I hope at least some of them are new to you. Ten reasons is quite a lot to cover, so this is not going to be a short post. For the impatient, here is a management summary. Refer to the corresponding sections below for details.

    1. Complete tests provide a complete description of behaviour.
    2. Negative cases get your attention, so...
    3. … tests lead to defensive programming.
    4. There is more clarity about what got tested (well, everything, obviously).
    5. In the long run this is more efficient than partial coverage, not the other way around.
    6. There’ll probably be less dead code afterwards – unless you like to write tests for dead code.
    7. Test coverage results in better structure and cleaner code.
    8. There’ll be fewer anti-patterns.
    9. Tests let you learn new things.
    10. Testing results in tools that can be used in production code.

Complete Description of Behaviour

Do you know all of your code? Do you know every part of your code three months after coding? Do you know your colleagues’ code? I guess you don't, at least not in a non-trivial project. Well, I don't, anyway.

What ways are there to gain some understanding of your project's source code? Documentation? I have yet to see useful, complete, up-to-date documentation of source code. Then how about looking at the unit tests? This might not always help you with the big picture, but a test for some class with high coverage is likely to give a good behavioural description of that class. For this to work, it is helpful to have good test names. This is why I like the FlatSpec style of the ScalaTest framework, even when testing Java code. A Scala FlatSpec test looks like this:

class ChemElemTest extends FlatSpec with Matchers {
  "Function checkSymbolFor()" should "return false when the element arg is null" in {
    checkSymbolFor(null, "hg") should be(false)
  }
  it should "return false when the element arg is empty" in {
    checkSymbolFor("", "hg") should be(false)
  }
  it should "throw an exception when the database backend is gone" in {
    // ...
  }
}
Having gone through a test like this (even if you don't know squat about Scala), you should have a good picture of what the unit under test is supposed to do and what it expects from its callers.

Of course this can be emulated in other programming languages – e.g. in Java/JUnit 4 you could name your test

@Test
public void functionCheckSymbolForShouldDoThisOrThat() {…}

It's much less readable, but it will probably do the job. The @DisplayName annotation in JUnit 5 goes the same way as the Scala FlatSpec and makes test names more readable – as long as the framework the test runs in cares to display the @DisplayName.

Personally, I don't hesitate to use unit testing to test units that are larger than a single class. Yes, this costs running time – but my cat sits in front of the washing machine and watches its progress; I don't need to do the same thing with my running tests. I can continue coding instead. If you use unit tests for larger units, those tests will provide you with some insight into more complex aggregations than single classes. It's an option only, though, not a necessity, and the pros and cons of this are not the subject of this post.

Negative Cases and Defensive Programming

Let’s put reasons two and three into one section, because one implies the other. It is a truism that we need to test not only the sweet spots of our code but also the negative and bad cases, and when we go for coverage, it is hard to avoid those cases. When I start writing tests for a class (not test-first, though – but that might be the topic of another post), I look at a method and, being lazy, just throw null arguments at it. Because in the heat of coding I did not pay attention to such details, this usually leads to some NullPointerExceptions. So I am forced to decide right then and there if I want this to be the specified behaviour or if I want to do something about it within the method. In the latter and more common case, I have to handle those exceptions, and since I already have a test, I can verify the method’s behaviour.

I’ll do the same with arguments with a certain range, undefined/forbidden values, etc. Having done all this plus the positive test cases, all the while checking the coverage, I automatically arrive at a complete behavioural description of the method (see section above). Furthermore, if I didn’t do so before, I have now fortified my code against invalid parameters, which makes for more resilient and safer code. Think of the Defence in Depth approach to information security and transfer it to the layers in your program code. No method should trust its callers or its callees. Your tests can help make sure that it doesn’t.
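As an illustration, here is roughly what the hardened method might look like in Java – the element map and the matching rule are invented for this sketch; the guard clauses are the point:

```java
import java.util.Map;

class ChemElem {
    // Invented lookup table, purely for illustration.
    private static final Map<String, String> SYMBOLS =
            Map.of("hydrogen", "h", "helium", "he", "mercury", "hg");

    // Defensive guards: invalid input yields a defined result (false)
    // instead of a NullPointerException somewhere deep in the method.
    static boolean checkSymbolFor(String element, String symbol) {
        if (element == null || element.isEmpty()
                || symbol == null || symbol.isEmpty()) {
            return false;
        }
        return symbol.equalsIgnoreCase(SYMBOLS.get(element.toLowerCase()));
    }
}
```

Each guard clause is a branch that the coverage tool reports, so the negative test cases from above are exactly what drives such a method towards full coverage.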

More Clarity about What Got Tested

When I look at the coverage of the unit tests for some piece of code and I see something like 60%, what does that tell me? If the class is an entity class with some logic and a lot of accessor methods, I could probably reach those 60% just by calling the setters and getters while the application logic remains happily test-free. There is no way I can know the details unless they are provided by the testing framework and I look into it more closely. There is no way a 60% threshold, used to validate and reject code in a deployment pipeline, can result in anything reliable. As a matter of fact, there are only two measurements that make a definitive statement about test coverage: 0% and 100%.

So the argument I often hear, i.e. that testing accessor methods is a waste of time, is plain wrong – in the long run. Once I have tested them, I never need to worry anymore whether the real tests have been implemented – any gap between the accessor coverage and 100% is code that needs to be tested.

By the way, I do not write accessor tests by hand. I have a tool for that. There are several around; the one I use is OpenPojo. The very fact that accessors are too plain simple to be worth hand-written tests is what makes automation possible here.
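If you cannot or do not want to pull in a library for this, the core of such a tool is small enough to sketch yourself. The following is a simplified, hand-rolled version of the idea (OpenPojo's real API looks different): for every getter/setter pair found via bean introspection, set a sample value and check that the getter returns it.

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;

class AccessorTester {
    // Sets a sample value via each setter and checks the matching getter.
    static void verifyAccessors(Object pojo) {
        try {
            for (PropertyDescriptor pd : Introspector
                    .getBeanInfo(pojo.getClass(), Object.class)
                    .getPropertyDescriptors()) {
                Method setter = pd.getWriteMethod();
                Method getter = pd.getReadMethod();
                if (setter == null || getter == null) continue;
                Object sample = sampleFor(pd.getPropertyType());
                if (sample == null) continue; // no sample for this type
                setter.invoke(pojo, sample);
                if (!sample.equals(getter.invoke(pojo))) {
                    throw new AssertionError("broken accessor pair: " + pd.getName());
                }
            }
        } catch (ReflectiveOperationException | java.beans.IntrospectionException e) {
            throw new AssertionError(e);
        }
    }

    private static Object sampleFor(Class<?> type) {
        if (type == String.class) return "sample";
        if (type == int.class || type == Integer.class) return 42;
        if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
        return null;
    }
}

class ElementBean { // sample entity with plain accessors
    private String name;
    private int atomicNumber;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAtomicNumber() { return atomicNumber; }
    public void setAtomicNumber(int n) { this.atomicNumber = n; }
}
```

One call per entity class in the test suite covers all the accessors, and the gap to 100% is then real logic.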

Complete Coverage Is More Efficient in the Long Run

The previous argument leads to this one. When I have only partial coverage, I have to check again and again whether there is any relevant code that has not been tested yet. And this costs a lot of time. I have been there – in a project with a large, partially covered code base and several developers, every time I added or changed some code I had to look very closely to see whether my code was tested, which really sucked. If the coverage is at ninety-something percent, it just takes a quick look to check that the uncovered code really cannot efficiently be tested. If it gets much below that, it starts being a pain.

Less Dead Code

You probably know about YAGNI. So do I. Still, it’s always hard to throw away code. What I said previously might make you think that I enjoy writing tests. I don’t. I think it is a necessity and part of the effort to assure high code quality. So, when I write tests and encounter one of those YAGNI code bits, I wonder whether it is worth the effort, and more often than not I decide it isn’t and throw it away. (I have a trick here to make the parting easier: I comment it out, and some weeks later I stumble over the comment and delete it.) Which leads to less dead code. Which is a good thing, right? Without checking test coverage this would probably not happen, because I wouldn’t even notice the dead code.

Better Structure, Cleaner Code

Clean code is testable code. Testable code is clean code. One of the foremost factors that make code testable is breaking it up into smaller units. Some tools like Checkstyle use a metric for method length. It is not always possible or expedient to reduce a method to a fixed length as stipulated by static code analysis tools but reducing method length or class size makes the units more accessible to tests and usually renders the code more readable.

Another metric for clean code and another factor for testability is the reduction of branching. Why not try the command pattern instead of a lot of ifs and elses for once? A lot of small command classes are much easier to test and to cover. Which takes us to the next point …
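A sketch of what that can look like in Java – the command names and operations are made up for illustration; the point is that each command is a tiny, fully coverable unit and the dispatcher has a single branch:

```java
import java.util.Map;

interface Command {
    String run(String input);
}

class Commands {
    // Each entry replaces one branch of a former if/else chain.
    static final Map<String, Command> REGISTRY = Map.of(
            "upper", input -> input.toUpperCase(),
            "lower", input -> input.toLowerCase(),
            "trim", input -> input.trim());

    static String dispatch(String name, String input) {
        Command cmd = REGISTRY.get(name);
        if (cmd == null) {
            throw new IllegalArgumentException("unknown command: " + name);
        }
        return cmd.run(input);
    }
}
```

Every command can be tested in isolation; covering the dispatcher takes one known and one unknown command name.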

Fewer Anti-Patterns

To get back to the previous example, I am not quite sure I would call the use of switch or if statements an anti-pattern. But there are other patterns that definitely qualify as such. Right now I am working on a project that involves a JSF-based web application. To get at the current request, response, and web session objects, JSF provides static access via a utility class (FacesContext). It is very easy to fall for the temptation to use this because it is quite convenient. It stops being so when you try to write a unit test for a class that uses it. Either you resort to a rather inconvenient mocking tool like PowerMock, or you have to start thinking about how else to get the current request into your code. A much cleaner way is to use some type of dependency injection. When you are lucky, you live in a framework which provides it. Otherwise you’ll have to conceive of something yourself. Whatever it is, this will make the code more versatile and improve testability. In the example mentioned above, you could wrap the static FacesContext with a session-scoped bean which is injected where needed. This bean is still a pain to test, but it is very small, and everything else becomes much easier, as a simple (Power-less) mock can be injected for the unit test.
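The wrapping idea can be sketched without any JSF on the classpath – here, LegacyContext stands in for a static utility like FacesContext, and all the names are invented for illustration. The business code only sees the injectable interface, so a plain stub replaces the static lookup in tests:

```java
// Stand-in for a static utility like JSF's FacesContext.
class LegacyContext {
    static String currentUser() {
        return "alice";
    }
}

// The injectable abstraction the business code depends on.
interface RequestContext {
    String currentUser();
}

// Thin production adapter – the only place touching the static access.
class ProductionRequestContext implements RequestContext {
    public String currentUser() {
        return LegacyContext.currentUser();
    }
}

// Business code: no static access, trivially testable.
class Greeter {
    private final RequestContext ctx;
    Greeter(RequestContext ctx) { this.ctx = ctx; }
    String greet() { return "Hello, " + ctx.currentUser(); }
}
```

In a test, any stub implementation of RequestContext does the job – no PowerMock required.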

Learn Something New

When writing white-box tests, you are often forced to access the program code in ways not intended by its interface. This will make it necessary to use techniques which you would probably avoid in production code. And sometimes you won’t succeed in accessing a piece of code at all. This is where there is a chance to learn something new. When I encounter a bit of code that is hard to cover and does not seem to warrant refactoring, I take this as a challenge – to learn something new at the very least, and to cover it if possible. Did you know, for example, that switch statements over strings cannot (easily) be covered completely in Java? This is due to the way the byte code for these statements is generated (for an explanation, refer to https://stackoverflow.com/a/42680333). OK, this is where I am content to have improved my knowledge and stop. There is no point in trying to get 100% coverage here; it would do nothing for code quality and be a waste of time. One might consider using an enum instead of strings here, but this might not always be a viable solution.
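For illustration, consider a plain string switch like the following (the method and its cases are invented here). javac compiles it into a switch over hashCode() plus a String.equals() check per case; the branch where equals() fails is only reachable via a hash collision, which is why ordinary tests cannot cover it:

```java
class Phase {
    // Compiled as: switch (phase.hashCode()) { ... } with an equals()
    // check per case. The equals()-fails branch would need a string
    // that collides with e.g. "solid" – the practically unreachable part.
    static int order(String phase) {
        switch (phase) {
            case "solid":  return 0;
            case "liquid": return 1;
            case "gas":    return 2;
            default:       return -1;
        }
    }
}
```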

By the way, Java enums have a built-in coverage gap, too. This one, however, is easy to close: Just call values() or valueOf() on your enum (see https://stackoverflow.com/a/4548912). Of course, this does not test anything. You could do some asserts here, but that would probably be overkill. OK, people with my mindset would probably assert that this or that enum has 42 values(). Anyway, simply calling one of those methods somewhere in the test code (with a helpful comment for your colleagues) is very little effort and makes the coverage complete (Why bother at all? See the section Complete coverage is more efficient … above).
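Given some enum like the following (invented for illustration), the "coverage plug" is a single line in any test: calling values() executes the method the compiler generates for every enum, which ordinary production code often never touches.

```java
enum Metal {
    GOLD, SILVER, MERCURY
}
```

In a test, a line like `Metal.values()` – ideally with a comment explaining why it is there – closes the gap; asserting the length is the optional bonus for the assert-minded.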

Similar issues hold for try-with-resources, but I won’t belabour this here, the point being that you learn something about how the compiler works (never a bad thing, right?), when trying to write tests for full coverage.

Java’s static initializer blocks, used in combination with branches or exceptions, are a stinker. Obviously they can only be executed once during one VM run, and I at least have to cater for the possibility that all tests are run in the same VM session.

There are some scenarios where static initializer blocks are useful, too (that’s why they are there, one would like to assume). So, what if I need, for example, to do some IO in a static initializer block and handle exceptions? In a test scenario, I can check either the good case or the bad case, but not both. Well, there is a way to solve this, which is to instantiate a new class loader for each test and have it load the class under different conditions. This is quite an amount of effort to test code that usually should be rather simple (in about 20 years of programming, I can remember one single time when there was a good reason for a static initializer block becoming so large that I had to transfer the code to two methods). I guess it depends on the case whether it’s worth it. And there is one caveat here: There seems to be a problem with this at least for the Eclipse EclEmma plug-in, which does not seem to record the coverage in this kind of scenario. But after all, this is about coverage, and not about red and green bars in a chart. So it will have to suffice that the code is covered.

While writing this, another, much simpler possibility comes to my mind: You could transfer the code from the static initializer block to a static method that is called there, or even to a utility class that is instantiated in the initializer block. Then all branches can be covered easily.
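A sketch of that refactoring (the class, the property names, and the branching are invented for illustration): the initializer shrinks to a single call, and the static method holding the branches can be called from tests as often as needed:

```java
import java.util.Properties;

class Config {
    static final Properties SETTINGS;

    static {
        // Runs once per VM – but no longer contains any branches itself.
        SETTINGS = load(System.getenv("APP_CONFIG"));
    }

    // Package-visible, so a test can exercise both branches directly.
    static Properties load(String path) {
        Properties p = new Properties();
        if (path == null) {
            p.setProperty("mode", "default"); // fallback branch
        } else {
            p.setProperty("mode", "file:" + path); // configured branch
        }
        return p;
    }
}
```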

There are other examples one could mention here but I guess the point is made. Let’s have a look at reason number ten.

Create Useful Tools

There are some things that need to be done again and again in unit testing, and laziness dictates that you create tools for them. And there are some things that are rather exotic and/or sophisticated and take some time to figure out, so, to preserve the knowledge embedded in this process, they might end up as tools, too.

Some of these tools make it into production code. When this is the case, they need to be tested, hopefully with high coverage.

So, testing with high coverage is not a waste of time after all (I hope we have arrived at this conclusion by now), and testing may result in code that is useful outside tests.

The Upshot of it all ...

Those are my ten reasons for full coverage – full coverage not categorically being 100% coverage but whatever you can reasonably make it.

I did not say anything about line vs. branch coverage, and that’s because the difference is not important to the message I wanted to convey. The message being that, besides often being tedious, testing and going for test coverage can be fun, is often instructive and – no surprises here – improves the quality of your code in more than one way. If you have made it to this point in the post, well, thank you for bearing with me for so long :-)


Opinions expressed by DZone contributors are their own.
