
Keep Test Coverage Where It Is Needed


100% coverage on everything would be nice, but is it practical, or even necessary?


I'm not a believer in having standards for test coverage. I know teams that require 60%, 70%, or 80% test coverage for their code. I don't like having standards like this because different code has different requirements for testing.

Straightforward code like getters and setters doesn't really need to be tested. However, more complex code that encapsulates business rules should be. When developers do "test after" software development, writing their tests after they write their code, they typically look for the easiest code to test in order to meet their coverage standard, but often that is not the code that really needs to be covered.
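To illustrate the distinction with a hypothetical class (the names and the discount rule below are mine, not from the article), the accessor carries no logic worth locking down, while the total calculation encodes a business rule that deserves a test:

```java
// A hypothetical order class: the accessor is trivial,
// but the discount rule encodes a business decision.
public class Order {
    private final double subtotal;

    public Order(double subtotal) {
        this.subtotal = subtotal;
    }

    // Straightforward accessor -- little to gain from a dedicated test.
    public double getSubtotal() {
        return subtotal;
    }

    // Business rule: orders over 100.00 get a 10% discount.
    // This is the kind of code we really need covered.
    public double total() {
        return subtotal > 100.00 ? subtotal * 0.90 : subtotal;
    }
}
```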

If you do test-first development the way I recommend, then you never put any code into the system without making a failing test pass first. By that very definition, all the code you add to a system is covered by unit tests, and indeed, when you do test-first development correctly, you should achieve 100% test coverage.
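As a minimal sketch of that rhythm (JUnit 5, reusing the hypothetical Order class from above), the test is written first, fails, and the discount logic is added only to make it pass, so every line of production code is covered by construction:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class OrderTest {
    // Written before the production code: this test fails first,
    // and Order.total() is implemented only to make it pass.
    @Test
    void ordersOverOneHundredGetATenPercentDiscount() {
        Order order = new Order(200.00);
        assertEquals(180.00, order.total(), 0.001);
    }
}
```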

That is the ideal, and there are a few things that can cause us to get dinged on test coverage, such as calls to external libraries. These small exceptions aside, if we are doing test-first development well, we should find that we have 100% or nearly 100% test coverage.
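One way to keep those exceptions small, sketched here as an assumption rather than anything from the article, is to push the external call behind a thin, logic-free adapter; only the adapter goes uncovered, and everything that depends on it can be tested against the interface:

```java
// Hypothetical example: the interface can be faked in tests,
// so only this thin, logic-free adapter stays uncovered.
interface Clock {
    long nowMillis();
}

class SystemClock implements Clock {
    // Direct call into the platform library -- the only untested line.
    public long nowMillis() {
        return System.currentTimeMillis();
    }
}
```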

People have argued with me that you don't need unit tests for simple code like accessor methods, and I agree. I don't write unit tests for accessor methods because I'm trying to lock down their behavior. I write them because I think of my unit tests as a form of specifying the behavior of my system. If I were writing a specification, I would mention the accessor methods in it, and since I think of my unit tests as living specifications, I want to "mention" them in my unit tests as well.
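A sketch of what "mentioning" an accessor in a living specification might look like (hypothetical Customer class, JUnit 5): the test reads as a statement about the system rather than an attempt to catch a bug:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under specification.
class Customer {
    private String name;
    public void setName(String name) { this.name = name; }
    public String getName() { return name; }
}

class CustomerSpecification {
    // The accessor is "mentioned" the way a written spec would mention it:
    // a customer's name is whatever it was given.
    @Test
    void customerExposesTheNameItWasGiven() {
        Customer customer = new Customer();
        customer.setName("Ada");
        assertEquals("Ada", customer.getName());
    }
}
```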

Thinking about tests as a way of specifying and articulating the behavior of a system is far more detailed and precise than a written specification, which carries all the ambiguities of natural language.

Because I see my unit tests as specifications, I strive for 100% test coverage, but I understand if other people feel that's unnecessary. I'm an idealist. I do find that thinking of test-first development as a form of specifying behavior in code really helps me understand how to write the right kind of behavioral tests for the system I'm building.

Of course, another great benefit of doing test-first development is that you achieve a high degree of test coverage and the code that's produced is highly testable. These things tend to be good for software.

If you want to get really precise, then using tools like Sonar you can look at test coverage based on the code's cyclomatic complexity. For example, we can say that we want 100% test coverage for all code whose cyclomatic complexity is above three. This approach may be a more accurate way to measure test coverage.
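To make that concrete with a hypothetical method (complexity counted by hand the way such tools count it: one base path plus one per decision point), the calculation below has a cyclomatic complexity of four and would fall under a "cover everything above three" rule, while a simple getter, at complexity one, would not:

```java
// Cyclomatic complexity = 1 (base path) + 3 decision points = 4,
// so a "100% coverage above three" rule would apply here.
public class ShippingCalculator {
    public double shippingCost(double weightKg, boolean express, boolean international) {
        double cost = 5.00;
        if (weightKg > 10.0) {   // decision point 1
            cost += 4.00;
        }
        if (express) {           // decision point 2
            cost *= 2.0;
        }
        if (international) {     // decision point 3
            cost += 15.00;
        }
        return cost;
    }
}
```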

Ultimately, it's the most complex code we write that has the highest potential for defects, so that's the code we most want covered by unit tests.

Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called "Seven Strategies for Agile Infrastructure."

Topics: performance, testing, test coverage, unit test, tdd

