Code Coverage Metrics and Functional Test Coverage
There have been some articles and tweets about code coverage recently, and it seems that many developers are still laboring under a few misconceptions in this area.
Code coverage can be a very useful metric. However, you need to know how, and when, to use it. The link between code coverage and test quality is tenuous at best: high code coverage is, in itself, no guarantee of well-tested code. And increasing code coverage for its own sake will not necessarily improve either the quality of your tests or the quality of your application. It is easy (and a largely futile exercise) to achieve high code coverage metrics without actually testing anything at all.
Now don't go thinking I'm not a fan of test coverage. For the record, I am a huge fan of high test coverage, though I don't write tests explicitly with this aim (as I will discuss further on). As a metric, code coverage has its limitations, and should not be used for purposes for which it is poorly suited. Test coverage is excellent at indicating what code has not been exercised by your unit tests. Indeed, if high code coverage does not prove, in itself, that your code is well tested, low code coverage provides fairly conclusive evidence that your code is untested. An experienced developer will know how to use this information to complete her tests to cover important edge cases and boundary conditions.
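Both points can be seen in a small sketch (the class, method, and prices here are invented for illustration). The first pair of calls executes every line of priceAfterDiscount(), so a coverage tool would happily report 100% line coverage, even though nothing is verified; the second pair exercises the same branches and also checks the results, including the boundary condition.

```java
// A minimal sketch (hypothetical names) of why coverage alone proves little.
public class CoverageExample {

    // Apply a 10% discount, in cents, to orders of 10000 cents or more.
    static int priceAfterDiscount(int priceInCents) {
        if (priceInCents >= 10000) {
            return priceInCents * 90 / 100;
        }
        return priceInCents;
    }

    public static void main(String[] args) {
        // "Coverage-only" testing: both branches are executed, so a coverage
        // tool reports 100% line coverage -- but nothing is actually checked.
        priceAfterDiscount(15000);
        priceAfterDiscount(5000);

        // Real testing: the same branches, plus assertions on the results
        // and on the boundary condition at exactly 10000 cents.
        if (priceAfterDiscount(15000) != 13500) throw new AssertionError();
        if (priceAfterDiscount(5000) != 5000) throw new AssertionError();
        if (priceAfterDiscount(10000) != 9000) throw new AssertionError();
        System.out.println("all checks passed");
    }
}
```

Both variants look identical in a coverage report; only the second one would catch a regression in the discount calculation.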
But what of the broader picture? How do code coverage metrics help you deliver a useful, high quality product to your users?
Well-tested applications tend to be more reliable, easier to understand, easier to maintain and, in the end, faster to develop. This seems a no-brainer, but it is also the practical experience of countless TDD practitioners, and the result of quite a few academic studies. And, in my experience, the single most effective way to achieve high test quality comes from using a combination of ATDD and TDD/BDD. Techniques such as Acceptance-Test Driven Development (ATDD) and example-based specifications are an excellent way to drive and track the development process. This process drills down and fans out into Test-Driven Development (TDD), often with a behavioural flavour to it (BDD), at a lower level. This holistic approach has the major advantage of giving you confidence in your code on both a functional (does it do what the client wants) and a technical (does it work) level.
So what of test coverage? For a product owner, or for someone from QA, the notion of 90% test coverage is abstract at best. It may indicate that all of the classes in the com.acme.gizmo.widget package have been exercised during the unit tests (and possibly the integration tests). What is more useful, however, is to know how many requirements, or story cards, or features, have been demonstrably implemented and tested.
What I would call functional test coverage is a little different. Functional test coverage should give an indication of which features are done, in that they satisfy their acceptance criteria, and which features are still in progress. This sort of information is much more accessible to product owners than the number of lines of code exercised. This is in line with Acceptance-Test Driven Development, and can be a very powerful communication tool. In ATDD, product owners express their requirements as stories (or features, or whatever). The form and content of the automated functional tests should ideally be driven by the customer, though in practice QA or BA folks may play this role as well. It's a communication exercise.
Each story has a set of acceptance criteria, typically expressed as examples of how the feature would work in different scenarios. Developers or testers automate these acceptance criteria (for web applications, this could involve using Selenium or WebDriver tests, for example). These tests are then run automatically, ideally whenever the code changes, or on a nightly basis if the tests take a very long time to run. The reports generated by these test runs give the product owners a very clear idea of which features have been implemented, which work, and how many are still pending implementation. BDD tools such as easyb or Cucumber are a great help in implementing this sort of test.
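Stripped of tooling, such a report boils down to rolling up acceptance-test results per story rather than per line of code. Here is a minimal sketch of that idea (the story names and statuses are invented for illustration, and real BDD tools of course produce much richer reports):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A sketch (invented story names and statuses) of functional test coverage:
// the unit of measure is the story/feature, not the line of code.
public class FunctionalCoverageReport {

    enum Status { PASSING, FAILING, PENDING }

    // Roll up per-story acceptance-test results into a one-line summary.
    static String summarize(Map<String, Status> stories) {
        long done = stories.values().stream().filter(s -> s == Status.PASSING).count();
        long inProgress = stories.values().stream().filter(s -> s == Status.FAILING).count();
        long pending = stories.values().stream().filter(s -> s == Status.PENDING).count();
        return done + " done, " + inProgress + " in progress, " + pending + " pending";
    }

    public static void main(String[] args) {
        Map<String, Status> stories = new LinkedHashMap<>();
        stories.put("Search for a widget by name", Status.PASSING);
        stories.put("Add a widget to the shopping cart", Status.FAILING);
        stories.put("Pay by credit card", Status.PENDING);
        System.out.println(summarize(stories)); // 1 done, 1 in progress, 1 pending
    }
}
```

A summary like "1 done, 1 in progress, 1 pending" is immediately meaningful to a product owner in a way that "90% line coverage" never will be.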
So where does that leave us with code coverage? In short, you really need both functional and technical test coverage metrics. However, high code coverage should be the natural outcome of good testing practices, not a goal to be aimed for. For this reason, I am not a big fan of targeting a certain percentage of code coverage. But if I am working on a project using proper ATDD, BDD and TDD practices and the code coverage drops below, say, 90-95%, I will investigate, as it may be an indicator of an underlying problem or an area where good testing practices have not been followed.
If you would like to learn more about TDD, BDD and ATDD in practice, I will be running the next TDD, BDD and Testing Best Practices for Java Developers in Sydney on June 20-22. And for those in Europe and the UK, I will be running two online courses in the week of May 31: Fundamentals of Test-Driven Development in Java and Automated Web Testing with Selenium 2/WebDriver.