I Am Not Thinking About Unit Tests ... Right Now


A long-term advocate of unit test coverage makes a stunning confession. Find out why there is a change of heart and if this is a long-term strategy.


For quite some time, I have been an advocate of high levels of unit test coverage for code that will be deployed to production.  Regardless of the solution, if there is a way to programmatically test logic written by a developer, tests should be written to validate all of the possible scenarios and situations.

I have even written articles on DZone to further drive home my point.

In my most recent unit test article, "Unit Test Insanity," however, I began taking a new stance on unit tests.  In fact, right now I am not thinking much about unit tests or about providing coverage to validate all the possible flows within my program code.

My primary concerns are noted below.

Major Refactoring Often Breaks Unit Tests

The downside of having a high percentage of code coverage is that changes in the business rules often render unit tests no longer applicable or valid.  As a result, far more time is required to implement the changes - because the unit tests need to be refactored or redesigned as well.

Years ago, I was placed onto a small team of four developers and a dev lead.  Our goal was to quickly rewrite an accounting system for a state agency.

Each of us took on a segment of the accounting flow.  As a result, the code I was writing depended on another developer's work, and my code in turn fed one of the other team members' segments.  Using unit tests and mock objects, we were able to simulate the data being supplied to our programs in order to validate that the logic was working properly.  From there, full unit test coverage was added.  After all, this was an accounting system manipulating currency.
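As a sketch of the kind of setup we relied on (the interface and class names here are invented for illustration, and a hand-rolled stub stands in for a mocking library), one developer's segment can be validated in isolation by simulating the upstream data feed:

```java
// Hypothetical upstream stage owned by another developer; the stub below
// simulates its data feed.  Currency is kept in cents to avoid floating point.
interface LedgerFeed {
    long nextEntryCents();
}

class TaxCalculator {
    private final LedgerFeed feed;

    TaxCalculator(LedgerFeed feed) {
        this.feed = feed;
    }

    // Applies a flat 5% tax to whatever amount the upstream stage supplies.
    long taxCentsForNextEntry() {
        return feed.nextEntryCents() * 5 / 100;
    }
}

public class TaxCalculatorTest {
    public static void main(String[] args) {
        // Stub the upstream feed so this segment can be tested on its own.
        LedgerFeed stubFeed = () -> 20000L; // $200.00 from the "other" developer
        TaxCalculator calculator = new TaxCalculator(stubFeed);

        long tax = calculator.taxCentsForNextEntry();
        if (tax != 1000L) {
            throw new AssertionError("expected 1000 cents, got " + tax);
        }
        System.out.println("tax = " + tax + " cents");
    }
}
```

The cost shows up when the data stream changes shape - say, the feed starts supplying gross rather than net amounts - because the stub and every assertion built on it must change too.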

On more than one occasion, we realized something in the logic needed to change.  When this happened, the data stream changed, and the prior assumptions were no longer valid.  As you might expect, all of the unit tests built on those assumptions became invalid as well.  Every time we encountered this scenario, extra time was required to refactor and correct the unit tests so that they would function properly again.

When in the prototyping stage, consider keeping unit test coverage at a minimal level - with a plan to round out the unit test coverage when the design has been validated with the business owner.

Testing More than the System Under Test (SUT)

When reviewing pull requests (PRs) from other developers, I often see issues where the unit tests for a given system are performing tests for another system.

In a fictional example, consider a service that performs a basic mathematical operation.  Parameters provided to the service allow the functional code to determine a result.  However, before returning the result, there is a call to another service - which impacts the resulting payload, but not the information related to the calculation.

Instead of mocking the secondary service in the test, I have seen cases where time was taken to test it as well - all the while knowing there is already a unit test covering that service.
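A minimal sketch of the distinction (all of the names here are invented): the test for the calculation service stubs the secondary service instead of exercising its logic.

```java
// Hypothetical secondary service: it decorates the payload but has no
// bearing on the calculation itself, and it has its own unit tests.
interface AuditService {
    String auditTag(long result);
}

class CalculationService {
    private final AuditService audit;

    CalculationService(AuditService audit) {
        this.audit = audit;
    }

    // The math under test, plus a tag supplied by the secondary service.
    String addWithAudit(long a, long b) {
        long result = a + b;
        return result + ":" + audit.auditTag(result);
    }
}

public class CalculationServiceTest {
    public static void main(String[] args) {
        // Stub the secondary service; its behavior is not what this test is for.
        AuditService stubAudit = result -> "stubbed";
        CalculationService service = new CalculationService(stubAudit);

        String payload = service.addWithAudit(2, 3);
        if (!"5:stubbed".equals(payload)) {
            throw new AssertionError("unexpected payload: " + payload);
        }
        System.out.println(payload);
    }
}
```

The assertion here only pins down the calculation and the shape of the payload; asserting on the real tag logic would duplicate the secondary service's own test suite.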

Testing more than the SUT falls into line with what I noted in my "Unit Test Insanity" article and is something that should be avoided.

Introducing 1=1 Tests

When the unit test line starts to run up against the integration test line, it is easy to introduce what I call "1=1" (one equals one) unit tests.

A 1=1 unit test does nothing but return the exact value provided in the test.  I have seen this with DAO classes, which depend on data from the database.  Basically, the test news up the appropriate object, then uses a  .when()  method (in Mockito, as an example) to return that object when the DAO method is called.

While this process works at the service level, following this approach at the DAO (or similar) level basically creates a 1=1 situation that is always true - doing nothing more than requiring extra time for the unit tests to complete.
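Recast with a hand-rolled stub (Mockito's  when(...)  produces the same effect), a 1=1 test looks like this; the assertion can only ever compare the stubbed value with itself:

```java
// Hypothetical DAO contract backed by the database in production.
interface UserDao {
    String findNameById(long id);
}

public class OneEqualsOneTest {
    public static void main(String[] args) {
        // The test news up the expected value...
        String expected = "expectedName";
        // ...stubs the DAO to return exactly that value...
        UserDao dao = id -> expected;
        // ...and then "verifies" the stub returned what it was told to return.
        String actual = dao.findNameById(42L);

        if (!expected.equals(actual)) {
            throw new AssertionError("1=1 somehow failed");
        }
        System.out.println("passed, but proved nothing about the real DAO");
    }
}
```

Nothing about the real query, mapping, or database behavior is exercised; the test can only pass.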

Developers should avoid introducing 1=1 tests that provide no value.

Developer Lack of Focus With Data Setup

My biggest criticism of unit test creation is a lack of focus on the developer's part when populating attributes for testing.

Consider an example where a  User  object is being tested.  Unless the attributes are required or have something to do with the unit test, they should always be left unassigned.  In other words, if the  firstName  and  lastName  attributes can be null, there is no need to set them to something like  firstName="John"  and  lastName="Doe" .  Setting them anyway only takes focus away from the actual test - especially when it takes time to create, set, and process those values.

When an attribute is required, I always use something like  firstName="firstName" , because when debugging there is no confusion between the provided first name and last name.

How can this be confusing?  Just think about names like Bill Daniel, which could also be Daniel Bill.  Or how about names that are not common in other parts of the world?

Don't set fields which are not required ... otherwise make sure the values are clearly understandable.
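As a sketch (the  User  class here is invented for illustration), the rule keeps the test data self-describing:

```java
// Hypothetical entity under test: two nullable attributes, one required.
class User {
    String firstName;     // nullable - leave unset unless the test needs it
    String lastName;      // nullable - leave unset unless the test needs it
    final String email;   // required

    User(String email) {
        this.email = email;
    }
}

public class UserSetupTest {
    public static void main(String[] args) {
        // The required attribute gets a self-describing value; nullable ones
        // stay null, keeping the focus on what the test actually exercises.
        User user = new User("email");

        if (user.firstName != null || user.lastName != null) {
            throw new AssertionError("nullable attributes should stay unassigned");
        }
        if (!"email".equals(user.email)) {
            throw new AssertionError("unexpected email: " + user.email);
        }
        System.out.println("firstName = " + user.firstName + ", email = " + user.email);
    }
}
```

In a debugger, a value like "email" can only have come from the  email  attribute, where "John" could plausibly be a first name, a last name, or a username.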

Using Data That Is Not Valid

My last criticism is of tests that use invalid data: always use valid data in the unit test setup.

Future maintainers of the code may not have any familiarity with the system under test.  Oftentimes, a unit test is an easy way for someone to quickly get up to speed and validate their knowledge of the functionality being reviewed.

Building upon my last thought, in cases where the attributes are required, please make sure the values are valid as well.  Of course, the use of an object or an Enum can make this an easier rule to follow, but there are situations in which such safeguards are simply not possible.
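For instance (the types here are invented for illustration), an Enum makes invalid values unrepresentable, so the test data is valid by construction:

```java
// Hypothetical account types; the Enum guarantees only valid values exist.
enum AccountType { CHECKING, SAVINGS }

class Account {
    final AccountType type;

    Account(AccountType type) {
        this.type = type;
    }
}

public class ValidDataTest {
    public static void main(String[] args) {
        // The compiler rejects anything that is not a real AccountType, so a
        // future maintainer reading this test sees only valid data.
        Account account = new Account(AccountType.CHECKING);

        if (account.type != AccountType.CHECKING) {
            throw new AssertionError("unexpected type: " + account.type);
        }
        System.out.println("type = " + account.type);
    }
}
```

Had  type  been a free-form String, nothing would stop a test from passing "CHEKCING" around and quietly teaching the next reader an invalid value.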

If you are writing unit tests, always remember to have your Production Support hat on.  The support team will certainly appreciate the developer that takes extra time to make things clearer.


The major caveat to my statement is that I am solely focused on delivering new features and functionality within a RESTful API that is still pending either client implementation or service-based integrations.

I have found that past time spent writing ample unit tests has caused delays in getting fixes in place, because the original design of the code had to be altered.  As I noted, when this happens, there is a high probability that the unit tests will fail and require updates before anything can be merged into the primary branch of the repository.

While I am not thinking about unit tests right now, I have tickets in the project backlog which will apply and implement the necessary test coverage, once the RESTful API is at a state which is ready for unit test validation.

I still see the value in unit tests.  That has not changed.  What has changed for me is knowing when the best time is to introduce the additional code and how those unit tests should be created.  When I do introduce a unit test, it needs to be focused and concise - with just as much thought being applied as with the actual system that is under test.

Have a really great day!


