Actually, no. Unit testing really does refer to a very specific type of test, one whose sole purpose in life is to:
isolate an individual unit and validate its correctness.
In other words, a unit test should only test an individual unit (a class or a method). If executing a test crosses that class's boundaries to, say, access the database or call a method in another class, you're no longer doing unit testing; you're doing integration testing.
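To make that boundary concrete, here's a minimal sketch (the names are invented for illustration, not taken from our actual project): the class under test depends on a DAO interface, and a pure unit test hands it a hand-rolled stub so the database is never touched.

```java
// Hypothetical example -- names are invented for illustration.

// The collaborator that would normally be a Hibernate DAO.
interface PatientDao {
    boolean hasDentalCoverage(String patientId);
}

// The unit under test: one rule, one responsibility.
class DentalClaimRule {
    private final PatientDao dao;

    DentalClaimRule(PatientDao dao) {
        this.dao = dao;
    }

    // A dental claim is only valid if the patient carries dental coverage.
    boolean isValid(String patientId) {
        return dao.hasDentalCoverage(patientId);
    }
}

public class DentalClaimRuleTest {
    public static void main(String[] args) {
        // A hand-rolled stub: no Hibernate, no database, no other real classes.
        PatientDao stub = patientId -> "covered-patient".equals(patientId);

        DentalClaimRule rule = new DentalClaimRule(stub);

        if (!rule.isValid("covered-patient")) throw new AssertionError("expected valid");
        if (rule.isValid("uncovered-patient")) throw new AssertionError("expected invalid");
        System.out.println("pure unit test passed");
    }
}
```

Swap the stub for the real DAO wired to a test database and you've crossed the boundary: same assertion, but now it's an integration test.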
Okay, that's really nice and now I can feel like a big, important software engineer for knowing the true definition of unit test. But, why do I care?
That's a good question. The reality is that most "unit tests" - the ones on my project included - do tend to go a little beyond just an individual class. We've got a J2EE application with business objects that are hooked up to the database via Hibernate DAOs. And, of course, our classes have relationships to other classes. So when you call a method on one of our classes, it quite frequently accesses the database and/or calls into other classes. Sometimes we even purposefully test scenarios that span several classes. For example, we're developing rules to determine how to process health insurance claims. These rules live within a complete rules framework. So we want to test not only that each rule is right, but that it operates as expected within the framework it will actually run in.
Okay... so we're clearly getting beyond the strict definition of unit tests here. But again, I'll ask, who cares?
Let's stick with our rules example. The rules are, for the most part, very simple: "If A is true, make sure B is also true." "If a claim is for Dental, make sure the patient has dental coverage." Where the real complexity comes in is that there are over 700 of these rules, they all have to be run in a very specific order, and only under certain conditions, yadda yadda. You get the idea.
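A toy sketch of that kind of framework helps show where the complexity lives (again, the names are invented; the real framework is far bigger): each rule carries a condition saying when it applies, and the engine runs the rules in exactly the order they were registered. The individual rules are trivial; the ordering and the conditions are where the bugs hide.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical mini rules engine -- illustrative only.
class Claim {
    String type;                 // e.g. "DENTAL"
    boolean hasDentalCoverage;
    List<String> errors = new ArrayList<>();

    Claim(String type, boolean hasDentalCoverage) {
        this.type = type;
        this.hasDentalCoverage = hasDentalCoverage;
    }
}

class Rule {
    final String name;
    final Predicate<Claim> appliesWhen;  // run only under certain conditions
    final Predicate<Claim> check;        // the rule itself, usually trivial

    Rule(String name, Predicate<Claim> appliesWhen, Predicate<Claim> check) {
        this.name = name;
        this.appliesWhen = appliesWhen;
        this.check = check;
    }
}

class RulesEngine {
    // Rules run in exactly the order they were registered.
    private final List<Rule> rules = new ArrayList<>();

    void register(Rule rule) { rules.add(rule); }

    void process(Claim claim) {
        for (Rule rule : rules) {
            if (rule.appliesWhen.test(claim) && !rule.check.test(claim)) {
                claim.errors.add(rule.name);
            }
        }
    }
}

public class RulesEngineDemo {
    public static void main(String[] args) {
        RulesEngine engine = new RulesEngine();

        // "If a claim is for Dental, make sure the patient has dental coverage."
        engine.register(new Rule(
                "dental-coverage-required",
                c -> "DENTAL".equals(c.type),
                c -> c.hasDentalCoverage));

        Claim uncovered = new Claim("DENTAL", false);
        engine.process(uncovered);
        System.out.println(uncovered.errors);  // the rule fires and records a failure
    }
}
```

Each rule in isolation is a one-liner; testing any single one proves almost nothing. The interesting behavior only emerges once hundreds of them are registered in sequence with interacting conditions.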
And so, if we heed the saying "a good test is one that finds a defect," then where are we going to get the most bang for our buck? Testing very simple rules in isolation? Or testing this extremely complex interaction of rules within the framework? Obviously the latter is exponentially more likely to turn up issues, and thus makes for the better test!
Okay, great - so I go off and run our unit/integration tests, and what should happen but I get 203 failures(!). It's a known issue, but nobody has time to track down the problems. So now this is a problem not only because we have no way of knowing whether new code breaks the existing rules (if a test fails, what does that mean? Is it something we broke, or was it already broken?), but for similar reasons we have no way to validate whether the new rules we're writing work. And nobody really has any idea where in those 700+ classes the problems fall, so we don't know how to fix them.
Which, of course, is the age-old problem with software integration, and the very reason it was determined that we need things like unit tests in the first place: to test individual modules separately and validate that they work on their own before we try to put them all together.
The reality is that we need both. We need unit tests to ensure each unit works in isolation, and then we need higher level functional or integration tests to ensure the overall system works as expected. Do we really need our unit tests to be 100% pure? I think probably not. Just how pure do they need to be? Well, as with most things, my answer will be to try it out iteratively. Obviously, the current solution ain't working, so let's try to isolate them a little further and see if we can find those 203 problems. If it's still impossible, then make 'em a little more pure. Rinse, lather, repeat.
Stay tuned and I'll cover some of the issues that come up around "purifying" our unit tests - such as how to prevent our tests from accessing the database and what to do about testing private methods.
reposted from: The Hacker Chick Blog