
Testing Techniques - Part 1 - Not Writing Tests


· Java Zone

There’s not much doubt about it: the way you test your code is a contentious issue. Different test techniques find favour with different developers for varying reasons, including corporate culture, experience and general psychological outlook. For example, you may prefer writing classic unit tests that test an object’s behaviour in isolation by examining return values; you may favour classic stubs, or fake objects; or you may like using mock objects to mock roles, or even using mock objects as stubs. This and my next few blogs take part of a very, very common design pattern and examine the different approaches you could take in testing it.

The design pattern I’m using is shown in the UML diagram below. It’s something I’ve used before, mainly because it is so common. You may not like it - it is more ‘ask don’t tell’ than ‘tell don’t ask’ in its design - but it suits this simple demo.

In this example, the ubiquitous pattern above will be used to retrieve and validate an address from a database. The sample code, available from my GitHub repository 1, takes a simple Spring MVC webapp as its starting point and uses a small MySQL database to store the addresses, for no other reason than that I already have a server running locally on my laptop.

So far as testing goes, the blogs will concentrate upon testing the service layer component AddressService:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AddressService {

  private static final Logger logger = LoggerFactory.getLogger(AddressService.class);

  private AddressDao addressDao;

  /**
   * Given an id, retrieve an address. Apply phony business rules.
   *
   * @param id
   *            The id of the address object.
   */
  public Address findAddress(int id) {

    logger.info("In Address Service with id: " + id);
    Address address = addressDao.findAddress(id);

    businessMethod(address);

    logger.info("Leaving Address Service with id: " + id);
    return address;
  }

  private void businessMethod(Address address) {

    logger.info("in business method");
    // Do some jiggery-pokery here....
  }

  void setAddressDao(AddressDao addressDao) {
    this.addressDao = addressDao;
  }
}
...as demonstrated by the code above, which you can see is very simple: it has a findAddress(...) method that takes as its input the id (or table primary key) for a single address. It calls a Data Access Object (DAO), and pretends to do some business processing before returning the Address object to the caller.

public class Address {

  private final int id;

  private final String street;

  private final String town;

  private final String country;

  private final String postCode;

  public Address(int id, String street, String town, String postCode, String country) {
    this.id = id;
    this.street = street;
    this.town = town;
    this.postCode = postCode;
    this.country = country;
  }

  public int getId() {
    return id;
  }

  public String getStreet() {
    return street;
  }

  public String getTown() {
    return town;
  }

  public String getCountry() {
    return country;
  }

  public String getPostCode() {
    return postCode;
  }
}
As I said above, I’m going to cover different strategies for testing this code, some of which I’ll guarantee you’ll hate. The first one, still widely used by many developers and organisations, is...

Don’t Write Any Tests

Unbelievably, some people and organisations still do this. They write their code, deploy it to the web server and open a page. If the page opens, they ship the code; if it doesn’t, they fix the code, compile it, redeploy it, reload the web browser and retest.

The most extreme example I've ever seen of this technique - changing the code, deploying to a server, running the code, spotting a bug and going around the loop again - was a couple of years ago on a prestigious Government project. The sub-contractor had, I guess to save money, imported a load of cheap and very inexperienced programmers from 'off-shore' and didn't have enough experienced programmers to mentor them. The module in question was a simple Spring-based Message Driven Bean that took messages from one queue, applied a little business logic and then pushed them into another queue: simples. The original author started out by writing a few tests, but then passed the code on to other inexperienced team members. When the code changed and a test broke, they simply switched off all the tests. Testing consisted of deploying the MDB to the EJB container (WebLogic), pushing a message into the front of the system, watching what came out of the other end and debugging the logs along the way. You may say that an end-to-end test like this isn't too bad, BUT deploying the MDB and running the test took just over an HOUR: in a working day, that's 8 code changes. Not exactly rapid development!

My job? To fix the process and the code. The solution? Write tests, run tests and refactor the code. The module went from having zero tests to about 40 unit tests and a few integration tests; it was improved and finally delivered. Done, done.

Most people will have their own opinions on this technique. Mine are: it produces unreliable code; it takes longer to write and ship code, because you spend loads of time waiting for servers to start and WARs / EJBs to be deployed; and it’s generally used by more inexperienced programmers, or those who haven’t yet suffered by using it - and you do suffer. I've worked on projects where I was writing tests whilst other developers weren't. The test team found very few bugs in my code, whilst those other developers were fixing loads of bugs and going frantic trying to meet their deadlines. Am I a brilliant programmer, or does writing tests pay dividends? From experience, if you use this technique you will have lots of additional bugs to fix, because you can’t easily and repeatably test the multitude of scenarios that accompany the story you’re developing: it simply takes too long, and you have to remember each scenario and then run it manually.
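To give a flavour of the alternative before the following blogs cover it properly, here’s a minimal, self-contained sketch of the kind of fast, repeatable test I mean. The classes below are simplified stand-ins for the real code in the repository (no Spring, no SLF4J, and a hand-rolled StubAddressDao that I’ve invented for illustration): the point is that the whole check-the-behaviour loop runs in milliseconds, with no server to start and nothing to deploy.

```java
// Simplified stand-ins for the blog's classes - trimmed to the essentials.
interface AddressDao {
  Address findAddress(int id);
}

class Address {
  private final int id;
  private final String street;

  Address(int id, String street) {
    this.id = id;
    this.street = street;
  }

  int getId() { return id; }
  String getStreet() { return street; }
}

class AddressService {
  private AddressDao addressDao;

  Address findAddress(int id) {
    // Delegate to the DAO; the phony business rules are omitted here.
    return addressDao.findAddress(id);
  }

  void setAddressDao(AddressDao addressDao) {
    this.addressDao = addressDao;
  }
}

// A hand-rolled stub: always returns a known Address, so the test needs
// no database, no server and no deployment step.
class StubAddressDao implements AddressDao {
  public Address findAddress(int id) {
    return new Address(id, "15 My Street");
  }
}

public class AddressServiceTest {
  public static void main(String[] args) {
    AddressService service = new AddressService();
    service.setAddressDao(new StubAddressDao());

    Address result = service.findAddress(1);
    assert result.getId() == 1 : "wrong id";
    assert "15 My Street".equals(result.getStreet()) : "wrong street";
    System.out.println("test passed");
  }
}
```

Compile and run with assertions enabled (java -ea AddressServiceTest); the feedback loop is seconds, not an hour.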

I do wonder whether the not-writing-tests technique is a hangover from the 1960s, when computing time was expensive and you had to write programs by hand on punched cards or paper tape and then check them over visually using a 'truth table'. Once you were happy that your code worked, you sent it to the machine room and ran it 2. The fact that machine time was expensive meant that automated testing was out of the question. Although computers got faster, this obsolete paradigm continued, degenerating into one where you missed out the diligent mental check and just ran the code; if it broke, you fixed it. This degenerate paradigm was (is?) still taught in schools, colleges and books, and went unchallenged until the last few years.

Is this why it can be quite hard to convince people to change their habits?

Another major problem with this technique is that a project can descend into a state of paralysis. As I said above, with this technique your bug count will be high. This gives project managers the impression that the code stinks, and reinforces the idea that you don't change the code unless absolutely necessary, as you might break something. Managers become hesitant about authorising code changes, often having no faith in the developers and micro-managing them. Indeed, the developers themselves become very hesitant about changing the code, as breaking something will make them look bad. The changes they do make are as small as possible and made without any refactoring. Over time this adds to the mess, and the code degenerates even further into a bigger ball of mud.

Whilst I think that you should load and review a page to ensure that everything's working, it should only be done at the end of the story, once you have a bundle of tests that tell you that your code is working okay.

I hope that I’m not being contentious when I sum this method up by saying that it sucks, though time will tell. You may also wonder why I included it at all: the reason is to point out just how badly it sucks, and to offer some alternatives in my following blogs.

1 See: git://github.com/roghughe/captaindebug.git
2 I'm not old enough to remember computing in the 60s


From http://www.captaindebug.com/2011/11/testing-techniques-part-1-not-writing.html



