
Does Defensive Programming Deserve Such a Bad Name?


· Java Zone

The other day I went to an hour's talk on Erlang, merely as an observer; I know nothing about Erlang except that it does sound interesting and that the syntax is... well... unusual. The talk was given to some Java programmers who had recently learnt Erlang and was a fair critique of their first Erlang project, which they were just completing. The presenter said that these programmers needed to stop thinking like Java programmers and start thinking like Erlang programmers1 and, in particular, to stop programming defensively and instead let processes fail fast and fix the problem at its source.

Now, apparently, this is good practice in Erlang because one of its features, and please correct me if I'm wrong, is that work is split into supervisors and processes. Supervisors supervise processes: creating them, destroying them and restarting them if required. The idea of failing fast is nothing new; it's the technique to use when your code comes across an illegal input. When this happens your code simply falls over and aborts, the point being that you fix the supplier of that input rather than your code. The subtext of what the presenter said is that Java and defensive programming are bad and fail-fast is good, which is something that really needs closer investigation.
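Before going any further, it's worth pinning down what fail-fast looks like in Java terms: reject an illegal value at the point it enters the system so the error surfaces at its source. Here's a minimal sketch (the Account class is my own invention, not from the talk):

```java
public class Account {

    private final long balance;

    public Account(long balance) {
        // Fail fast: an illegal value is rejected at construction time,
        // so the bug surfaces at its source rather than much later.
        if (balance < 0) {
            throw new IllegalArgumentException("Balance cannot be negative: " + balance);
        }
        this.balance = balance;
    }

    public long getBalance() {
        return balance;
    }
}
```

The caller that supplied the bad value gets an immediate, noisy failure, which is exactly the behaviour the presenter was advocating.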

The first thing to do is to define Defensive Programming, and the first definition I came across was in what is now possibly a legendary book: Writing Solid Code by Steve Maguire, published by Microsoft Press. I read this book many years ago when I was a C programmer, which was then the de facto language of choice. In the book Steve demonstrates the use of an _Assert macro:

/* Borrowed from Writing Solid Code by Steve Maguire */
#ifdef DEBUG

    void _Assert(char *, unsigned);      /* prototype */

    #define ASSERT(f)            \
        if (f)                   \
            {}                   \
        else                     \
            _Assert(__FILE__, __LINE__)

#else

    #define ASSERT(f)

#endif

/* ...and later on... */

void _Assert(char *strFile, unsigned uLine) {

    fflush(NULL);
    fprintf(stderr, "\nAssertion failed: %s, line %u\n", strFile, uLine);
    fflush(stderr);
    abort();
}

/* ...and then in your code */

void my_func(int a) {

    ASSERT(a != 0);

    /* do something... */
}

…as his definition of defensive programming. The idea here is that we define a C macro so that, when DEBUG is turned on, my_func(…) tests its input using ASSERT(f), which calls the _Assert(…) function if the condition fails. Hence, when in DEBUG mode, in this sample my_func(int a) has the ability to abort execution if arg a is zero. When DEBUG is switched off, the checks aren't carried out, but the code is leaner and quicker; something which was probably more of a consideration back in 1993.

Looking at this definition, several things come to mind. Firstly, this book was published in 1993, so is this still valid? It wouldn't be a good idea to kill Tomcat with a System.exit(-1) just because one of your users typed in the wrong input! Secondly, Java, being more recent, is also more sophisticated: it has exceptions and exception handlers, so instead of aborting the program we'd throw an exception that would, for example, display an error page highlighting the bad inputs.

The main point that comes to mind, however, is that this definition of defensive programming sounds a lot like fail-fast to me; in fact, it's identical.

This isn't the first time that I've heard programmers complain about defensive programming, so why has it got such a bad reputation? Why did the Erlang talk's presenter denigrate it so much? My guess is that there's good use of defensive programming and bad use of defensive programming. Let me explain with some code…

In this scenario I'm writing a Body Mass Index (BMI) calculator for a program that tells users whether or not they're overweight. A BMI value between 18.5 and 25 is apparently okay, whilst anything over 25 ranges from overweight to severely obese with lots of life-limiting issues. The BMI calculation uses the following simple formula:

BMI = weight (kg) / height (m)²

The reason I chose this formula is that it presents the possibility of a divide-by-zero error, which the code I write must defend against.
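As a quick sanity check of the formula before looking at the defensive version, here's the raw calculation for an 85 kg person who is 1.8 m tall (the class name is my own):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BmiExample {

    // BMI = weight (kg) / height (m)², rounded to four significant figures
    static double bmi(double weightKg, double heightM) {
        double raw = weightKg / (heightM * heightM);
        return new BigDecimal(raw).round(new MathContext(4)).doubleValue();
    }

    public static void main(String[] args) {
        // 85 / (1.8 * 1.8) = 85 / 3.24 = 26.2345..., which rounds to 26.23
        System.out.println(bmi(85.0, 1.8));
    }
}
```

Note what happens if heightM is 0.0: the division yields Infinity, and new BigDecimal(raw) then throws a NumberFormatException, which is exactly why the inputs need checking.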

public class BodyMassIndex {

  /**
   * Calculate the BMI using Weight(kg) / height(m)²
   *
   * @return Returns the BMI to four significant figures e.g. nn.nn
   */
  public Double calculate(Double weight, Double height) {

    Validate.notNull(weight, "Your weight cannot be null");
    Validate.notNull(height, "Your height cannot be null");

    Validate.validState(weight.doubleValue() > 0, "Your weight cannot be zero");
    Validate.validState(height.doubleValue() > 0, "Your height cannot be zero");

    Double tmp = weight / (height * height);

    BigDecimal result = new BigDecimal(tmp);
    MathContext mathContext = new MathContext(4);
    result = result.round(mathContext);

    return result.doubleValue();
  }
}
The code above uses the idea put forward in Steve's 1993 definition of defensive programming. When the program calls calculate(Double weight, Double height), four validations are carried out, testing the state of each input argument and throwing an appropriate exception on failure. As this is the 21st century, I didn't have to define my own validation routines; I simply used those provided by the Apache commons-lang3 library and imported:

import org.apache.commons.lang3.Validate;

…and added:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.1</version>
</dependency>

…to my pom.xml.

The Apache commons-lang library contains the Validate class, which provides some basic validation. If you need more sophisticated validation algorithms, take a look at the Apache commons-validator library.
Once validated, the calculate(…) method computes the BMI and rounds it to four significant figures (e.g. nn.nn). It then returns the result to the caller. Using Validate allows me to write lots of JUnit tests to check that the method fails cleanly when given bad input and to differentiate between each type of failure:

public class BodyMassIndexTest {

  private BodyMassIndex instance;

  @Before
  public void setUp() throws Exception {
    instance = new BodyMassIndex();
  }

  @Test
  public void test_valid_inputs() {

    final Double expectedResult = 26.23;

    Double result = instance.calculate(85.0, 1.8);
    assertEquals(expectedResult, result);
  }

  @Test(expected = NullPointerException.class)
  public void test_null_weight_input() {
    instance.calculate(null, 1.8);
  }

  @Test(expected = NullPointerException.class)
  public void test_null_height_input() {
    instance.calculate(75.0, null);
  }

  @Test(expected = IllegalStateException.class)
  public void test_zero_height_input() {
    instance.calculate(75.0, 0.0);
  }

  @Test(expected = IllegalStateException.class)
  public void test_zero_weight_input() {
    instance.calculate(0.0, 1.8);
  }
}
One of the "advantages" of the C code is that you can turn the ASSERT(f) off and on using a compiler switch. If you need to do this in Java, take a look at using Java's assert keyword.
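For example, here's a rough Java equivalent of Steve's ASSERT (my own sketch): the check below runs only when the JVM is started with -ea (enable assertions), mirroring the DEBUG compiler switch:

```java
public class AssertDemo {

    static double reciprocal(int a) {
        // Evaluated only when assertions are enabled with -ea;
        // with assertions disabled the check costs nothing at runtime.
        assert a != 0 : "a must not be zero";
        return 1.0 / a;
    }

    public static void main(String[] args) {
        System.out.println(reciprocal(4));
        // Under -ea, reciprocal(0) throws AssertionError: a must not be zero
    }
}
```

Like the C version, this is a debugging aid rather than production input validation; user-facing checks should still throw proper exceptions.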
The above sample is what I'd hope we'd agree is the well-written sample - the good code. So, what's needed now is the badly written sample. The main criticism of defensive programming is that it can hide errors, and that's very true - if you write bad code.

public class BodyMassIndex {

  /**
   * Calculate the BMI using Weight(kg) / height(m)²
   *
   * @return Returns the BMI to four significant figures e.g. nn.nn
   */
  public Double calculate(Double weight, Double height) {

    Double result = null;

    if ((weight != null) && (height != null) && (weight > 0.0) && (height > 0.0)) {

      Double tmp = weight / (height * height);

      BigDecimal bd = new BigDecimal(tmp);
      MathContext mathContext = new MathContext(4);
      bd = bd.round(mathContext);
      result = bd.doubleValue();
    }

    return result;
  }
}
The code above also checks against both null and zero arguments, but it does so using the following if statement:

  if ((weight != null) && (height != null) && (weight > 0.0) && (height > 0.0)) {

Looking on the bright side, the code won't crash if the inputs are incorrect, but it won't tell the caller what's gone wrong; it'll simply hide the error and return null. Although it hasn't crashed, you have to ask: what's the caller going to do with a null return value? It'll either have to ignore the problem or process the error there and then, using something like this:

  @Test
  public void test_zero_weight_input_forces_additional_checks() {

    Double result = instance.calculate(0.0, 1.8);
    if (result == null) {
      System.out.println("Incorrect input to BMI calculation");
      // process the error
    } else {
      System.out.println("Your BMI is: " + result.doubleValue());
    }
  }
If this 'bad' coding technique is used throughout a code base, then there will be a large amount of extra code required to check each return value.

It's a good idea to NEVER return null values from a method. For more information take a look at this set of blogs.
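If a method genuinely might have no result, one alternative worth mentioning (my own sketch, using Java 8's Optional rather than anything in the original code) is to make that possibility explicit in the return type instead of hiding it in a null:

```java
import java.util.Optional;

public class SafeBmi {

    // Invalid input yields an explicit "no value" rather than a silent null
    public static Optional<Double> calculate(Double weight, Double height) {
        if (weight == null || height == null || weight <= 0.0 || height <= 0.0) {
            return Optional.empty();
        }
        return Optional.of(weight / (height * height));
    }
}
```

The type system now forces every caller to deal with the empty case, so the error can't be silently ignored the way a null can.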
In conclusion, I really don't think there's any difference between defensive programming and fail-fast programming; they're the same thing. Isn't there, as always, just good coding and bad coding? I'll let you decide.

This code sample is available on GitHub.

1 There's always a paradigm shift in thinking when learning a new language. There will be a point where the penny drops and you "get it", whatever it is.



Published at DZone with permission of Roger Hughes, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.

