
My use case for checked exceptions

Checked exceptions are an idiomatic Java feature that has been questioned by many in recent years: throws clauses specify the possible errors raised by a method, and the calling code is forced to deal with them at compile time, either by wrapping the call in a try/catch construct or by adding a throws clause of its own.
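As a refresher, here is a minimal sketch of the two options; the class, the helper method, and the file name are my own illustration, not part of the article's code base:

import java.io.FileReader;
import java.io.IOException;

public class CheckedExceptionExample {

    // Option 1: propagate the checked exception by declaring it ourselves.
    static String firstChar(String path) throws IOException {
        try (FileReader reader = new FileReader(path)) {
            return String.valueOf((char) reader.read());
        }
    }

    // Option 2: handle it immediately with a try/catch construct.
    public static void main(String[] args) {
        try {
            System.out.println(firstChar("example.txt"));
        } catch (IOException e) {
            System.err.println("Could not read the file: " + e.getMessage());
        }
    }
}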

However, dynamic languages and also statically typed ones like C# do not force the immediate handling of exceptions.

The dangers of throws clauses

There are some technical issues with checked exceptions: they sometimes produce unreachable code, when an implementation does not throw the exceptions declared by its interface (Misko Hevery cites the example of ByteArrayOutputStream).

Other issues are methodological: checked exceptions produce an inflation of try/catch blocks, which proliferate and quickly start to be ignored by programmers. Moreover, there is no recovery from many exceptions (at least for the current thread of execution), so making every single piece of code specify how to handle the error seems like overkill.
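To make the unreachable-code problem concrete, here is a small sketch along the lines of Hevery's ByteArrayOutputStream example (the class and method names are mine):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class UnreachableCatch {

    public static byte[] serialize(String text) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        OutputStream out = buffer;
        try {
            // OutputStream.write() declares IOException, so the compiler demands a handler...
            out.write(text.getBytes());
        } catch (IOException e) {
            // ...but a ByteArrayOutputStream only writes to memory: this branch is dead code.
            throw new AssertionError("Cannot happen", e);
        }
        return buffer.toByteArray();
    }
}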

An alternative to checked exceptions is to exercise the code with a comprehensive test suite, one that contains tests for corner cases and error conditions. But you can't blame the hammer for smashing your fingers: every tool has use cases where it fits and cases where it doesn't. Except that the Java API is full of checked exceptions, so you're forced to use a hammer if you want to read a file.

A note on distributed computing

It is widely believed that it is impossible to transparently extend the local computation paradigm (made of calls to functions and methods) to the realm of distributed computing. Remote Procedure Call and Remote Method Invocation are useful abstractions, but they can't present an interface identical to that of local methods. The famous paper A Note on Distributed Computing explains the underlying issues of distribution that can't be dealt with automatically:

  • Latency in remote calls is several orders of magnitude higher than with local ones.
  • Memory access in low-level languages like C is problematic, since memory cannot be shared with remote processes. You can't pass a pointer to a server, although in the case of virtual machines Virtual Proxies more or less work.
  • Concurrency is everywhere, and there is no way to provide synchronization automatically with synchronized and common multithreading patterns, unless you adopt specific algorithms.
  • Partial failures are possible. Locally, a failure usually results in the termination of the whole process; distributing the computation means not only that failures are more likely, due to unreliable channels, but also that they affect only a part of the system, and the surviving parts should continue to behave correctly and deal with them (indeed, distribution's job is often masking the failures of individual components). You don't want your database master server to fail because it failed to update a slave.

Thus distributed computing cannot be made similar to local computing, unless we choose one of two options:

  • make the local paradigm similar to the distributed one; this solution would needlessly introduce difficulties, like checking for failures that will never happen locally during your lifetime. Catching RemoteExceptions on all your method calls is no fun (see the sketch after this list).
  • make the distributed paradigm similar to the local one, overlooking frequent failure modes and the problems listed above. What happens in these cases would be undefined.
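This is roughly what the first option looks like with Java RMI: every remote interface extends Remote and every method must declare RemoteException, even when the implementation ends up running in the same process. The interface below is a hypothetical example of mine:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Every caller of these methods must catch RemoteException or declare it,
// whether or not that failure can actually happen in its deployment.
public interface DocumentsCatalog extends Remote {

    String titleOf(String documentId) throws RemoteException;

    int documentCount() throws RemoteException;
}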

Checked exceptions come in

Some objects hide network communication, by containing references to TCP sockets and streams, by sending UDP packets, or by making HTTP calls. Implementations like RMI do this automatically, but the substance doesn't change.

We need to bridge these objects with local computation. For example, if we are implementing a distributed search algorithm we simply want to cut away the nodes that do not respond, showing the user only the other results. We try to mask remote failures while guaranteeing basic functionality.

I found this approach helpful in connecting local and remote objects:

  • objects that work with the network throw checked exceptions. These exceptions are part of the interface just as much as the return type is.
  • Bridge objects deal with partial failure, synchronization, and security; they attempt to mask failures or performance issues and provide a basic correct behavior.
  • Objects that work over bridges do not need to care about the network, and they can be tested in isolation.

Checked exceptions are a way to ensure we're not mixing up the two models: you are forced to deal with potential failures deriving from the network at compile time. It's the exception's job to propagate into the signatures of the methods until we handle it; it's impossible to call a method that uses the network without noticing.

I saw a similar idea for "formal" verification presented by Giacomo Tesio at DDD-Day last year, where domain models used checked exceptions to express failures. The idea is that these kinds of failures (distributed ones, or relevant errors in the business domain) are too important to be left to test coverage alone. The preference is to sacrifice some of the code's flexibility to ensure the architecture is not compromised by objects calling the server inside a local computation (like a comparison in a sorting algorithm or the filtering of results).
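A hypothetical illustration of that idea, with a domain and class names of my own invention rather than Tesio's actual code, could look like this:

// InsufficientFundsException.java: a relevant business failure modeled as a checked exception.
public class InsufficientFundsException extends Exception {
    public InsufficientFundsException(String message) {
        super(message);
    }
}

// Account.java: callers of withdraw() cannot compile without deciding
// what to do when the business rule is violated.
public class Account {
    private int balanceInCents;

    public Account(int balanceInCents) {
        this.balanceInCents = balanceInCents;
    }

    public void withdraw(int amountInCents) throws InsufficientFundsException {
        if (amountInCents > balanceInCents) {
            throw new InsufficientFundsException("Balance is lower than the requested amount.");
        }
        balanceInCents -= amountInCents;
    }
}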

Code sample

In my example I implemented a search targeting multiple remote servers. The interface of the objects dealing with remote computation declares a checked exception in its throws clause:

public interface RemoteDocumentsSearcher {

    QueryResult query(Query query)
            throws NoResponseException;
}


Internally, the Java socket exceptions are wrapped into new ones, representing the failure modes the application is interested in. The Only mock types you own principle may apply to exceptions as well: I prefer to establish my own hierarchy, made of classes to which I can add methods and fields.

try {
    return reader.readLine();
} catch (SocketException e) {
    throw new ConnectionClosedException("Connection closed while reading.");
} catch (IOException e) {
    throw new ConnectionClosedException("Connection closed while reading.");
}
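For completeness, here is a sketch of what such a hierarchy could look like; the assumption that ConnectionClosedException specializes NoResponseException is mine, made so that it still satisfies the throws clause of the interface above:

// NoResponseException.java: the general failure mode declared by RemoteDocumentsSearcher.
public class NoResponseException extends Exception {
    public NoResponseException(String message) {
        super(message);
    }
}

// ConnectionClosedException.java: a more specific failure that can still be
// caught and handled as a NoResponseException by the bridge objects.
public class ConnectionClosedException extends NoResponseException {
    public ConnectionClosedException(String message) {
        super(message);
    }
}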

In the bridge object, the error handling consists simply of excluding the current server from the results, but it could be anything in principle; for example, in the case of fetching a file from multiple sources, it could consist of setting up a new connection to one of the surviving servers. The important thing is to produce a behavior that composes with the rest of the process, instead of bubbling up as an exception or a RuntimeException:

QueryResult result = new QueryResult();
if (valid()) {
    try {
        QueryResult additionalResult = searcher.query(this);
        result = result.merge(additionalResult);
    } catch (NoResponseException e) {
        // exclude non-responding neighbors from the search
    }
}
return result;

The bridge object implements the following interface. In this case it is very similar to the original one, minus the exceptions, but it doesn't have to be: the behavior of an object that masks failures can be less powerful than that of the original one.

public interface DocumentsSearcher {
    QueryResult searchDocument(Query query);
}
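Putting the pieces together, a bridge implementation could look like the sketch below; the class name, the constructor, and the loop over several remote searchers are my own assumptions about the surrounding code:

import java.util.List;

public class FailureMaskingSearcher implements DocumentsSearcher {

    private final List<RemoteDocumentsSearcher> remoteSearchers;

    public FailureMaskingSearcher(List<RemoteDocumentsSearcher> remoteSearchers) {
        this.remoteSearchers = remoteSearchers;
    }

    @Override
    public QueryResult searchDocument(Query query) {
        QueryResult result = new QueryResult();
        for (RemoteDocumentsSearcher searcher : remoteSearchers) {
            try {
                // the checked exception forces us to decide what to do right here
                result = result.merge(searcher.query(query));
            } catch (NoResponseException e) {
                // mask the partial failure: the unreachable server is simply skipped
            }
        }
        return result;
    }
}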
