
Fast, Effective, Double-Checked Locking in Android and Java Apps



[Editor's note: This post was originally written by Jason Snell at New Relic]

Here at New Relic, code performance is of paramount importance. However, in today’s complex world of multithreaded mobile applications, we often find ourselves sacrificing small amounts of performance for the sake of memory consistency. Designing around and later debugging race conditions can be extremely time-consuming and frustrating, so it’s not uncommon to become a little overzealous with locking. Fortunately, there are a few simple patterns that make locking effective without unnecessarily impacting performance.

Let’s start by taking a look at a basic class with a simple setter:

public class Foo {
    private Map<String, String> data;

    public Foo() {
        data = new HashMap<String, String>();
    }

    public void setData(String key, String value) {
        data.put(key, value);
    }
}

In this case, every time we allocate a new instance of Foo, we’re allocating a new HashMap regardless of whether we’re using it. Normally, on a powerful server, this allocation is relatively cheap. However, on a device that fits in your pocket and runs all day on a battery, allocations start to add up quickly. To speed things up, let’s rewrite this class using lazy allocation:

public class Foo {
    private Map<String, String> data;

    public Foo() { }

    public void setData(String key, String value) {
        if (data == null)
            data = new HashMap<String, String>();

        data.put(key, value);
    }
}

Now, the constructor is virtually a no-op, and we’ll only incur the cost of allocating a HashMap for data when there’s a good reason. At this point, we have a fast implementation and we’re only using memory when absolutely necessary. However, this implementation is far from thread-safe. Since threading on Android is required at a fundamental level (you can’t block on I/O on the UI thread), it’s always important to keep thread safety in mind.
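To see why, it helps to make the hazard concrete. The sketch below is my own addition, not from the original post: it replays one unlucky interleaving of two threads in deterministic, single-threaded form. Thread A checks the field and is preempted; thread B allocates the map and stores an entry; then A resumes and clobbers the field with a fresh map:

```java
import java.util.HashMap;
import java.util.Map;

public class LostUpdateDemo {
    public static void main(String[] args) {
        Map<String, String> data = null;            // the shared field

        // Thread A: reads the field, sees null, then is preempted.
        boolean aSawNull = (data == null);

        // Thread B: also sees null, allocates the map, and stores an entry.
        Map<String, String> bMap = new HashMap<String, String>();
        bMap.put("fromB", "value");
        data = bMap;

        // Thread A resumes: acting on its stale null check, it overwrites
        // the field with a brand-new map, silently discarding B's entry.
        if (aSawNull) {
            data = new HashMap<String, String>();
        }
        data.put("fromA", "value");

        System.out.println(data.containsKey("fromB"));  // false: B's write is lost
    }
}
```

On a real device this interleaving is rare and nondeterministic, which is exactly what makes the bug so painful to track down.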

In our example, there are two places we need to be extra careful. First, we need a thread-safe data structure; using ConcurrentHashMap in place of HashMap easily solves this. The second, more subtle race condition is in the allocation of our data field itself. Two threads may both see data as null at the same time and each try to allocate the map. Worse yet, the first thread may put an entry into its map just before the second thread overwrites the field with its own map, silently losing that entry. To prevent this race, we can always just use a synchronized block:

public class Foo {
    private Map<String, String> data;

    public Foo() { }

    public void setData(String key, String value) {
        synchronized (this) {
            if (data == null)
                data = new ConcurrentHashMap<String, String>();
        }

        data.put(key, value);
    }
}

This ensures that only one thread at a time can check and, if necessary, allocate a new map for the data field. However, you’ll recall we’re interested in optimizing for performance, and unfortunately, synchronized blocks tend to be expensive: in this case, only one thread can effectively call our setData() method at a time. Fortunately, there is another pattern we can employ: double-checked locking. Wikipedia has an excellent article on the general approach. In our case, it looks like this:

public class Foo {
    private volatile Map<String, String> data;

    public Foo() { }

    public void setData(String key, String value) {
        if (data == null) {
            synchronized (this) {
                if (data == null)
                    data = new ConcurrentHashMap<String, String>();
            }
        }

        data.put(key, value);
    }
}

There are two important changes here. The first is adding the volatile modifier to the data field. This instructs the compiler, and eventually the Dalvik VM, to establish a happens-before relationship: the write of the fully constructed map to the field happens-before every subsequent read of that field, so a thread that sees a non-null data is guaranteed to see a completely initialized map. Without this keyword, the compiler or JIT optimizer may reorder the field assignment ahead of the object’s construction, exposing a partially initialized map to other threads. Next, we add another null check for data outside the synchronized block. This ensures that once data has been allocated, we never synchronize again. If data is indeed null, we synchronize and then double-check whether data is still null, to make sure another thread hasn’t assigned a value between the two checks. If we lost the race, we just continue normally and exit the synchronized block.
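As a quick sanity check, here is the final class exercised from several threads at once. The getData() accessor and main() harness are my additions for the demo, not part of the original class:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Foo {
    private volatile Map<String, String> data;

    public void setData(String key, String value) {
        if (data == null) {
            synchronized (this) {
                if (data == null)
                    data = new ConcurrentHashMap<String, String>();
            }
        }
        data.put(key, value);
    }

    // Read-side accessor added for the demo (not in the original class).
    public String getData(String key) {
        Map<String, String> map = data;   // single volatile read
        return map == null ? null : map.get(key);
    }

    public static void main(String[] args) throws InterruptedException {
        final Foo foo = new Foo();
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            final String key = "key-" + i;
            workers[i] = new Thread(new Runnable() {
                public void run() { foo.setData(key, "value"); }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // All eight writes survive: the map is allocated exactly once.
        for (int i = 0; i < workers.length; i++) {
            if (!"value".equals(foo.getData("key-" + i)))
                throw new AssertionError("lost write for key-" + i);
        }
        System.out.println("all 8 entries present");
    }
}
```

Because the map is allocated exactly once inside the synchronized block, every thread’s put lands in the same ConcurrentHashMap and no entry is lost.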

The observant and perhaps wiser developer will note that this pattern wasn’t always reliable: it was only made safe by the revised memory model introduced in Java 1.5. The Dalvik VM has a similar history. I’d also highly recommend reading up on memory consistency on Android.
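If the subtlety of double-checked locking makes you nervous, a lock-free alternative (my suggestion, not from the original post) is java.util.concurrent.atomic.AtomicReference. Its compareAndSet performs the one-time allocation atomically, at the cost of occasionally allocating a map that immediately loses the race and becomes garbage:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

public class Foo {
    // AtomicReference reads and writes carry the same visibility
    // guarantees as a volatile field, plus an atomic compare-and-set.
    private final AtomicReference<Map<String, String>> data =
            new AtomicReference<Map<String, String>>();

    public void setData(String key, String value) {
        Map<String, String> map = data.get();
        if (map == null) {
            // Exactly one thread's CAS succeeds; every loser simply
            // re-reads the winner's map (its own allocation is discarded).
            data.compareAndSet(null, new ConcurrentHashMap<String, String>());
            map = data.get();
        }
        map.put(key, value);
    }

    // Hypothetical accessor added so the sketch is self-contained.
    public String getData(String key) {
        Map<String, String> map = data.get();
        return map == null ? null : map.get(key);
    }
}
```

This trades a rare wasted allocation for never blocking, which is often a fine bargain on mobile; the double-checked version above avoids the wasted allocation entirely.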

Having performance issues of your own?  Take New Relic Mobile’s Android SDK for a spin!





Published at DZone with permission of Leigh Shevchik, DZone MVB. See the original article here.
