# Memoization: Make Recursive Algorithms Efficient

### Learn how to use a technique called memoization to make recursive algorithms efficient through dynamic programming.

· Performance Zone · Tutorial

Memoization is a technique for implementing dynamic programming that makes recursive algorithms efficient. It often yields the same benefits as ordinary dynamic programming without requiring major changes to the original, more natural recursive algorithm.

## Basic Idea

• First, design the natural recursive algorithm.
• If recursive calls with the same arguments are made repeatedly, the inefficient recursive algorithm can be memoized by saving these subproblem solutions in a table so they do not have to be recomputed.

## Implementation

To add memoization to a recursive algorithm, a table of subproblem solutions is maintained, but the control structure for filling in the table occurs during the normal execution of the recursive algorithm. This can be summarized in four steps:

1. A memoized recursive algorithm maintains a table entry for the solution to each subproblem.
2. Each table entry initially contains a special value to indicate that the entry has yet to be filled in.
3. When a subproblem is first encountered, its solution is computed and stored in its entry.
4. Subsequently, the value is looked up rather than recomputed.
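As a quick sketch of these four steps applied to a different subproblem, binomial coefficients C(n, k), the table, the sentinel value, the compute-and-store step, and the lookup might look like this (an illustrative sketch, not from the article; the method names are made up):

```java
// Hypothetical example: memoized binomial coefficients C(n, k),
// following the four steps above.
static int binom(int n, int k) {
    int[][] table = new int[n + 1][n + 1];        // step 1: one entry per subproblem
    for (int[] row : table) Arrays.fill(row, -1); // step 2: -1 marks "not yet filled in"
    return binom(n, k, table);
}

static int binom(int n, int k, int[][] table) {
    if (k == 0 || k == n) return 1;               // base cases need no table entry
    if (table[n][k] == -1) {                      // step 3: first encounter, compute and store
        table[n][k] = binom(n - 1, k - 1, table) + binom(n - 1, k, table);
    }
    return table[n][k];                           // step 4: afterwards, just look it up
}
```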

To illustrate the steps above, consider computing the nth Fibonacci number with the natural recursive algorithm:

```java
// without memoization
static int fib(int n) {
    if (n == 0 || n == 1) return n;

    return fib(n - 1) + fib(n - 2);
}
```

The runtime of the above algorithm is roughly O(2^n). This can be visualized for fib(5) as below:

[Figure: recursion tree of the calls made by fib(5)]

In the above tree, you can see that the same values (for example fib(1), fib(0), ...) are computed repeatedly. In fact, each time we compute fib(i), we could cache the result and reuse it later. Then a call to fib(n) needs little more than O(n) work, since there are only O(n) distinct values ever passed to fib.

```java
// memoized version (uses java.util.Arrays)
static int fibonacciMemo(int n) {
    // entry table to cache the computed values
    int[] fibs = new int[n + 1];
    // initialize the entry table with -1 to mark values not yet calculated
    Arrays.fill(fibs, -1);

    return fib(n, fibs);
}

static int fib(int n, int[] fibs) {
    if (n == 0 || n == 1) return n;

    if (fibs[n] == -1) {
        fibs[n] = fib(n - 1, fibs) + fib(n - 2, fibs);
    }

    return fibs[n];
}
```
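The array table works here because the argument is a small, dense integer. When the arguments are sparse or unbounded, the same pattern is often written with a map as the cache (an illustrative sketch, not from the article; the method name is made up):

```java
// Hypothetical variation: cache in a HashMap instead of a pre-sized array.
static final Map<Integer, Integer> cache = new HashMap<>();

static int fibMap(int n) {
    if (n == 0 || n == 1) return n;
    Integer cached = cache.get(n);
    if (cached == null) {                 // first encounter: compute and store
        cached = fibMap(n - 1) + fibMap(n - 2);
        cache.put(n, cached);
    }
    return cached;                        // otherwise: just look it up
}
```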

You can see how easily, with just a small modification, we were able to make this function run in O(n) time:

• The algorithm does not have to be transformed into an iterative one.
• It often offers the same (or better) efficiency as the usual dynamic programming approach.
• Calculating the 45th Fibonacci number without memoization took 5136 ms on my machine (i7, octa-core):
```java
long startTime = System.currentTimeMillis();
System.out.println(fib(45));
System.out.println("Time taken (w/o memo): " + (System.currentTimeMillis() - startTime) + " ms");
```
But with memoization, the same call took 0 ms.
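Wall-clock numbers vary from machine to machine; another way to see the gap is to count calls. An instrumented copy of the naive version (a sketch, not from the article) makes the exponential blow-up concrete:

```java
// Hypothetical instrumented copy of the naive fib that counts every invocation.
static long calls = 0;

static int countedFib(int n) {
    calls++;
    if (n == 0 || n == 1) return n;
    return countedFib(n - 1) + countedFib(n - 2);
}
```

For n = 20 this makes 21,891 calls, while the memoized version fills each of the n + 1 table entries exactly once.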

This method is also sometimes called Top-Down Dynamic Programming.
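For comparison, the bottom-up (tabulation) form of the same computation fills the table iteratively, smallest subproblem first (a sketch, not from the article; the method name is made up):

```java
// Hypothetical bottom-up counterpart: fill the table from the
// smallest subproblem upward with an explicit loop.
static int fibBottomUp(int n) {
    if (n == 0 || n == 1) return n;
    int[] fibs = new int[n + 1];
    fibs[0] = 0;
    fibs[1] = 1;
    for (int i = 2; i <= n; i++) {
        fibs[i] = fibs[i - 1] + fibs[i - 2];
    }
    return fibs[n];
}
```

The time is the same O(n); the difference is that the control structure is an explicit loop rather than the recursion itself.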

Isn't this a cool way to make recursive algorithms efficient?

All source code presented above is available on GitHub.

