What is the difference between memoization and dynamic programming? - dynamic-programming

What is the difference between memoization and dynamic programming? I think dynamic programming is a subset of memoization. Is that right?

Relevant article on Programming.Guide: Dynamic programming vs memoization vs tabulation
What is the difference between memoization and dynamic programming?
Memoization is a term describing an optimization technique where you cache previously computed results, and return the cached result when the same computation is needed again.
Dynamic programming is a technique for solving problems of recursive nature, iteratively and is applicable when the computations of the subproblems overlap.
Dynamic programming is typically implemented using tabulation, but can also be implemented using memoization. So as you can see, neither one is a "subset" of the other.
A reasonable follow-up question is: What is the difference between tabulation (the typical dynamic programming technique) and memoization?
When you solve a dynamic programming problem using tabulation you solve the problem "bottom up", i.e., by solving all related sub-problems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the "top" / original problem is then computed.
If you use memoization to solve the problem you do it by maintaining a map of already solved sub problems. You do it "top down" in the sense that you solve the "top" problem first (which typically recurses down to solve the sub-problems).
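To make the contrast concrete, here is a minimal illustration (my own sketch, not part of the original answer) of one problem, the binomial coefficient C(n, k), solved both ways in Java:

import java.util.HashMap;
import java.util.Map;

class BinomialDemo {
    // Top-down memoization: solve C(n, k) first; recurse into subproblems
    // and cache each result in a map keyed by (n, k).
    static Map<Long, Long> memo = new HashMap<>();

    static long memoized(int n, int k) {
        if (k == 0 || k == n) return 1;
        long key = (long) n << 32 | k;
        Long cached = memo.get(key);
        if (cached != null) return cached;
        long result = memoized(n - 1, k - 1) + memoized(n - 1, k);
        memo.put(key, result);
        return result;
    }

    // Bottom-up tabulation: fill a table of all smaller subproblems first,
    // then read the answer for the "top" problem out of the table.
    static long tabulated(int n, int k) {
        long[][] table = new long[n + 1][k + 1];
        for (int i = 0; i <= n; i++) {
            for (int j = 0; j <= Math.min(i, k); j++) {
                table[i][j] = (j == 0 || j == i)
                        ? 1
                        : table[i - 1][j - 1] + table[i - 1][j];
            }
        }
        return table[n][k];
    }
}

Both compute the same recurrence; the only difference is whether the subproblem order is discovered by the recursion (memoization) or fixed in advance by the loops (tabulation).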
A good slide from here (link is now dead, slide is still good though):
If all subproblems must be solved at least once, a bottom-up dynamic-programming algorithm usually outperforms a top-down memoized algorithm by a constant factor
No overhead for recursion and less overhead for maintaining table
There are some problems for which the regular pattern of table accesses in the dynamic-programming algorithm can be exploited to reduce the time or space requirements even further
If some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required
Additional resources:
Wikipedia: Memoization, Dynamic Programming
Related SO Q/A: Memoization or Tabulation approach for Dynamic programming

Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and stores the results of subproblems to avoid computing the same results again.
http://www.geeksforgeeks.org/dynamic-programming-set-1/
Memoization is an easy method to track previously solved solutions (often implemented as a hash map of key-value pairs, as opposed to tabulation, which is often based on arrays) so that they aren't recalculated when they are encountered again. It can be used with both bottom-up and top-down methods.
See this discussion on memoization vs tabulation.
So Dynamic programming is a method to solve certain classes of problems by solving recurrence relations/recursion and storing previously found solutions via either tabulation or memoization. Memoization is a method to keep track of solutions to previously solved problems and can be used with any function that has unique deterministic solutions for a given set of inputs.

Both memoization and dynamic programming solve each individual subproblem only once.
Memoization uses recursion and works top-down, whereas dynamic programming moves in the opposite direction, solving the problem bottom-up.
Below is an interesting analogy:
Top-down - First you say I will take over the world. How will you do that? You say I will take over Asia first. How will you do that? I will take over India first. I will become the Chief Minister of Delhi, etc. etc.
Bottom-up - You say I will become the CM of Delhi. Then will take over India, then all other countries in Asia and finally I will take over the world.

Dynamic Programming is often called Memoization!
Memoization is the top-down technique (start solving the given problem by breaking it down) and dynamic programming is a bottom-up technique (start solving from the trivial sub-problem, up towards the given problem).
DP finds the solution by starting from the base case(s) and working its way upwards.
DP solves all the sub-problems, because it works bottom-up.
Unlike memoization, which solves only the needed sub-problems.
DP has the potential to transform exponential-time brute-force solutions into polynomial-time algorithms.
DP may be much more efficient because it is iterative.
On the contrary, memoization must pay for the (often significant) overhead due to recursion.
To put it more simply:
Memoization uses the top-down approach to solve the problem, i.e. it begins with the core (main) problem, breaks it into sub-problems, and solves those sub-problems in the same way. In this approach the same sub-problem can occur multiple times and consume extra CPU cycles, increasing the time complexity. In dynamic programming, by contrast, the same sub-problem will not be solved multiple times; instead the prior result is used to optimize the solution.

(1) Memoization and DP are, conceptually, really the same thing. Consider the definition of DP: "overlapping subproblems" and "optimal substructure". Memoization fully possesses both.
(2) Memoization is DP with the risk of stack overflow if the recursion is deep. Bottom-up DP does not have this risk.
(3) Memoization needs a hash table, so there is additional space, and some lookup time.
So to answer the question:
- Conceptually, (1) means they are the same thing.
- Taking (2) into account, if you really want, memoization is a subset of DP, in the sense that a problem solvable by memoization will be solvable by DP, but a problem solvable by DP might not be solvable by memoization (because it might overflow the stack).
- Taking (3) into account, they have minor differences in performance.

From Wikipedia:
Memoization
In computing, memoization is an optimization technique used primarily to speed up computer programs by having function calls avoid repeating the calculation of results for previously-processed inputs.
Dynamic Programming
In mathematics and computer science, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems.
When breaking a problem into smaller/simpler subproblems, we often encounter the same subproblem more than once - so we use memoization to save the results of previous calculations so we don't need to repeat them.
Dynamic programming often encounters situations where it makes sense to use memoization, but you can use either technique without necessarily using the other.

I would like to go with an example:
Problem:
You are climbing a stair case. It takes n steps to reach to the top.
Each time you can either climb 1 or 2 steps. In how many distinct ways
can you climb to the top?
Recursion with Memoization
In this way we are pruning (removing excess branches from) the recursion tree with the help of the memo array, reducing the size of the recursion tree to n.
public class Solution {
    public int climbStairs(int n) {
        int[] memo = new int[n + 1];
        return climb_Stairs(0, n, memo);
    }

    public int climb_Stairs(int i, int n, int[] memo) {
        if (i > n) {
            return 0;       // overshot the top: not a valid way
        }
        if (i == n) {
            return 1;       // reached the top: exactly one valid way
        }
        if (memo[i] > 0) {
            return memo[i]; // already computed for this step
        }
        memo[i] = climb_Stairs(i + 1, n, memo) + climb_Stairs(i + 2, n, memo);
        return memo[i];
    }
}
Dynamic Programming
As we can see, this problem can be broken into subproblems and it has the optimal substructure property, i.e. its optimal solution can be constructed efficiently from the optimal solutions of its subproblems, so we can use dynamic programming to solve it.
public class Solution {
    public int climbStairs(int n) {
        if (n == 1) {
            return 1;
        }
        int[] dp = new int[n + 1];
        dp[1] = 1;
        dp[2] = 2;
        for (int i = 3; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}
Examples taken from https://leetcode.com/problems/climbing-stairs/

Just think of two ways:
We break down the bigger problem into smaller sub-problems - top-down approach.
We start from the smallest sub-problem and reach the bigger problem - bottom-up approach.
In memoization we go with (1), where we save each function call in a cache and answer repeat calls from there. It is a bit more expensive, as it involves recursive calls.
In dynamic programming we go with (2), where we maintain a table bottom-up, solving sub-problems using the data saved in the table, commonly referred to as the dp-table.
Note:
Both are applicable to problems with overlapping sub-problems.
Memoization performs comparatively poorly compared to DP due to the overhead of recursive function calls.
The asymptotic time complexity remains the same.

There are some similarities between dynamic programming (DP) and memoization, and in most cases you can implement a dynamic programming process by memoization and vice versa. But they do have some differences, and you should check them out when deciding which approach to use:
Memoization is a top-down approach during which you decompose a big problem into smaller-size subproblems with the same properties, and when the size is small enough you can easily solve it by brute force. Dynamic programming is a bottom-up approach during which you first calculate the answers for small cases and then use them to construct the answers for big cases.
During coding, memoization is usually implemented by recursion, while dynamic programming does its calculation by iteration. So if you have carefully calculated the space and time complexity of your algorithm, a dynamic-programming-style implementation can offer better performance.
There do exist situations where using memoization has advantages. Dynamic programming needs to calculate every subproblem because it doesn't know which ones will be useful in the future, but memoization calculates only the subproblems related to the original problem. Sometimes you may design a DP algorithm with a theoretically tremendous number of dp states, but careful analysis shows that only an acceptable number of them will actually be used. In this situation it is preferable to use memoization to avoid huge execution time.
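As a hedged illustration of that last point (my own example, not from the answer above): a recurrence such as f(n) = f(n/2) + f(n/3) with f(0) = 1 has n + 1 states on paper, yet only the states reachable from n by repeated integer division are ever touched.

import java.util.HashMap;
import java.util.Map;

class SparseStates {
    // Memoization touches only the states reachable from n by /2 and /3,
    // far fewer than n; a bottom-up table of size n + 1 would compute
    // mostly-unused entries.
    static Map<Long, Long> cache = new HashMap<>();

    static long f(long n) {
        if (n == 0) return 1;
        Long cached = cache.get(n);
        if (cached != null) return cached;
        long result = f(n / 2) + f(n / 3);
        cache.put(n, result);
        return result;
    }
}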

In Dynamic Programming:
No overhead for recursion, less overhead for maintaining the table.
The regular pattern of table accesses may be used to reduce time or space requirements.
In Memoization:
Some subproblems do not need to be solved.

Here is a sample of memoization and DP for the Fibonacci number problem, written in Java.
The dynamic programming version involves no recursion; as a result it is faster and can compute higher values, because it is not limited by the execution stack.
public class Solution {

    public static long fibonacciMemoization(int i) {
        return fibonacciMemoization(i, new long[i + 1]);
    }

    public static long fibonacciMemoization(int i, long[] memo) {
        if (i <= 1) {
            return 1;
        }
        if (memo[i] != 0) {
            return memo[i]; // already computed
        }
        long val = fibonacciMemoization(i - 1, memo) + fibonacciMemoization(i - 2, memo);
        memo[i] = val;
        return val;
    }

    public static long fibonacciDynamicProgramming(int i) {
        if (i <= 1) {
            return 1; // same base case as the memoized version
        }
        long[] memo = new long[i + 1];
        memo[0] = 1;
        memo[1] = 1;
        for (int j = 2; j <= i; j++) {
            memo[j] = memo[j - 1] + memo[j - 2];
        }
        return memo[i];
    }

    public static void main(String[] args) {
        System.out.println("Fibonacci with Dynamic Programming");
        System.out.println(fibonacciDynamicProgramming(10));
        System.out.println(fibonacciDynamicProgramming(1_000_000)); // fine: no recursion
        System.out.println("Fibonacci with Memoization");
        System.out.println(fibonacciMemoization(10));
        System.out.println(fibonacciMemoization(1_000_000)); // StackOverflowError: recursion too deep
    }
}

Dynamic programming is an optimization over a plain recursive algorithm that considers all combinations of the input to provide the most suitable answer. The plain recursive approach has one drawback: huge time complexity. It can be made more efficient by the use of memoization, which stores every output of a subproblem and returns the stored answer directly whenever the algorithm tries to solve that subproblem again. This can give the algorithm polynomial time complexity.

Related

Running time - Dynamic programming algorithm

Any dynamic programming algorithm that solves N sub-problems in the process of computing its final answer must run in Ω(N) time.
Is this statement true? I am thinking that it is indeed true, as I need to compute every sub-problem. Please let me know if I am wrong.
The short answer is no. Dynamic programming is more of a strategy to boost performance/shorten runtime complexity than an actual algorithm. Without knowing the actual algorithm for a specific problem, it's not possible to say anything about time complexity.
The idea of DP is to use memoization (by consuming some space) to speed up an existing algorithm. Moreover, every algorithm you apply DP to may speed up in a different way. To avoid re-computing the same subtask multiple times, you store intermediate results in another data structure; if a result is needed again, you return the stored intermediate result directly.
With that being said, the time complexity of DP problems is the number of unique states/subproblems * time taken per state.
Here's one example where DP solves N sub-problems and the computation is not Θ(N):
Let's assume your DP requires O(n) subproblems and evaluating each subproblem costs an O(log n) binary search plus constant-time operations.
Then the overall algorithm would take O(n log n).
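One standard instance of this shape (my illustration; the answer above does not name a specific problem) is the O(n log n) longest-increasing-subsequence algorithm, where each of the n subproblems is resolved with an O(log n) binary search:

import java.util.Arrays;

class LisDemo {
    // Longest strictly increasing subsequence in O(n log n):
    // tails[k] = smallest possible tail of an increasing subsequence
    // of length k + 1 seen so far.
    static int lengthOfLIS(int[] nums) {
        int[] tails = new int[nums.length];
        int size = 0;
        for (int x : nums) {
            int i = Arrays.binarySearch(tails, 0, size, x);
            if (i < 0) i = -(i + 1); // insertion point
            tails[i] = x;            // extend or improve a subsequence
            if (i == size) size++;
        }
        return size;
    }
}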

memoization vs dynamic programming space complexity

I want to know whether, for a problem such as LCS, we can reduce space complexity for a memoized solution the way we can for a DP solution. When filling the table in DP we just use either dp[i - 1][j] or dp[i][j - 1] to fill dp[i][j], so instead of having a dp table of size m x n,
we can solve this using dp[2][n] and switch between states while calculating. Can this be done with memoization to reduce the space complexity to O(n + m)?
The simple answer is
NO
In bottom-up you can remove the rows that are unnecessary, because you know those rows will not be used again.
In memoization, recursion may reach subproblems in any order rather than following a fixed schedule. For example: there is a call from LCS(i, j) to LCS(i-1, j); let this result be calculated and saved. Now the recursion calls LCS(k, x) (for some other case), which leads to the same subproblem LCS(i-1, j). If you had removed that stored value, you would no longer be memoizing the solution correctly.
You cannot be certain which subproblems to keep and which to discard.
In contrast, in bottom-up we are certain which subproblems will not be used again (that's why we can eliminate the other rows).
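For reference, here is a minimal sketch (my own, assuming the usual LCS recurrence rather than code from this thread) of the two-row bottom-up version the question describes:

class LcsTwoRows {
    // Bottom-up LCS with dp[2][n + 1] instead of dp[m + 1][n + 1]:
    // row i only ever reads row i - 1, so two rows alternated via i % 2
    // are enough, giving O(n) auxiliary space.
    static int lcs(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] dp = new int[2][n + 1]; // row 0 starts all zeros (empty prefix)
        for (int i = 1; i <= m; i++) {
            int cur = i % 2, prev = 1 - cur;
            for (int j = 1; j <= n; j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    dp[cur][j] = dp[prev][j - 1] + 1;
                } else {
                    dp[cur][j] = Math.max(dp[prev][j], dp[cur][j - 1]);
                }
            }
        }
        return dp[m % 2][n];
    }
}

Exactly this row-dropping is what the answer above says a memoized version cannot safely do, since the recursion may revisit a discarded row.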

Dynamic Programming: top down versus bottom up comparison

Can you point me to some dynamic programming problem statements where bottom up is more beneficial than top down? (i.e. simple DP works more naturally but memoization would be harder to implement?)
I find recursion with memoization much easier, and want to solve problems where bottom up is a better/perhaps only feasible approach.
I understand that theoretically both are equivalent, so even something like ease of implementation would count as a benefit.
You will apply bottom-up with tabulation or top-down recursion with memoization depending on the problem at hand.
For example, if you have to find the maximum-weight independent set of a path graph, you will use the bottom-up approach, as you have to solve all of the possible subproblems.
But if you have to solve the knapsack problem, you may want to use recursive top-down with memoization, as you only have to solve a limited number of subproblems. Approaching the knapsack problem bottom-up will cause the algorithm to solve a lot of redundant subproblems that are not needed by the original problem.
Two things to consider when deciding which approach to use:
Time complexity. Both approaches have the same time complexity in general, but because a for loop is cheaper than recursive function calls, bottom-up can be faster when measured in machine time.
Space complexity. (Without considering the extra call-stack allocations during top-down.) Usually both approaches need to build a table for all sub-solutions, but because bottom-up follows a topological order, its auxiliary space can sometimes be reduced to the size of the problem's immediate dependencies. For example, for fibonacci(n) = fibonacci(n-1) + fibonacci(n-2) we only need to store the past two calculations; see the sketch after this list.
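A minimal sketch of that Fibonacci point (my illustration, assuming nothing beyond the recurrence above):

// Bottom-up Fibonacci in topological order: only the previous two
// values are live at any moment, so auxiliary space drops to O(1).
static long fib(int n) {
    long prev = 0, cur = 1; // fib(0), fib(1)
    for (int i = 2; i <= n; i++) {
        long next = prev + cur;
        prev = cur;
        cur = next;
    }
    return n == 0 ? prev : cur;
}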
That being said, bottom-up is not always the best choice; I will try to illustrate with examples:
(mentioned by @Nikunj Banka) Top-down only solves the sub-problems used by your solution, whereas bottom-up might waste time on redundant sub-problems. A silly example would be 0-1 knapsack with 1 item... the runtime difference is O(1) vs O(weight).
You might need to perform extra work to get the topological order for bottom-up. In Longest Increasing Path in Matrix, if we want to do sub-problems after their dependencies, we would have to sort all entries of the matrix in descending order; that's an extra O(nm log(nm)) of pre-processing time before the DP.

What is the difference between bottom-up and top-down?

The bottom-up approach (to dynamic programming) consists of first looking at the "smaller" subproblems, and then solving the larger subproblems using the solutions to the smaller problems.
The top-down approach consists of solving the problem in a "natural manner" and checking whether you have calculated the solution to the subproblem before.
I'm a little confused. What is the difference between these two?
rev4: A very eloquent comment by user Sammaron has noted that, perhaps, this answer previously confused top-down and bottom-up. While originally this answer (rev3) and other answers said that "bottom-up is memoization" ("assume the subproblems"), it may be the inverse (that is, "top-down" may be "assume the subproblems" and "bottom-up" may be "compose the subproblems"). Previously, I had read that memoization was a different kind of dynamic programming, as opposed to a subtype of dynamic programming. I was quoting that viewpoint despite not subscribing to it. I have rewritten this answer to be agnostic of the terminology until proper references can be found in the literature. I have also converted this answer to a community wiki. Please prefer academic sources. List of references: {Web: 1,2} {Literature: 5}
Recap
Dynamic programming is all about ordering your computations in a way that avoids recalculating duplicate work. You have a main problem (the root of your tree of subproblems), and subproblems (subtrees). The subproblems typically repeat and overlap.
For example, consider your favorite example of Fibonacci. This is the full tree of subproblems, if we did a naive recursive call:
TOP of the tree
fib(4)
├── fib(3)
│   ├── fib(2)
│   │   ├── fib(1)
│   │   └── fib(0)
│   └── fib(1)
└── fib(2)
    ├── fib(1)
    └── fib(0)
BOTTOM of the tree
(In some other rare problems, this tree could be infinite in some branches, representing non-termination, and thus the bottom of the tree may be infinitely large. Furthermore, in some problems you might not know what the full tree looks like ahead of time. Thus, you might need a strategy/algorithm to decide which subproblems to reveal.)
Memoization, Tabulation
There are at least two main techniques of dynamic programming which are not mutually exclusive:
Memoization - This is a laissez-faire approach: You assume that you have already computed all subproblems and that you have no idea what the optimal evaluation order is. Typically, you would perform a recursive call (or some iterative equivalent) from the root, and either hope you will get close to the optimal evaluation order, or obtain a proof that will help you arrive at the optimal evaluation order. You would ensure that the recursive call never recomputes a subproblem because you cache the results, and thus duplicate sub-trees are not recomputed.
example: If you are calculating the Fibonacci sequence fib(100), you would just call this, and it would call fib(100)=fib(99)+fib(98), which would call fib(99)=fib(98)+fib(97), ...etc..., which would call fib(2)=fib(1)+fib(0)=1+0=1. Then it would finally resolve fib(3)=fib(2)+fib(1), but it doesn't need to recalculate fib(2), because we cached it.
This starts at the top of the tree and evaluates the subproblems from the leaves/subtrees back up towards the root.
Tabulation - You can also think of dynamic programming as a "table-filling" algorithm (though usually multidimensional, this 'table' may have non-Euclidean geometry in very rare cases*). This is like memoization but more active, and involves one additional step: You must pick, ahead of time, the exact order in which you will do your computations. This should not imply that the order must be static, but that you have much more flexibility than memoization.
example: If you are performing fibonacci, you might choose to calculate the numbers in this order: fib(2),fib(3),fib(4)... caching every value so you can compute the next ones more easily. You can also think of it as filling up a table (another form of caching).
I personally do not hear the word 'tabulation' a lot, but it's a very decent term. Some people consider this "dynamic programming".
Before running the algorithm, the programmer considers the whole tree, then writes an algorithm to evaluate the subproblems in a particular order towards the root, generally filling in a table.
*footnote: Sometimes the 'table' is not a rectangular table with grid-like connectivity, per se. Rather, it may have a more complicated structure, such as a tree, or a structure specific to the problem domain (e.g. cities within flying distance on a map), or even a trellis diagram, which, while grid-like, does not have an up-down-left-right connectivity structure, etc. For example, user3290797 linked a dynamic programming example of finding the maximum independent set in a tree, which corresponds to filling in the blanks in a tree.
(At its most general, in a "dynamic programming" paradigm, I would say the programmer considers the whole tree, then writes an algorithm that implements a strategy for evaluating subproblems which can optimize whatever properties you want (usually a combination of time-complexity and space-complexity). Your strategy must start somewhere, with some particular subproblem, and perhaps may adapt itself based on the results of those evaluations. In the general sense of "dynamic programming", you might try to cache these subproblems, and more generally, try to avoid revisiting subproblems, with a subtle distinction perhaps being the case of graphs in various data structures. Very often, these data structures are at their core like arrays or tables. Solutions to subproblems can be thrown away if we don't need them anymore.)
[Previously, this answer made a statement about the top-down vs bottom-up terminology; there are clearly two main approaches called Memoization and Tabulation that may be in bijection with those terms (though not entirely). The general term most people use is still "Dynamic Programming" and some people say "Memoization" to refer to that particular subtype of "Dynamic Programming." This answer declines to say which is top-down and bottom-up until the community can find proper references in academic papers. Ultimately, it is important to understand the distinction rather than the terminology.]
Pros and cons
Ease of coding
Memoization is very easy to code (you can generally* write a "memoizer" annotation or wrapper function that automatically does it for you), and should be your first line of approach. The downside of tabulation is that you have to come up with an ordering.
*(this is actually only easy if you are writing the function yourself, and/or coding in an impure/non-functional programming language... for example if someone already wrote a precompiled fib function, it necessarily makes recursive calls to itself, and you can't magically memoize the function without ensuring those recursive calls call your new memoized function (and not the original unmemoized function))
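As a concrete sketch of that "memoizer wrapper" idea (an illustration in Java, written under the caveat above, not a library API):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class Memoizer {
    // Wrap any deterministic single-argument function in a cache.
    static <A, R> Function<A, R> memoize(Function<A, R> f) {
        Map<A, R> cache = new HashMap<>();
        return a -> {
            R r = cache.get(a);
            if (r == null) { // not computed yet for this input
                r = f.apply(a);
                cache.put(a, r);
            }
            return r;
        };
    }

    // Usage: per the caveat above, the recursion must go through the
    // memoized reference (Memoizer.fib), not an unmemoized original.
    static Function<Integer, Long> fib = memoize(n ->
            n < 2 ? (long) n
                  : Memoizer.fib.apply(n - 1) + Memoizer.fib.apply(n - 2));
}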
Recursiveness
Note that both top-down and bottom-up can be implemented with recursion or iterative table-filling, though it may not be natural.
Practical concerns
With memoization, if the tree is very deep (e.g. fib(10^6)), you will run out of stack space, because each delayed computation must be put on the stack, and you will have 10^6 of them.
Optimality
Either approach may not be time-optimal if the order in which you happen (or try) to visit subproblems is not optimal, specifically if there is more than one way to calculate a subproblem (normally caching would resolve this, but it's theoretically possible that caching might not in some exotic cases). Memoization will usually add your time-complexity onto your space-complexity (e.g. with tabulation you have more liberty to throw away calculations: using tabulation with Fib lets you use O(1) space, but memoization with Fib uses O(N) stack space).
Advanced optimizations
If you are doing an extremely complicated problem, you might have no choice but to do tabulation (or at least take a more active role in steering the memoization where you want it to go). Also, if you are in a situation where optimization is absolutely critical, tabulation will allow you to do optimizations which memoization would not otherwise let you do in a sane way. In my humble opinion, in normal software engineering, neither of these two cases ever comes up, so I would just use memoization ("a function which caches its answers") unless something (such as stack space) makes tabulation necessary... though technically, to avoid a stack blowout you can 1) increase the stack size limit in languages which allow it, or 2) eat a constant factor of extra work to virtualize your stack (ick), or 3) program in continuation-passing style, which in effect also virtualizes your stack (not sure what the complexity of this is, but basically you will effectively take the deferred call chain from the stack of size N and de-facto stick it in N successively nested thunk functions... though in some languages without tail-call optimization you may have to trampoline things to avoid a stack blowout).
More complicated examples
Here we list examples of particular interest, that are not just general DP problems, but interestingly distinguish memoization and tabulation. For example, one formulation might be much easier than the other, or there may be an optimization which basically requires tabulation:
the algorithm to calculate edit-distance[4], interesting as a non-trivial example of a two-dimensional table-filling algorithm
Top down and bottom up DP are two different ways of solving the same problems. Consider a memoized (top down) vs dynamic (bottom up) programming solution to computing fibonacci numbers.
fib_cache = {}

def memo_fib(n):
    if n == 0 or n == 1:
        return 1
    if n in fib_cache:
        return fib_cache[n]
    ret = memo_fib(n - 1) + memo_fib(n - 2)
    fib_cache[n] = ret
    return ret

def dp_fib(n):
    partial_answers = [1, 1]
    while len(partial_answers) <= n:
        partial_answers.append(partial_answers[-1] + partial_answers[-2])
    return partial_answers[n]

print(memo_fib(5), dp_fib(5))
I personally find memoization much more natural. You can take a recursive function and memoize it by a mechanical process (first lookup answer in cache and return it if possible, otherwise compute it recursively and then before returning, you save the calculation in the cache for future use), whereas doing bottom up dynamic programming requires you to encode an order in which solutions are calculated, such that no "big problem" is computed before the smaller problem that it depends on.
A key feature of dynamic programming is the presence of overlapping subproblems. That is, the problem that you are trying to solve can be broken into subproblems, and many of those subproblems share subsubproblems. It is like "Divide and conquer", but you end up doing the same thing many, many times. An example that I have used since 2003 when teaching or explaining these matters: you can compute Fibonacci numbers recursively.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
Use your favorite language and try running it for fib(50). It will take a very, very long time. Roughly as much time as fib(50) itself! However, a lot of unnecessary work is being done. fib(50) will call fib(49) and fib(48), but then both of those will end up calling fib(47), even though the value is the same. In fact, fib(47) will be computed three times: by a direct call from fib(49), by a direct call from fib(48), and also by a direct call from another fib(48), the one that was spawned by the computation of fib(49)... So you see, we have overlapping subproblems.
Great news: there is no need to compute the same value many times. Once you compute it once, cache the result, and the next time use the cached value! This is the essence of dynamic programming. You can call it "top-down", "memoization", or whatever else you want. This approach is very intuitive and very easy to implement. Just write a recursive solution first, test it on small tests, add memoization (caching of already computed values), and --- bingo! --- you are done.
Usually you can also write an equivalent iterative program that works from the bottom up, without recursion. In this case this would be the more natural approach: loop from 1 to 50 computing all the Fibonacci numbers as you go.
fib = [0, 1]
for i in range(48):
    fib.append(fib[i] + fib[i + 1])
In any interesting scenario the bottom-up solution is usually more difficult to understand. However, once you do understand it, you usually get a much clearer big picture of how the algorithm works. In practice, when solving nontrivial problems, I recommend first writing the top-down approach and testing it on small examples. Then write the bottom-up solution and compare the two to make sure you are getting the same thing. Ideally, compare the two solutions automatically. Write a small routine that generates lots of tests, ideally all small tests up to a certain size, and validate that both solutions give the same result. After that, use the bottom-up solution in production, but keep the top-down code, commented out. This will make it easier for other developers to understand what it is that you are doing: bottom-up code can be quite incomprehensible, even if you wrote it yourself and know exactly what you are doing.
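Such a cross-check can be as small as the following sketch (in Java, with memoFib and dpFib as placeholders for your own two hypothetical implementations):

// Validate the top-down and bottom-up solutions against each other
// on all small inputs; memoFib and dpFib are assumed placeholders.
for (int n = 0; n < 30; n++) {
    if (memoFib(n) != dpFib(n)) {
        throw new AssertionError("solutions disagree at n = " + n);
    }
}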
In many applications the bottom-up approach is slightly faster because of the overhead of recursive calls. Stack overflow can also be an issue in certain problems, and note that this can very much depend on the input data. In some cases you may not be able to write a test causing a stack overflow if you don't understand dynamic programming well enough, but some day this may still happen.
Now, there are problems where the top-down approach is the only feasible solution because the problem space is so big that it is not possible to solve all subproblems. However, the "caching" still works in reasonable time because your input only needs a fraction of the subproblems to be solved --- but it is too tricky to explicitly define which subproblems you need to solve, and hence to write a bottom-up solution. On the other hand, there are situations when you know you will need to solve all subproblems. In this case, go ahead and use bottom-up.
I would personally use top-down for paragraph optimization, a.k.a. the word wrap optimization problem (look up the Knuth-Plass line-breaking algorithm; at least TeX uses it, and some software by Adobe Systems uses a similar approach). I would use bottom-up for the Fast Fourier Transform.
Let's take the Fibonacci series as an example:
1, 1, 2, 3, 5, 8, 13, 21, ...
First number: 1
Second number: 1
Third number: 2
Another way to put it:
Bottom (first) number: 1
Top (eighth) number in the given sequence: 21
In the case of the first five Fibonacci numbers:
Bottom (first) number: 1
Top (fifth) number: 5
Now let's take a look at a recursive Fibonacci algorithm as an example:
public int recursive(int n) {
    if ((n == 1) || (n == 2)) {
        return 1;
    } else {
        return recursive(n - 1) + recursive(n - 2);
    }
}
Now if we execute this program with the following call:
recursive(5);
If we look closely at the algorithm, in order to generate the fifth number it requires the 3rd and 4th numbers. So the recursion actually starts from the top (5) and then goes all the way down to the bottom/lower numbers. This is the top-down approach.
To avoid doing the same calculation multiple times we use dynamic programming techniques. We store previously computed values and reuse them. This technique is called memoization. There is more to dynamic programming than memoization, but we don't need it to discuss the current problem.
Top-Down
Let's rewrite our original algorithm and add memoization:
public int memoized(int n, int[] memo) {
    if (n <= 2) {
        return 1;
    } else if (memo[n] != -1) {
        return memo[n]; // reuse the previously computed value
    } else {
        memo[n] = memoized(n - 1, memo) + memoized(n - 2, memo);
    }
    return memo[n];
}
And we execute this method as follows (Arrays is java.util.Arrays):
int n = 5;
int[] memo = new int[n + 1];
Arrays.fill(memo, -1); // -1 marks "not yet computed"
memoized(n, memo);
This solution is still top-down, as the algorithm starts from the top value and recurses down to the bottom at each step to get our top value.
Bottom-Up
But the question is: can we start from the bottom, say from the first Fibonacci number, and walk our way up? Let's rewrite it using this technique:
public int dp(int n) {
    if (n <= 2) {
        return 1; // guard: output[2] below requires n >= 2
    }
    int[] output = new int[n + 1];
    output[1] = 1;
    output[2] = 1;
    for (int i = 3; i <= n; i++) {
        output[i] = output[i - 1] + output[i - 2];
    }
    return output[n];
}
Now if we look at this algorithm, it actually starts from the lower values and goes up to the top. If I need the 5th Fibonacci number, I actually calculate the 1st, then the 2nd, then the 3rd, all the way up to the 5th number. This technique is called bottom-up.
The last two algorithms fulfil the requirements of dynamic programming. One is top-down and the other is bottom-up. Both algorithms have similar space and time complexity.
Dynamic programming problems can be solved using either bottom-up or top-down approaches.
Generally, the bottom-up approach uses the tabulation technique, while the top-down approach uses recursion (with memoization).
But you can also have bottom-up and top-down approaches using recursion, as shown below.
Bottom-up: Start with the base condition and pass the value calculated so far recursively. Generally, these are tail recursions.
int n = 5;
fibBottomUp(1, 1, 2, n);

private int fibBottomUp(int i, int j, int count, int n) {
    if (count > n) return 1;
    if (count == n) return i + j;
    return fibBottomUp(j, i + j, count + 1, n);
}
Top-Down: Start with the final condition and recursively get the result of its sub-problems.
int n = 5;
fibTopDown(n);

private int fibTopDown(int n) {
    if (n <= 1) return 1;
    return fibTopDown(n - 1) + fibTopDown(n - 2);
}
Simply put, the top-down approach uses recursion, calling into sub-problems again and again, whereas the bottom-up approach uses a single pass without any recursive calls, and hence is often more efficient.
The following is a DP-based solution for the Edit Distance problem, which is bottom-up (it fills the table starting from the base cases). I hope it will also help in understanding the world of dynamic programming:
public int minDistance(String word1, String word2) { // Standard dynamic programming puzzle.
    int m = word2.length();
    int n = word1.length();
    if (m == 0) // Cannot miss the corner cases!
        return n;
    if (n == 0)
        return m;
    int[][] DP = new int[n + 1][m + 1];
    for (int j = 1; j <= m; j++) {
        DP[0][j] = j;
    }
    for (int i = 1; i <= n; i++) {
        DP[i][0] = i;
    }
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            if (word1.charAt(i - 1) == word2.charAt(j - 1))
                DP[i][j] = DP[i - 1][j - 1];
            else
                DP[i][j] = Math.min(Math.min(DP[i - 1][j], DP[i][j - 1]), DP[i - 1][j - 1]) + 1; // Main idea is this.
        }
    }
    return DP[n][m];
}
You can think of its recursive implementation at your home. It's quite good and challenging if you haven't solved something like this before.
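If you'd like to check your attempt, here is one possible top-down counterpart (my sketch, not code from this answer): memoize the recursion on the pair of prefix lengths (i, j).

public int minDistanceTopDown(String word1, String word2) {
    Integer[][] memo = new Integer[word1.length() + 1][word2.length() + 1];
    return solve(word1, word2, word1.length(), word2.length(), memo);
}

private int solve(String w1, String w2, int i, int j, Integer[][] memo) {
    if (i == 0) return j; // insert the remaining j characters
    if (j == 0) return i; // delete the remaining i characters
    if (memo[i][j] != null) return memo[i][j];
    int result;
    if (w1.charAt(i - 1) == w2.charAt(j - 1)) {
        result = solve(w1, w2, i - 1, j - 1, memo);
    } else {
        result = 1 + Math.min(solve(w1, w2, i - 1, j - 1, memo),      // replace
                     Math.min(solve(w1, w2, i - 1, j, memo),          // delete
                              solve(w1, w2, i, j - 1, memo)));        // insert
    }
    return memo[i][j] = result;
}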
Nothing to be confused about... you usually learn a language in a bottom-up manner (from basics to more complicated things), and often make your project in a top-down manner (from the overall goal and structure of the code down to particular pieces of implementation).

Alpha-beta tree search without recursion

I'd like to see an implementation of an alpha-beta search (negamax to be more precise) without recursion. I know the basic idea - to use one or more stacks to keep track of the levels, but having a real code would spare me a lot of time.
Having it in Java, C# or Javascript would be perfect, but C/C++ is fine.
Here's the (simplified) recursive code:
function search(crtDepth, alpha, beta)
{
    if (crtDepth == 0)
        return eval(board);

    var moves = generateMoves(board);
    var crtMove;
    var score;
    var i = 0; // was uninitialized in the original, so the loop never ran

    while (i < moves.length)
    {
        crtMove = moves.moveList[i++];
        doMove(board, crtMove);
        score = -search(crtDepth - 1, -beta, -alpha);
        undoMove(board, crtMove);

        if (score > alpha)
        {
            if (score >= beta)
                return beta;
            alpha = score;
        }
    }
    return alpha;
}

search(4, -200000, 200000);
Knuth and Moore published an iterative alpha-beta routine in 1975 using an ad-hoc Algol language.
An Analysis of Alpha Beta Pruning (Page 301)
Also in Chapter 9 of "Selected Papers on Analysis of Algorithms"
It doesn't look very easy to convert into C#, but it might help someone who wants to do it for the pure joy of optimization.
I'm very new to chess programming, so it's beyond my abilities. Plus, my biggest performance gain came when I switched from "copy-make" to "make-unmake". I'm using XNA, so getting my GC latency down to almost 0 fixed all my performance issues; now it runs faster on my 360 than it does on my PC, so this optimization seems too difficult to attempt for my needs.
Also see Recursion to Iteration
For a more recent bit of code, I wrote a non-recursive Negamax routine as an option in the EasyAI python library. The specific source code is at:
https://github.com/Zulko/easyAI/blob/master/easyAI/AI/NonRecursiveNegamax.py
It uses a simple loop with a fixed array of objects (size determined by target depth) to move up and down the tree in an ordered fashion. For the particular project I was using it on, it was six times faster than the recursive version. But I'm sure each game would respond differently.
There is no way to deny that this is some dense and complex code and conversion to C/Java/C# will be ... challenging. It is pretty much nothing but border cases. :)
If you convert it to C/Java/C#, I would love to see the results. Place a link in the comments?
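Since the question asks for Java, C#, or JavaScript, here is a hedged Java sketch of the explicit-stack idea: it mirrors the question's hypothetical helpers (generateMoves, doMove, undoMove, eval) as abstract methods, with Board and Move as stand-in types. It is an untested outline of the technique, not a drop-in engine.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

abstract class IterativeNegamax<Board, Move> {

    // Stand-ins for the question's helpers; supply your own engine's versions.
    abstract int eval(Board board);
    abstract List<Move> generateMoves(Board board);
    abstract void doMove(Board board, Move move);
    abstract void undoMove(Board board, Move move);

    // One frame per would-be recursive call of search(depth, alpha, beta).
    private final class Frame {
        int depth, alpha, beta;
        List<Move> moves;       // generated lazily on first visit
        int next = 0;           // index of the next move to try
        Move applied;           // move currently made on the board, if any
        boolean cutoff = false; // set once score >= beta

        Frame(int depth, int alpha, int beta) {
            this.depth = depth;
            this.alpha = alpha;
            this.beta = beta;
        }
    }

    int search(Board board, int depth, int alpha, int beta) {
        Deque<Frame> stack = new ArrayDeque<>();
        stack.push(new Frame(depth, alpha, beta));
        int result = 0; // value "returned" by the most recently popped frame

        while (!stack.isEmpty()) {
            Frame f = stack.peek();

            if (f.applied != null) {
                // A child frame just returned: undo its move, fold in its score.
                undoMove(board, f.applied);
                f.applied = null;
                int score = -result;
                if (score > f.alpha) {
                    if (score >= f.beta) {
                        f.cutoff = true; // beta cutoff: stop trying moves
                    } else {
                        f.alpha = score;
                    }
                }
            }

            if (f.depth == 0) { // leaf: evaluate and "return"
                result = eval(board);
                stack.pop();
                continue;
            }
            if (f.moves == null) {
                f.moves = generateMoves(board);
            }
            if (f.cutoff) { // mirrors "return beta" in the recursive version
                result = f.beta;
                stack.pop();
                continue;
            }
            if (f.next >= f.moves.size()) { // all moves tried: "return alpha"
                result = f.alpha;
                stack.pop();
                continue;
            }

            // Descend: make the next move and push the child "call".
            Move move = f.moves.get(f.next++);
            doMove(board, move);
            f.applied = move;
            stack.push(new Frame(f.depth - 1, -f.beta, -f.alpha));
        }
        return result;
    }
}

Each Frame plays the role of one recursive activation record; pushing replaces the recursive call and popping replaces the return, so the search should visit the same nodes as the recursive pseudocode above.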
