What is the difference between bottom-up and top-down? - dynamic-programming
The bottom-up approach (to dynamic programming) consists of first looking at the "smaller" subproblems, and then solving the larger subproblems using the solutions to the smaller problems.
The top-down approach consists of solving the problem in a "natural manner" and checking whether you have already calculated the solution to each subproblem.
I'm a little confused. What is the difference between these two?
rev4: A very eloquent comment by user Sammaron noted that, perhaps, this answer previously confused top-down and bottom-up. While originally this answer (rev3) and other answers said that "bottom-up is memoization" ("assume the subproblems"), it may be the inverse (that is, "top-down" may be "assume the subproblems" and "bottom-up" may be "compose the subproblems"). Previously, I had read of memoization as being a different kind of dynamic programming, as opposed to a subtype of dynamic programming, and I was quoting that viewpoint despite not subscribing to it. I have rewritten this answer to be agnostic of the terminology until proper references can be found in the literature. I have also converted this answer to a community wiki. Please prefer academic sources. List of references: {Web: 1,2} {Literature: 5}
Recap
Dynamic programming is all about ordering your computations in a way that avoids recalculating duplicate work. You have a main problem (the root of your tree of subproblems), and subproblems (subtrees). The subproblems typically repeat and overlap.
For example, consider your favorite example of Fibonacci. This is the full tree of subproblems, if we did a naive recursive call:
TOP of the tree
                          fib(4)
             fib(3)................... + fib(2)
      fib(2)......... + fib(1)     fib(1)......... + fib(0)
   fib(1) + fib(0)     fib(1)      fib(1)            fib(0)
BOTTOM of the tree
(In some other rare problems, this tree could be infinite in some branches, representing non-termination, and thus the bottom of the tree may be infinitely large. Furthermore, in some problems you might not know what the full tree looks like ahead of time. Thus, you might need a strategy/algorithm to decide which subproblems to reveal.)
Memoization, Tabulation
There are at least two main techniques of dynamic programming which are not mutually exclusive:
Memoization - This is a laissez-faire approach: You assume that you have already computed all subproblems and that you have no idea what the optimal evaluation order is. Typically, you would perform a recursive call (or some iterative equivalent) from the root, and either hope you will get close to the optimal evaluation order, or obtain a proof that will help you arrive at the optimal evaluation order. You would ensure that the recursive call never recomputes a subproblem because you cache the results, and thus duplicate sub-trees are not recomputed.
example: If you are calculating the Fibonacci sequence fib(100), you would just call this, and it would call fib(100)=fib(99)+fib(98), which would call fib(99)=fib(98)+fib(97), ...etc..., which would call fib(2)=fib(1)+fib(0)=1+0=1. Then it would finally resolve fib(3)=fib(2)+fib(1), but it doesn't need to recalculate fib(2), because we cached it.
This starts at the top of the tree and evaluates the subproblems from the leaves/subtrees back up towards the root.
Tabulation - You can also think of dynamic programming as a "table-filling" algorithm (though usually multidimensional, this 'table' may have non-Euclidean geometry in very rare cases*). This is like memoization but more active, and involves one additional step: You must pick, ahead of time, the exact order in which you will do your computations. This should not imply that the order must be static, but that you have much more flexibility than memoization.
example: If you are performing fibonacci, you might choose to calculate the numbers in this order: fib(2),fib(3),fib(4)... caching every value so you can compute the next ones more easily. You can also think of it as filling up a table (another form of caching).
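For what it's worth, a minimal Python sketch of exactly that order (not from the original answer; the name fib_table is just illustrative):
def fib_table(n):
    # Pick the evaluation order ahead of time: smallest subproblems first.
    table = [0] * max(n + 1, 2)   # the "table" we fill in (another form of caching)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_table(10))  # 55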
I personally do not hear the word 'tabulation' a lot, but it's a very decent term. Some people consider this "dynamic programming".
Before running the algorithm, the programmer considers the whole tree, then writes an algorithm to evaluate the subproblems in a particular order towards the root, generally filling in a table.
*footnote: Sometimes the 'table' is not a rectangular table with grid-like connectivity, per se. Rather, it may have a more complicated structure, such as a tree, or a structure specific to the problem domain (e.g. cities within flying distance on a map), or even a trellis diagram, which, while grid-like, does not have an up-down-left-right connectivity structure, etc. For example, user3290797 linked a dynamic programming example of finding the maximum independent set in a tree, which corresponds to filling in the blanks in a tree.
(At its most general, in a "dynamic programming" paradigm, I would say the programmer considers the whole tree, then writes an algorithm that implements a strategy for evaluating subproblems which can optimize whatever properties you want (usually a combination of time-complexity and space-complexity). Your strategy must start somewhere, with some particular subproblem, and perhaps may adapt itself based on the results of those evaluations. In the general sense of "dynamic programming", you might try to cache these subproblems, and more generally, try to avoid revisiting subproblems, with a subtle distinction perhaps being the case of graphs in various data structures. Very often, these data structures are at their core like arrays or tables. Solutions to subproblems can be thrown away if we don't need them anymore.)
[Previously, this answer made a statement about the top-down vs bottom-up terminology; there are clearly two main approaches called Memoization and Tabulation that may be in bijection with those terms (though not entirely). The general term most people use is still "Dynamic Programming" and some people say "Memoization" to refer to that particular subtype of "Dynamic Programming." This answer declines to say which is top-down and bottom-up until the community can find proper references in academic papers. Ultimately, it is important to understand the distinction rather than the terminology.]
Pros and cons
Ease of coding
Memoization is very easy to code (you can generally* write a "memoizer" annotation or wrapper function that automatically does it for you), and should be your first line of approach. The downside of tabulation is that you have to come up with an ordering.
*(this is actually only easy if you are writing the function yourself, and/or coding in an impure/non-functional programming language... for example if someone already wrote a precompiled fib function, it necessarily makes recursive calls to itself, and you can't magically memoize the function without ensuring those recursive calls call your new memoized function (and not the original unmemoized function))
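For illustration, here is a rough sketch of such a memoizer wrapper in Python (not from the original answer; in practice Python's functools.lru_cache does the same job):
import functools

def memoize(f):
    cache = {}
    @functools.wraps(f)
    def wrapped(*args):
        if args not in cache:      # compute each subproblem at most once
            cache[args] = f(*args)
        return cache[args]
    return wrapped

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # fast: each fib(k) is computed only once
Note that the decorator rebinds the name fib, so the recursive calls really do go through the cache, which is exactly the caveat in the footnote above.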
Recursiveness
Note that both top-down and bottom-up can be implemented with recursion or iterative table-filling, though it may not be natural.
Practical concerns
With memoization, if the tree is very deep (e.g. fib(10^6)), you will run out of stack space, because each delayed computation must be put on the stack, and you will have 10^6 of them.
Optimality
Either approach may not be time-optimal if the order in which you happen (or try) to visit subproblems is not optimal, specifically if there is more than one way to calculate a subproblem (normally caching would resolve this, but it's theoretically possible that caching might not in some exotic cases). Memoization will usually add your time-complexity onto your space-complexity (e.g. with tabulation you have more liberty to throw away calculations; using tabulation with Fib lets you use O(1) space, but memoization with Fib uses O(N) stack space).
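To make the space point concrete, here is a minimal Python sketch (my own illustration, not from the answer) of a tabulation that throws calculations away and keeps only O(1) extra state:
def fib_constant_space(n):
    # Tabulation lets us discard subproblem results we no longer need:
    # only the last two values are kept, so the extra space is O(1).
    a, b = 0, 1          # fib(0), fib(1)
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_constant_space(10))  # 55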
Advanced optimizations
If you are doing an extremely complicated problem, you might have no choice but to do tabulation (or at least take a more active role in steering the memoization where you want it to go). Also, if you are in a situation where optimization is absolutely critical, tabulation will allow you to do optimizations which memoization would not otherwise let you do in a sane way. In my humble opinion, in normal software engineering, neither of these two cases ever comes up, so I would just use memoization ("a function which caches its answers") unless something (such as stack space) makes tabulation necessary... though technically to avoid a stack blowout you can 1) increase the stack size limit in languages which allow it, or 2) eat a constant factor of extra work to virtualize your stack (ick), or 3) program in continuation-passing style, which in effect also virtualizes your stack (not sure of the complexity of this, but basically you will take the deferred call chain from the stack of size N and de-facto stick it in N successively nested thunk functions... though in some languages without tail-call optimization you may have to trampoline things to avoid a stack blowout).
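As one rough illustration of option 2 (virtualizing the stack), here is a hedged Python sketch that keeps the top-down dependency structure but drives it with an explicit stack instead of recursion:
def fib_memo_explicit_stack(n):
    # Same top-down dependencies as the recursive version, but the
    # "call stack" is an ordinary list, so a very deep chain of
    # subproblems cannot blow the language's recursion limit.
    memo = {0: 0, 1: 1}
    stack = [n]
    while stack:
        k = stack[-1]
        if k in memo:
            stack.pop()
        elif k - 1 in memo and k - 2 in memo:
            memo[k] = memo[k - 1] + memo[k - 2]
            stack.pop()
        else:
            # defer k until its dependencies are available
            if k - 1 not in memo:
                stack.append(k - 1)
            if k - 2 not in memo:
                stack.append(k - 2)
    return memo[n]

print(fib_memo_explicit_stack(100))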
More complicated examples
Here we list examples of particular interest: problems that are not just general DP problems, but that interestingly distinguish memoization and tabulation. For example, one formulation might be much easier than the other, or there may be an optimization which basically requires tabulation:
the algorithm to calculate edit-distance[4], interesting as a non-trivial example of a two-dimensional table-filling algorithm
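For reference, a short Python sketch of that two-dimensional table-filling (the standard Levenshtein recurrence; a Java version also appears in a later answer below):
def edit_distance(a, b):
    # dp[i][j] = edit distance between a[:i] and b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                 # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                 # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # deletion
                                   dp[i][j - 1],      # insertion
                                   dp[i - 1][j - 1])  # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3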
Top down and bottom up DP are two different ways of solving the same problems. Consider a memoized (top down) vs dynamic (bottom up) programming solution to computing fibonacci numbers.
# Top-down (memoized) solution: recurse from n, caching each result.
fib_cache = {}

def memo_fib(n):
    if n == 0 or n == 1:
        return 1
    if n in fib_cache:
        return fib_cache[n]
    ret = memo_fib(n - 1) + memo_fib(n - 2)
    fib_cache[n] = ret
    return ret

# Bottom-up (dynamic programming) solution: build the table from the base cases up.
def dp_fib(n):
    partial_answers = [1, 1]
    while len(partial_answers) <= n:
        partial_answers.append(partial_answers[-1] + partial_answers[-2])
    return partial_answers[n]

print(memo_fib(5), dp_fib(5))
I personally find memoization much more natural. You can take a recursive function and memoize it by a mechanical process (first lookup answer in cache and return it if possible, otherwise compute it recursively and then before returning, you save the calculation in the cache for future use), whereas doing bottom up dynamic programming requires you to encode an order in which solutions are calculated, such that no "big problem" is computed before the smaller problem that it depends on.
A key feature of dynamic programming is the presence of overlapping subproblems. That is, the problem that you are trying to solve can be broken into subproblems, and many of those subproblems share subsubproblems. It is like "Divide and conquer", but you end up doing the same thing many, many times. An example that I have used since 2003 when teaching or explaining these matters: you can compute Fibonacci numbers recursively.
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
Use your favorite language and try running it for fib(50). It will take a very, very long time. Roughly as much time as fib(50) itself! However, a lot of unnecessary work is being done. fib(50) will call fib(49) and fib(48), but then both of those will end up calling fib(47), even though the value is the same. In fact, fib(47) will be computed three times: by a direct call from fib(49), by a direct call from fib(48), and also by a direct call from another fib(48), the one that was spawned by the computation of fib(49)... So you see, we have overlapping subproblems.
Great news: there is no need to compute the same value many times. Once you compute it once, cache the result, and the next time use the cached value! This is the essence of dynamic programming. You can call it "top-down", "memoization", or whatever else you want. This approach is very intuitive and very easy to implement. Just write a recursive solution first, test it on small tests, add memoization (caching of already computed values), and --- bingo! --- you are done.
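For example (a small sketch, not part of the original answer), in Python the "add memoization" step can be as small as one decorator on top of the recursive solution above:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # returns immediately instead of taking a very, very long time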
Usually you can also write an equivalent iterative program that works from the bottom up, without recursion. In this case this would be the more natural approach: loop from 1 to 50 computing all the Fibonacci numbers as you go.
fib = [0] * 51       # table of answers for fib(0) .. fib(50)
fib[0] = 0
fib[1] = 1
for i in range(49):  # fills fib[2] .. fib[50]
    fib[i + 2] = fib[i] + fib[i + 1]
In any interesting scenario the bottom-up solution is usually more difficult to understand. However, once you do understand it, you usually get a much clearer big picture of how the algorithm works. In practice, when solving nontrivial problems, I recommend first writing the top-down approach and testing it on small examples. Then write the bottom-up solution and compare the two to make sure you are getting the same thing. Ideally, compare the two solutions automatically: write a small routine that generates lots of tests, ideally all small tests up to a certain size, and validate that both solutions give the same result. After that, use the bottom-up solution in production, but keep the top-down code, commented out. This will make it easier for other developers to understand what it is that you are doing: bottom-up code can be quite incomprehensible, even if you wrote it and even if you know exactly what you are doing.
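A tiny comparison harness along those lines might look like the following hedged Python sketch (fib_top_down and fib_bottom_up stand in for whatever two implementations you wrote):
def fib_top_down(n, memo=None):
    # memoized recursion
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    # iterative, keeps only the last two values
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# all small tests up to a certain size
for n in range(100):
    assert fib_top_down(n) == fib_bottom_up(n), n
print("both solutions agree on all small inputs")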
In many applications the bottom-up approach is slightly faster because of the overhead of recursive calls. Stack overflow can also be an issue in certain problems, and note that this can very much depend on the input data. In some cases you may not be able to write a test causing a stack overflow if you don't understand dynamic programming well enough, but some day this may still happen.
Now, there are problems where the top-down approach is the only feasible solution because the problem space is so big that it is not possible to solve all subproblems. However, the "caching" still works in reasonable time because your input only needs a fraction of the subproblems to be solved --- but it is too tricky to explicitly define which subproblems you need to solve, and hence too tricky to write a bottom-up solution. On the other hand, there are situations when you know you will need to solve all subproblems. In this case go on and use bottom-up.
I would personally use top-down for paragraph optimization, a.k.a. the word wrap optimization problem (look up the Knuth-Plass line-breaking algorithm; at least TeX uses it, and some software by Adobe Systems uses a similar approach). I would use bottom-up for the Fast Fourier Transform.
Let's take the Fibonacci series as an example:
1, 1, 2, 3, 5, 8, 13, 21, ...
First number: 1
Second number: 1
Third number: 2
Another way to put it:
Bottom (first) number: 1
Top (eighth) number in the given sequence: 21
In the case of the first five Fibonacci numbers:
Bottom (first) number: 1
Top (fifth) number: 5
Now let's take a look at a recursive Fibonacci algorithm as an example:
public int recursive(int n) {
    if ((n == 1) || (n == 2)) {
        return 1;
    } else {
        return recursive(n - 1) + recursive(n - 2);
    }
}
Now if we execute this program with the following command:
recursive(5);
If we look closely at the algorithm, in order to generate the fifth number it requires the 3rd and 4th numbers. So the recursion actually starts from the top (5) and then goes all the way down to the bottom/lower numbers. This approach is actually the top-down approach.
To avoid doing the same calculation multiple times we use dynamic programming techniques. We store previously computed values and reuse them. This technique is called memoization. There is more to dynamic programming than memoization, but we don't need it to discuss the current problem.
Top-Down
Let's rewrite our original algorithm and add memoization.
public int memoized(int n, int[] memo) {
    if (n <= 2) {
        return 1;
    } else if (memo[n] != -1) {
        return memo[n];
    } else {
        memo[n] = memoized(n - 1, memo) + memoized(n - 2, memo);
    }
    return memo[n];
}
And we execute this method like the following:
int n = 5;
int[] memo = new int[n + 1];
Arrays.fill(memo, -1);
memoized(n, memo);
This solution is still top-down, as the algorithm starts from the top value and goes down to the bottom at each step to get our top value.
Bottom-Up
But the question is: can we start from the bottom, from the first Fibonacci number, and then walk our way up? Let's rewrite it using this technique:
public int dp(int n) {
    if (n <= 2) {
        return 1; // guard so the writes below stay in bounds for small n
    }
    int[] output = new int[n + 1];
    output[1] = 1;
    output[2] = 1;
    for (int i = 3; i <= n; i++) {
        output[i] = output[i - 1] + output[i - 2];
    }
    return output[n];
}
Now if we look at this algorithm, it actually starts from lower values and then goes to the top. If I need the 5th Fibonacci number, I am actually calculating the 1st, then the 2nd, then the 3rd, all the way up to the 5th number. This technique is called the bottom-up technique.
The last two algorithms fulfill the dynamic programming requirements, but one is top-down and the other is bottom-up. Both algorithms have similar space and time complexity.
Dynamic Programming is often called Memoization!
1. Memoization is the top-down technique (start solving the given problem by breaking it down), and dynamic programming is the bottom-up technique (start solving from the trivial sub-problem, up towards the given problem).
2. DP finds the solution by starting from the base case(s) and working its way upward. DP solves all the sub-problems, because it works bottom-up, unlike memoization, which solves only the needed sub-problems.
DP has the potential to transform exponential-time brute-force solutions into polynomial-time algorithms.
DP may be much more efficient because it is iterative; on the contrary, memoization must pay the (often significant) overhead due to recursion.
To put it more simply, memoization uses the top-down approach to solve the problem, i.e. it begins with the core (main) problem, breaks it into sub-problems, and solves these sub-problems similarly. In this approach the same sub-problem can occur multiple times and consume more CPU cycles, hence increasing the time complexity, whereas in dynamic programming the same sub-problem will not be solved multiple times; the prior result will instead be reused to optimize the solution.
Dynamic programming problems can be solved using either bottom-up or top-down approaches.
Generally, the bottom-up approach uses the tabulation technique, while the top-down approach uses the recursion (with memoization) technique.
But you can also have bottom-up and top-down approaches using recursion as shown below.
Bottom-Up: Start with the base condition and recursively pass along the values calculated so far. Generally, these are tail recursions.
int n = 5;
fibBottomUp(1, 1, 2, n);
private int fibBottomUp(int i, int j, int count, int n) {
    if (count > n) return 1;      // handles n < 2
    if (count == n) return i + j; // i and j carry the two most recent values computed so far
    return fibBottomUp(j, i + j, count + 1, n);
}
Top-Down: Start with the final condition and recursively get the result of its sub-problems.
int n = 5;
fibTopDown(n);
private int fibTopDown(int n) {
    if (n <= 1) return 1;
    return fibTopDown(n - 1) + fibTopDown(n - 2);
}
Simply put, the top-down approach uses recursion, calling the sub-problems again and again, whereas the bottom-up approach uses a single loop without any recursive calls, and hence is more efficient.
Following is a DP-based solution for the Edit Distance problem, which is bottom-up (table-filling). I hope it will also help in understanding the world of dynamic programming:
public int minDistance(String word1, String word2) { // Standard dynamic programming puzzle.
    int m = word2.length();
    int n = word1.length();
    if (m == 0) // Cannot miss the corner cases!
        return n;
    if (n == 0)
        return m;
    int[][] DP = new int[n + 1][m + 1];
    for (int j = 1; j <= m; j++) {
        DP[0][j] = j;
    }
    for (int i = 1; i <= n; i++) {
        DP[i][0] = i;
    }
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            if (word1.charAt(i - 1) == word2.charAt(j - 1))
                DP[i][j] = DP[i - 1][j - 1];
            else
                DP[i][j] = Math.min(Math.min(DP[i - 1][j], DP[i][j - 1]), DP[i - 1][j - 1]) + 1; // Main idea is this.
        }
    }
    return DP[n][m];
}
You can think about its recursive implementation at home. It's quite good and challenging if you haven't solved something like this before.
Nothing to be confused about... you usually learn a language in a bottom-up manner (from the basics to more complicated things), and often build your project in a top-down manner (from the overall goal and structure of the code down to specific pieces of the implementation).
Related
Bellman Equation definition
I am trying to understand the Bellman equation and am facing some confusing moments. 1) In different sources I have met different definitions of the Bellman equation. Sometimes it is defined as the state-value function v(s) = R + y*V(s'), and sometimes as the action-value function q(s, a) = r + max(q(s', a')). Are both of these definitions correct? How was the Bellman equation introduced in the original paper?
The Bellman equation gives a definite form to dynamic programming solutions, and using it we can generalise solutions to optimisation problems which are recursive in nature and follow the optimal substructure property.
Optimal substructure, in simpler terms, means that the given problem can be broken down into smaller sub-problems which require the same kind of solution on smaller data. If an optimal solution to the smaller problem can be computed, then the given (larger) problem can also be computed.
Let's denote the problem solution for a given state S by the value V(S), where S is the state or the sub-problem. Let's denote the cost incurred by choosing action a(i) at state S by R; R will be a function f(S, a(i)), where a is the set of all possible actions that can be performed on state S.
V(S) = max{ f(S, a(i)) + y * V(S') }
where the max is taken by iterating over all possible i. y is a fixed constant that taxes the sub-problem to bigger-problem transition; for most problems y = 1, so you can ignore it for now.
So basically, at any given sub-problem S, V(S) gives us the most optimal solution by considering all the actions a(i) that can be performed and the next state S' that each action creates. If you think recursively and are used to such stuff, then it's easy to see why the above equation is correct.
I would suggest solving dynamic programming problems and looking at some standard problems and their solutions to get an idea of how those problems are broken down into smaller, similar problems and solved recursively. After that, the above equation will make more sense. You will also realise that the two equations you have written above are almost the same thing, just written in slightly different ways. Here is a list of more commonly known DP problems and their solutions.
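To make the recursion concrete, here is a small hedged Python sketch (a made-up toy state space, not taken from the answer above) that computes V(S) = max over actions of f(S, a) + y * V(S') by memoized recursion:
from functools import lru_cache

y = 1.0  # discount factor; y = 1 for most "pure" DP problems

# toy deterministic transitions: state -> list of (reward, next_state)
actions = {
    "A": [(1, "B"), (2, "C")],
    "B": [(3, "D")],
    "C": [(1, "D")],
    "D": [],  # terminal state
}

@lru_cache(maxsize=None)
def V(state):
    if not actions[state]:          # nothing left to do
        return 0.0
    return max(r + y * V(s2) for r, s2 in actions[state])

print(V("A"))  # 4.0, via A -> B -> D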
partial functions vs input verification
I really love using total functions. That said, sometimes I'm not sure what the best approach is for guaranteeing that. Let's say that I'm writing a function similar to chunksOf from the split package, where I want to split up a list into sublists of a given size. Now I'd really rather say that the input for the sublist size needs to be a positive int (so excluding 0). As I see it I have several options:
1) All-out: make a newtype for PositiveInt, hide the constructor, and only expose safe functions for creating a PositiveInt (perhaps returning a Maybe or some union of Positive | Negative | Zero or what have you). This seems like it could be a huge hassle.
2) What the split package does: just return an infinite list of size-0 sublists if the size <= 0. This seems like you risk bugs not getting caught, and worse: those bugs just infinitely hanging your program with no indication of what went wrong.
3) What most other languages do: error when the input is <= 0. I really prefer total functions though...
4) Return an Either or Maybe to cover the case that the input might have been <= 0. Similar to #1, it seems like using this could just be a hassle.
This seems similar to this post, but this has more to do with error conditions than just being as precise about types as possible. I'm looking for thoughts on how to decide what the best approach for a case like this is. I'm probably most inclined towards doing #1, and just dealing with the added overhead, but I'm concerned that I'll be kicking myself down the road. Is this a decision that needs to be made on a case-by-case basis, or is there a general strategy that consistently works best?
Time Complexity for index and drop of first item in Data.Sequence
I was recently working on an implementation of calculating a moving average from a stream of input, using Data.Sequence. I figured I could get the whole operation to be O(n) by using a deque. My first attempt was (in my opinion) a bit more straightforward to read, but not a true deque. It looked like:
let newsequence = (|>) sequence n
...
let dropFrontTotal = fromIntegral (newtotal - index newsequence 0)
let newsequence' = drop 1 newsequence
...
According to the hackage docs for Data.Sequence, index should take O(log(min(i,n-i))) while drop should also take O(log(min(i,n-i))). Here's my question: If I do drop 1 someSequence, doesn't this mean a time complexity of O(log(min(1, (length someSequence)))), which in this case means O(log(1))? If so, isn't O(log(1)) effectively constant? I had the same question for index someSequence 0: shouldn't that operation end up being O(log(0))? Ultimately, I had enough doubts about my understanding that I resorted to using Criterion to benchmark the two implementations to prove that the index/drop version is slower (and the amount it's slower by grows with the input). The informal results on my machine can be seen at the linked gist. I still don't really understand how to calculate time complexity for these operations, though, and I would appreciate any clarification anyone can provide.
What you suggest looks correct to me. As a minor caveat, remember that these are amortized complexity bounds, so a single operation could require more than constant time, but a long chain of operations will only require a constant times the length of the chain. If you use criterion to benchmark and "reset" the state at every computation, you might see non-constant time costs, because the "reset" is preventing the amortization. It really depends on how you perform the test. If you start from a sequence and perform a long chain of operations on that, it should be OK. If you repeat a single operation many times using the same operands, then it might not be OK. Further, I guess bounds such as O(log(...)) should actually be read as O(log(1 + ...)) -- you can't realistically have O(log(1)) = O(0) or, worse, O(log(0)) = O(-inf) as a complexity bound.
What is the difference between memoization and dynamic programming?
What is the difference between memoization and dynamic programming? I think dynamic programming is a subset of memoization. Is it right?
Relevant article on Programming.Guide: Dynamic programming vs memoization vs tabulation
What is the difference between memoization and dynamic programming?
Memoization is a term describing an optimization technique where you cache previously computed results, and return the cached result when the same computation is needed again.
Dynamic programming is a technique for solving problems of a recursive nature, iteratively, and is applicable when the computations of the subproblems overlap. Dynamic programming is typically implemented using tabulation, but can also be implemented using memoization. So as you can see, neither one is a "subset" of the other.
A reasonable follow-up question is: What is the difference between tabulation (the typical dynamic programming technique) and memoization?
When you solve a dynamic programming problem using tabulation, you solve the problem "bottom up", i.e., by solving all related sub-problems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the "top" / original problem is then computed.
If you use memoization to solve the problem, you do it by maintaining a map of already solved sub-problems. You do it "top down" in the sense that you solve the "top" problem first (which typically recurses down to solve the sub-problems).
A good slide from here (link is now dead, slide is still good though):
If all subproblems must be solved at least once, a bottom-up dynamic-programming algorithm usually outperforms a top-down memoized algorithm by a constant factor:
- No overhead for recursion and less overhead for maintaining the table
- There are some problems for which the regular pattern of table accesses in the dynamic-programming algorithm can be exploited to reduce the time or space requirements even further
If some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required.
Additional resources:
Wikipedia: Memoization, Dynamic Programming
Related SO Q/A: Memoization or Tabulation approach for Dynamic programming
Dynamic programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of the subproblems to avoid computing the same results again. http://www.geeksforgeeks.org/dynamic-programming-set-1/
Memoization is an easy method to track previously solved solutions (often implemented as a hash key-value pair, as opposed to tabulation, which is often based on arrays) so that they aren't recalculated when they are encountered again. It can be used in both bottom-up and top-down methods. See this discussion on memoization vs tabulation.
So dynamic programming is a method to solve certain classes of problems by solving recurrence relations/recursion and storing previously found solutions via either tabulation or memoization. Memoization is a method to keep track of solutions to previously solved problems, and can be used with any function that has unique deterministic solutions for a given set of inputs.
Both memoization and dynamic programming solve each individual subproblem only once. Memoization uses recursion and works top-down, whereas dynamic programming moves in the opposite direction, solving the problem bottom-up. Below is an interesting analogy:
Top-down - First you say: I will take over the world. How will you do that? You say: I will take over Asia first. How will you do that? I will take over India first. I will become the Chief Minister of Delhi, etc., etc.
Bottom-up - You say: I will become the CM of Delhi. Then I will take over India, then all other countries in Asia, and finally I will take over the world.
(1) Memoization and DP, conceptually, are really the same thing, because consider the definition of DP: "overlapping subproblems" and "optimal substructure". Memoization fully possesses these two.
(2) Memoization is DP with the risk of a stack overflow if the recursion is deep. Bottom-up DP does not have this risk.
(3) Memoization needs a hash table, so there is additional space and some lookup time.
So, to answer the question:
- Conceptually, (1) means they are the same thing.
- Taking (2) into account, if you really want, memoization is a subset of DP, in the sense that a problem solvable by memoization will be solvable by DP, but a problem solvable by DP might not be solvable by memoization (because it might overflow the stack).
- Taking (3) into account, they have minor differences in performance.
From Wikipedia:
Memoization: In computing, memoization is an optimization technique used primarily to speed up computer programs by having function calls avoid repeating the calculation of results for previously-processed inputs.
Dynamic programming: In mathematics and computer science, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems.
When breaking a problem into smaller/simpler subproblems, we often encounter the same subproblem more than once, so we use memoization to save the results of previous calculations and avoid repeating them. Dynamic programming often encounters situations where it makes sense to use memoization, but you can use either technique without necessarily using the other.
I would like to go with an example.
Problem: You are climbing a staircase. It takes n steps to reach the top. Each time you can climb either 1 or 2 steps. In how many distinct ways can you climb to the top?
Recursion with memoization
In this way we are pruning (a removal of excess material from a tree or shrub) the recursion tree with the help of the memo array, reducing the size of the recursion tree to n.
public class Solution {
    public int climbStairs(int n) {
        int memo[] = new int[n + 1];
        return climb_Stairs(0, n, memo);
    }
    public int climb_Stairs(int i, int n, int memo[]) {
        if (i > n) {
            return 0;
        }
        if (i == n) {
            return 1;
        }
        if (memo[i] > 0) {
            return memo[i];
        }
        memo[i] = climb_Stairs(i + 1, n, memo) + climb_Stairs(i + 2, n, memo);
        return memo[i];
    }
}
Dynamic programming
As we can see, this problem can be broken into subproblems, and it contains the optimal substructure property, i.e. its optimal solution can be constructed efficiently from optimal solutions of its subproblems, so we can use dynamic programming to solve this problem.
public class Solution {
    public int climbStairs(int n) {
        if (n == 1) {
            return 1;
        }
        int[] dp = new int[n + 1];
        dp[1] = 1;
        dp[2] = 2;
        for (int i = 3; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}
Examples taken from https://leetcode.com/problems/climbing-stairs/
Just think of two ways:
1. We break down the bigger problem into smaller sub-problems - the top-down approach.
2. We start from the smallest sub-problem and reach the bigger problem - the bottom-up approach.
In memoization we go with (1), where we save each function call in a cache and call back from there. It's a bit expensive as it involves recursive calls. In dynamic programming we go with (2), where we maintain a table, bottom-up, solving subproblems using the data saved in the table, commonly referred to as the dp-table.
Note:
Both are applicable to problems with overlapping sub-problems.
Memoization performs comparatively poorly compared to DP due to the overhead involved in recursive function calls. The asymptotic time-complexity remains the same.
There are some similarities between dynamic programming (DP) and memoization, and in most cases you can implement a dynamic programming process by memoization and vice versa. But they do have some differences, and you should check them out when deciding which approach to use:
Memoization is a top-down approach during which you decompose a big problem into smaller-sized subproblems with the same properties, and when the size is small enough you can easily solve it by brute force.
Dynamic programming is a bottom-up approach during which you first calculate the answers to small cases and then use them to construct the answers to big cases.
During coding, memoization is usually implemented by recursion, while dynamic programming does its calculation by iteration. So if you have carefully calculated the space and time complexity of your algorithm, a dynamic-programming-style implementation can offer you better performance.
There do exist situations where using memoization has advantages. Dynamic programming needs to calculate every subproblem because it doesn't know which ones will be useful in the future, but memoization calculates only the subproblems related to the original problem. Sometimes you may design a DP algorithm with a theoretically tremendous number of DP states, but by careful analysis you find that only an acceptable number of them will be used. In this situation it's preferable to use memoization to avoid a huge execution time.
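A hedged toy illustration of that last point (an invented recurrence, not from the answer): the state space below nominally contains every integer up to n, but the memoized recursion only ever touches a tiny fraction of them, so a top-down cache beats allocating an n-entry table:
from functools import lru_cache

@lru_cache(maxsize=None)
def best_exchange(n):
    # toy recurrence: a coin of value n can be exchanged for coins of
    # value n // 2, n // 3 and n // 4; keep whichever option is worth more
    if n == 0:
        return 0
    return max(n, best_exchange(n // 2) + best_exchange(n // 3) + best_exchange(n // 4))

print(best_exchange(10**9))        # fast: only the reachable states are ever computed
print(best_exchange.cache_info())  # far, far fewer cached entries than 10**9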
In dynamic programming, there is no overhead for recursion and less overhead for maintaining the table, and the regular pattern of table accesses may be used to reduce time or space requirements. In memoization, some subproblems do not need to be solved at all.
Here is a sample of memoization and DP for the Fibonacci number problem, written in Java. The dynamic programming version here does not involve recursion; as a result it is faster and can calculate higher values, because it is not limited by the execution stack.
public class Solution {
    public static long fibonacciMemoization(int i) {
        return fibonacciMemoization(i, new long[i + 1]);
    }

    public static long fibonacciMemoization(int i, long[] memo) {
        if (i <= 1) {
            return 1;
        }
        if (memo[i] != 0) {
            return memo[i];
        }
        long val = fibonacciMemoization(i - 1, memo) + fibonacciMemoization(i - 2, memo);
        memo[i] = val;
        return val;
    }

    public static long fibonacciDynamicProgramming(int i) {
        if (i <= 1) {
            return i;
        }
        long[] memo = new long[i + 1];
        memo[0] = 1;
        memo[1] = 1;
        memo[2] = 2;
        for (int j = 3; j <= i; j++) {
            memo[j] = memo[j - 1] + memo[j - 2];
        }
        return memo[i];
    }

    public static void main(String[] args) {
        System.out.println("Fibonacci with Dynamic Programming");
        System.out.println(fibonacciDynamicProgramming(10));
        System.out.println(fibonacciDynamicProgramming(1_000_000));
        System.out.println("Fibonacci with Memoization");
        System.out.println(fibonacciMemoization(10));
        System.out.println(fibonacciMemoization(1_000_000)); // StackOverflowError
    }
}
Dynamic programming is an optimization over a plain recursive algorithm, which considers all combinations of the input to provide the most suitable answer. That approach has one drawback: its huge time complexity. It can be made more efficient by the use of memoization, which stores every output of a subproblem and directly gives back the answer whenever the algorithm tries to solve that subproblem again. This can make the algorithm have polynomial time complexity.
Why is the 'if' statement considered evil?
I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolutely evil. His alternative was to create functions with predicates. Can you please explain to me why if is evil? I understand that you can write very ugly code by abusing if, but I don't believe that it's that bad.
The if statement is rarely considered as "evil" as goto or mutable global variables -- and even the latter are actually not universally and absolutely evil. I would suggest taking the claim as a bit hyperbolic.
It also largely depends on your programming language and environment. In languages which support pattern matching, you will have great tools for replacing if at your disposal. But if you're programming a low-level microcontroller in C, replacing ifs with function pointers will be a step in the wrong direction. So, I will mostly consider replacing ifs in OOP programming, because in functional languages, if is not idiomatic anyway, while in purely procedural languages you don't have many other options to begin with.
Nevertheless, conditional clauses sometimes result in code which is harder to manage. This does not only include the if statement, but even more commonly the switch statement, which usually includes more branches than a corresponding if would.
There are cases where it's perfectly reasonable to use an if
When you are writing utility methods, extensions or specific library functions, it's likely that you won't be able to avoid ifs (and you shouldn't). There isn't a better way to code this little function, nor make it more self-documented than it is:
// this is a good "if" use-case
int Min(int a, int b)
{
    if (a < b)
        return a;
    else
        return b;
}

// or, if you prefer the ternary operator
int Min(int a, int b)
{
    return (a < b) ? a : b;
}
Branching over a "type code" is a code smell
On the other hand, if you encounter code which tests for some sort of a type code, or tests if a variable is of a certain type, then this is most likely a good candidate for refactoring, namely replacing the conditional with polymorphism. The reason for this is that by allowing your callers to branch on a certain type code, you are creating a possibility to end up with numerous checks scattered all over your code, making extensions and maintenance much more complex. Polymorphism, on the other hand, allows you to bring this branching decision as close to the root of your program as possible. Consider:
// this is called branching on a "type code",
// and screams for refactoring
void RunVehicle(Vehicle vehicle)
{
    // how the hell do I even test this?
    if (vehicle.Type == CAR)
        Drive(vehicle);
    else if (vehicle.Type == PLANE)
        Fly(vehicle);
    else
        Sail(vehicle);
}
By placing common but type-specific (i.e. class-specific) functionality into separate classes and exposing it through a virtual method (or an interface), you allow the internal parts of your program to delegate this decision to someone higher in the call hierarchy (potentially at a single place in code), allowing much easier testing (mocking), extensibility and maintenance:
// adding a new vehicle is gonna be a piece of cake
interface IVehicle
{
    void Run();
}

// your method now doesn't care about which vehicle
// it got as a parameter
void RunVehicle(IVehicle vehicle)
{
    vehicle.Run();
}
And you can now easily test if your RunVehicle method works as it should:
// you can now create test (mock) implementations
// since you're passing it as an interface
var mock = new Mock<IVehicle>();

// run the client method
something.RunVehicle(mock.Object);

// check if Run() was invoked
mock.Verify(m => m.Run(), Times.Once());
Patterns which only differ in their if conditions can be reused
Regarding the argument about replacing if with a "predicate" in your question, Haines probably wanted to mention that sometimes similar patterns exist over your code, which differ only in their conditional expressions. Conditional expressions do emerge in conjunction with ifs, but the whole idea is to extract a repeating pattern into a separate method, leaving the expression as a parameter. This is what LINQ already does, usually resulting in cleaner code compared to an alternative foreach.
Consider these two very similar methods:
// average male age
public double AverageMaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Male)
        {
            sum += person.Age;
            count++;
        }
    }
    return sum / count; // not checking for zero div. for simplicity
}

// average female age
public double AverageFemaleAge(List<Person> people)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (person.Gender == Gender.Female) // <-- only the expression
        {                                   //     is different
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}
This indicates that you can extract the condition into a predicate, leaving you with a single method for these two cases (and many other future cases):
// average age for all people matched by the predicate
public double AverageAge(List<Person> people, Predicate<Person> match)
{
    double sum = 0.0;
    int count = 0;
    foreach (var person in people)
    {
        if (match(person)) // <-- the decision to match
        {                  //     is now delegated to callers
            sum += person.Age;
            count++;
        }
    }
    return sum / count;
}

var males = AverageAge(people, p => p.Gender == Gender.Male);
var females = AverageAge(people, p => p.Gender == Gender.Female);
And since LINQ already has a bunch of handy extension methods like this, you actually don't even need to write your own methods:
// replace everything we've written above with these two lines
var males = list.Where(p => p.Gender == Gender.Male).Average(p => p.Age);
var females = list.Where(p => p.Gender == Gender.Female).Average(p => p.Age);
In this last LINQ version the if statement has "disappeared" completely, although, to be honest, the problem wasn't in the if by itself, but in the entire code pattern (simply because it was duplicated), and the if still actually exists, but it's written inside the LINQ Where extension method, which has been tested and closed for modification. Having less of your own code is always a good thing: fewer things to test, fewer things to go wrong, and the code is simpler to follow, analyze and maintain.
Huge runs of nested if/else statements
When you see a function spanning 1000 lines and having dozens of nested if blocks, there is an enormous chance it can be rewritten to:
use a better data structure and organize the input data in a more appropriate manner (e.g. a hashtable, which will map one input value to another in a single call),
use a formula, a loop, or sometimes just an existing function which performs the same logic in 10 lines or less (e.g. this notorious example comes to my mind, but the general idea applies to other cases),
use guard clauses to prevent nesting (guard clauses give more confidence into the state of variables throughout the function, because they get rid of exceptional cases as soon as possible),
at least replace with a switch statement where appropriate.
Refactor when you feel it's a code smell, but don't over-engineer
Having said all this, you should not spend sleepless nights over having a couple of conditionals here and there. While these answers can provide some general rules of thumb, the best way to be able to detect constructs which need refactoring is through experience. Over time, some patterns emerge that result in modifying the same clauses over and over again.
There is another sense in which if can be evil: when it comes instead of polymorphism. E.g.
if (animal.isFrog()) croak(animal)
else if (animal.isDog()) bark(animal)
else if (animal.isLion()) roar(animal)
instead of
animal.emitSound()
But basically if is a perfectly acceptable tool for what it does. It can be abused and misused of course, but it is nowhere near the status of goto.
A good quote from Code Complete: Code as if whoever maintains your program is a violent psychopath who knows where you live. — Anonymous IOW, keep it simple. If the readability of your application will be enhanced by using a predicate in a particular area, use it. Otherwise, use the 'if' and move on.
I think it depends on what you're doing, to be honest. If you have a simple if..else statement, why use a predicate? If you can, use a switch for larger if replacements, and then, if there is the option to use a predicate for large operations (where it makes sense; otherwise your code will be a nightmare to maintain), use it. This guy seems to have been a bit pedantic for my liking. Replacing all ifs with predicates is just crazy talk.
There is the Anti-If campaign which started earlier in the year. The main premise is that many nested if statements can often be replaced with polymorphism. I would be interested to see an example of using a Predicate instead. Is this more along the lines of functional programming?
Just like in the bible verse about money, if statements are not evil -- the LOVE of if statements is evil. A program without if statements is a ridiculous idea, and using them as necessary is essential. But a program that has 100 if-else if blocks in a row (which, sadly, I have seen) is definitely evil.
I have to say that I recently have begun to view if statements as a code smell: especially when you find yourself repeating the same condition several times. But there's something you need to understand about code smells: they don't necessarily mean that the code is bad. They just mean that there's a good chance the code is bad. For instance, comments are listed as a code smell by Martin Fowler, but I wouldn't take anyone seriously who says "comments are evil; don't use them". Generally though, I prefer to use polymorphism instead of if statements where possible. That just makes for so much less room for error. I tend to find that a lot of the time, using conditionals leads to a lot of tramp arguments as well (because you have to pass the data needed to form the conditional on to the appropriate method).
if is not evil (I also hold that assigning morality to code-writing practices is asinine...). Mr. Haines is being silly and should be laughed at.
I'll agree with you; he was wrong. You can go too far with things like that, too clever for your own good. Code created with predicates instead of ifs would be horrendous to maintain and test.
Predicates come from logical/declarative programming languages, like PROLOG. For certain classes of problems, like constraint solving, they are arguably superior to a lot of drawn out step-by-step if-this-do-that-then-do-this crap. Problems that would be long and complex to solve in imperative languages can be done in just a few lines in PROLOG. There's also the issue of scalable programming (due to the move towards multicore, the web, etc.). If statements and imperative programming in general tend to be in step-by-step order, and not scaleable. Logical declarations and lambda calculus though, describe how a problem can be solved, and what pieces it can be broken down into. As a result, the interpreter/processor executing that code can efficiently break the code into pieces, and distribute it across multiple CPUs/cores/threads/servers. Definitely not useful everywhere; I'd hate to try writing a device driver with predicates instead of if statements. But yes, I think the main point is probably sound, and worth at least getting familiar with, if not using all the time.
The only problem with predicates (in terms of replacing if statements) is that you still need to test them:
void Test(Predicate<int> pr, int num)
{
    if (pr(num)) { /* do something */ }
    else { /* do something else */ }
}
You could of course use the ternary operator (?:), but that's just an if statement in disguise...
Perhaps with quantum computing it will be a sensible strategy to not use IF statements but to let each leg of the computation proceed and only have the function 'collapse' at termination to a useful result.
Sometimes it's necessary to take an extreme position to make your point. I'm sure this person uses if -- but every time you use an if, it's worth having a little think about whether a different pattern would make the code clearer.
Preferring polymorphism to if is at the core of this. Rather than:
if (animaltype == bird) {
    squawk();
} else if (animaltype == dog) {
    bark();
}
... use:
animal.makeSound();
But that supposes that you've got an Animal class/interface -- so really what the if is telling you is that you need to create that interface.
So in the real world, what sort of ifs do we see that lead us to a polymorphism solution?
if (logging) {
    log.write("Did something");
}
That's really irritating to see throughout your code. How about, instead, having two (or more) implementations of Logger?
this.logger = new NullLogger();   // logger.log() does nothing
this.logger = new StdOutLogger(); // logger.log() writes to stdout
That leads us to the Strategy Pattern. Instead of:
if (user.getCreditRisk() > 50) {
    decision = thoroughCreditCheck();
} else if (user.getCreditRisk() > 20) {
    decision = mediumCreditCheck();
} else {
    decision = cursoryCreditCheck();
}
... you could have ...
decision = getCreditCheckStrategy(user.getCreditRisk()).decide();
Of course getCreditCheckStrategy() might contain an if -- and that might well be appropriate. You've pushed it into a neat place where it belongs.
It probably comes down to a desire to keep code cyclomatic complexity down, and to reduce the number of branch points in a function. If a function is simple to decompose into a number of smaller functions, each of which can be tested, you can reduce the complexity and make code more easily testable.
IMO: I suspect he was trying to provoke a debate and make people think about the misuse of 'if'. No one would seriously suggest that such a fundamental construct of programming syntax should be completely avoided, would they?
It's good that in Ruby we have unless ;) But seriously, if is probably the next goto: even though many people think it is evil, in some cases it simplifies/speeds things up (and in some cases, like low-level, highly optimized code, it's a must).
I think If statements are evil, but If expressions are not. What I mean by an if expression in this case can be something like the C# ternary operator (condition ? trueExpression : falseExpression). This is not evil because it is a pure function (in a mathematical sense). It evaluates to a new value, but it has no effects on anything else. Because of this, it works in a substitution model. Imperative If statements are evil because they force you to create side-effects when you don't need to. For an If statement to be meaningful, you have to produce different "effects" depending on the condition expression. These effects can be things like IO, graphic rendering or database transactions, which change things outside of the program. Or, it could be assignment statements that mutate the state of the existing variables. It is usually better to minimize these effects and separate them from the actual logic. But, because of the If statements, we can freely add these "conditionally executed effects" everywhere in the code. I think that's bad.
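A small Python sketch of that distinction (an invented example, not from the answer above): the expression form is a pure function of its inputs, while the statement form has to perform an effect in each branch:
# Expression form: evaluates to a value, no effects, easy to substitute.
def shipping_cost(total):
    return 0 if total >= 100 else 5

# Statement form: each branch performs an effect (here, mutation of shared state),
# which is exactly what the answer above argues should be minimized and separated.
def apply_shipping(order):
    if order["total"] >= 100:
        order["shipping"] = 0
    else:
        order["shipping"] = 5

print(shipping_cost(120))  # 0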
If is not evil! Consider...
int sum(int a, int b) {
    return a + b;
}
Boring, eh? Now with an added if...
int sum(int a, int b) {
    if (a == 0 && b == 0) {
        return 0;
    }
    return a + b;
}
... your code creation productivity (measured in LOC) is doubled. Also code readability has improved much, for now you can see in the blink of an eye what the result is when both arguments are zero. You couldn't do that in the code above, could you? Moreover, you supported the test team, for they now can push their code coverage tools closer to the limits.
Furthermore, the code now is better prepared for future enhancements. Let's guess, for example, that the sum should be zero if one of the arguments is zero (don't laugh and don't blame me, silly customer requirements, you know, and the customer is always right). Because of the if in the first place, only a slight code change is needed:
int sum(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return a + b;
}
How much more code change would have been needed if you hadn't invented the if right from the start? Thankfulness will be yours on all sides.
Conclusion: There's never enough ifs.