Fibonacci memoization order of execution - dynamic-programming

The following code computes the Fibonacci sequence using memoization, but I do not understand the order of execution in the algorithm. If we call dynamic_fib(4), it calculates dynamic_fib(3) + dynamic_fib(2). The left side is called first, and it in turn calculates dynamic_fib(2) + dynamic_fib(1). But while calculating dynamic_fib(3), how does the cached answer for dynamic_fib(2) propagate up to be reused, when we are not saving the result to a memory address of the dictionary like &dic[n] in C?
What I think should happen is that the answer for dynamic_fib(2) is gone, because it only existed in that stack frame, so you would have to calculate dynamic_fib(2) again when calculating dynamic_fib(4).
Am I missing something?
def dynamic_fib(n):
    return fibonacci(n, {})

def fibonacci(n, dic):
    if n == 0 or n == 1:
        return n
    if not dic.get(n, False):
        dic[n] = fibonacci(n-1, dic) + fibonacci(n-2, dic)
    return dic[n]

The function dynamic_fib (called once) just delegates the work to fibonacci, where the real work is done. In fibonacci you have the dictionary dic, which saves each value of the function once it is calculated. Crucially, the dictionary is a mutable object passed by reference: every recursive call receives the very same dic, so a result cached in one branch of the recursion is visible to all later calls, and nothing is "gone" when a stack frame returns. So for each value from 2 to n, the first call to fibonacci computes the result and stores it in the dictionary; the next time we ask for it, we already have it and don't need to traverse the whole tree again. The complexity is therefore linear, O(n).
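To see that sharing in action, here is a small sketch of the same memoized function with a `calls` list added purely for instrumentation (the list and the `n not in dic` test are additions for this illustration, not part of the original code):

```python
calls = []  # record every n the function is entered with

def fibonacci(n, dic):
    calls.append(n)
    if n == 0 or n == 1:
        return n
    # Every frame receives the *same* dict object, so a value cached
    # in the left subtree is immediately visible to the right subtree.
    if n not in dic:
        dic[n] = fibonacci(n - 1, dic) + fibonacci(n - 2, dic)
    return dic[n]

result = fibonacci(10, {})
print(result, len(calls))  # 55 19
```

Each of fib(2) through fib(10) is computed exactly once and spawns two calls, so the total number of entries is 2n - 1 = 19 rather than the exponential count of the naive recursion.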

Related

What's the Big-O-notation for this algorithm for printing the prime numbers?

I am trying to figure out the time complexity of the below problem.
import math
def prime(n):
    for i in range(2, n+1):
        for j in range(2, int(math.sqrt(i))+1):
            if i % j == 0:
                break
        else:
            print(i)
prime(36)
This program prints the prime numbers up to 36.
My understanding of the above program: for every i up to n, the inner loop runs about sqrt(i) times.
So the Big-O notation is O(n sqrt(n)).
Is my understanding right? Please correct me if I am wrong.
Time complexity measures the increase in the number of steps (basic operations) as the input scales up:
O(1) : constant (hash look-up)
O(log n) : logarithmic in base 2 (binary search)
O(n) : linear (search for an element in unsorted list)
O(n^2) : quadratic (bubble sort)
Determining the exact complexity of an algorithm requires a fair amount of math and algorithms knowledge. You can find a detailed description of them here: time complexity
Also keep in mind that these values are considered for very large values of n, so as a rule of thumb, whenever you see nested for loops, think O(n^2).
You can add a steps counter inside your inner for loop and record its value for different values of n, then print the relation in a graph. Then you can compare your graph with the graphs of n, log n, n * sqrt(n) and n^2 to determine exactly where your algorithm is placed.
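That counting experiment might look like the following sketch (the `steps` counter is added here purely for measurement; the `prime_steps` name is illustrative):

```python
import math

def prime_steps(n):
    """Count inner-loop iterations of the prime-printing routine."""
    steps = 0
    for i in range(2, n + 1):
        for j in range(2, int(math.sqrt(i)) + 1):
            steps += 1  # one basic operation
            if i % j == 0:
                break
    return steps

for n in (100, 1000, 10000):
    s = prime_steps(n)
    # Compare the count against n * sqrt(n) to see how the ratio behaves.
    print(n, s, round(s / (n * math.sqrt(n)), 3))
```

If the last column settles toward a constant as n grows, that supports the O(n sqrt(n)) hypothesis; if it keeps shrinking, the true bound is lower (early `break`s on composite numbers save a lot of work).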

Time complexity of the python3 code below:

if val in my_list:
    return my_list.index(val)
The 'in' operator has an average complexity of O(n). index() has a complexity of O(n) in the worst case. Is the complexity of these two lines of code then quadratic, i.e. O(n^2), or O(n)?
Assuming that List is replaced with a valid variable name, it should be O(n) (as Nathaniel mentioned). The in operation runs on average n/2 times, and in some cases the index operation runs again, also on average n/2 times. -> O(n) + O(n) = O(n)
Why don't you use a for loop over the indexes themselves?
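That single-pass suggestion might look like the following sketch (the `find_index` name is illustrative; it returns None when the value is absent, instead of falling through like the original two-line version):

```python
def find_index(val, my_list):
    # One pass: test membership and locate the position simultaneously,
    # instead of scanning once for `in` and again for `.index()`.
    for idx, item in enumerate(my_list):
        if item == val:
            return idx
    return None

print(find_index(3, [5, 3, 7]))  # 1
print(find_index(9, [5, 3, 7]))  # None
```

Asymptotically this is the same O(n), but it touches each element at most once instead of up to twice.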
Generally speaking, if you have two O(n) operations, it will only become O(n^2) if the latter operation happens each time the former operation runs. In this case, the if is a branch - either you go down the True evaluation branch or the False evaluation branch.
Therefore, you have either:
if val in my_list -> evaluates to false, takes O(n) to check each element
End, because there is nothing else to do here. Total of O(n).
Or
if val in my_list -> evaluates to true, takes O(n) to check each element
my_list.index(val) -> find the index, takes O(n) to check each element
End. Total is O(n) plus O(n)
Compare this to:
for i in my_list:
    if i % 2 == 0:
        print("Index of even number is {}".format(my_list.index(i)))
Here we are iterating through the list, and on each element we might re-iterate through the whole list. This would be O(n^2). (Strictly, since index() never scans past the first occurrence of the current value, the total work on distinct elements is 1 + 2 + ... + n, which is about n^2/2: still O(n^2), just with a smaller constant. It is hard to contrive an example where index() makes this worse than quadratic.)
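A quick way to check that arithmetic is to count how far index() scans in that loop (the counting function below is an illustrative sketch, not part of the original answer):

```python
def scans_for_even_index(lst):
    """Total elements examined by lst.index(i) across the loop."""
    scans = 0
    for i in lst:
        if i % 2 == 0:
            # index() scans from position 0 up to the first match,
            # so it examines lst.index(i) + 1 elements.
            scans += lst.index(i) + 1
    return scans

# 100 distinct even numbers: index() must scan back to each position.
evens = list(range(0, 200, 2))
print(scans_for_even_index(evens))  # 5050, i.e. 1 + 2 + ... + 100
```

5050 is n(n+1)/2 for n = 100, confirming the quadratic (not merely n log n) growth of the nested pattern.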

Valid Sudoku: How to decrease runtime

The problem is to check whether a given 2D array represents a valid Sudoku. The required conditions are given below:
Each row must contain the digits 1-9 without repetition.
Each column must contain the digits 1-9 without repetition.
Each of the 9 3x3 sub-boxes of the grid must contain the digits 1-9 without repetition.
Here is the code I prepared for this. Please give me tips on how I can make it faster and reduce the runtime, and tell me whether using the dictionaries is slowing my program down.
def isValidSudoku(self, boards: List[List[str]]) -> bool:
    r = {}
    a = {}
    for i in range(len(boards)):
        c = {}
        for j in range(len(boards[i])):
            if boards[i][j] != '.':
                x, y = r.get(boards[i][j] + f'{j}', 0), c.get(boards[i][j], 0)
                u, v = (i + 3) // 3, (j + 3) // 3
                z = a.get(boards[i][j] + f'{u}{v}', 0)
                if x == 0 and y == 0 and z == 0:
                    r[boards[i][j] + f'{j}'] = x + 1
                    c[boards[i][j]] = y + 1
                    a[boards[i][j] + f'{u}{v}'] = z + 1
                else:
                    return False
    return True
Simply optimizing the assignment step without rethinking your algorithm limits your overall efficiency by a lot: when you make a choice, you generally take a long time before discovering a contradiction.
Instead of representing, "Here are the values that I have figured out", try to represent, "Here are the values that I have left to try in each spot." And now your fundamental operation is, "Eliminate this value from this spot." (Remember, getting it down to 1 propagates to eliminating the value from all of its peers, potentially recursively.)
Assignment is now "Eliminate all values but this one from this spot."
And now your fundamental search operation is, "Find the square with the least number of remaining possibilities > 1. Try each possibility in turn."
This may feel heavyweight. But the immediate propagation of constraints results in very quickly discovering constraints on the rest of the solution, which is far faster than having to do exponential amounts of reasoning before finding the logical contradiction in your partial solution so far.
I recommend doing this yourself, but https://norvig.com/sudoku.html has full working code that you can look at as needed.
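For the validity check as originally asked (rather than full solving), a common simplification is a single set of seen (digit, unit) markers — one membership test and one insert per cell. A sketch under that assumption (function name and marker format are illustrative):

```python
def is_valid_sudoku(board):
    """board: 9x9 list of lists containing '1'..'9' or '.'."""
    seen = set()
    for i in range(9):
        for j in range(9):
            d = board[i][j]
            if d == '.':
                continue
            # One marker per row, column, and 3x3 box the cell belongs to.
            markers = ((d, 'row', i),
                       (d, 'col', j),
                       (d, 'box', i // 3, j // 3))
            for m in markers:
                if m in seen:
                    return False  # duplicate digit in some unit
                seen.add(m)
    return True
```

This keeps the same O(81) work but avoids rebuilding a per-row dictionary and concatenating key strings on every cell.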

Time complexity of multiply(a, b) function

What is the Time Complexity/Order of Growth for the function below?
def multiply(a, b):
    '''Takes two integers and computes their product.'''
    res = 0
    for i in range(1, b+1):
        res += a
    return res
I know the size of b makes it linear, however what about the size of a?
Thanks!
The size of 'a' doesn't affect the time complexity of your algorithm. Since you perform the addition b times, your complexity is in the order of b.
Being O(b) is a mathematical property of the function, not an exact characterization of it. The exact running time might be 2045*b + 3542, where the constants 2045 and 3542 (stated here just as an example) depend on the input and the size of the input, which is where the size of the variable 'a' comes in.
Hence the size of a affects the running time of your code, but not the time complexity of the code.
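A sketch that makes the distinction concrete (the `steps` counter is an addition for illustration): the loop count tracks b and is completely indifferent to a.

```python
def multiply_counted(a, b):
    """Same repeated-addition multiply, but also report loop iterations."""
    res = 0
    steps = 0
    for i in range(1, b + 1):
        res += a
        steps += 1
    return res, steps

print(multiply_counted(7, 4))       # (28, 4)
print(multiply_counted(10**6, 4))   # (4000000, 4) -- huge a, same step count
```

Making b a million would multiply the step count by 250,000; making a a million changes nothing but the cost of each individual addition.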

Dynamic Programming: 0/1 Knapsack - Retrieving Combinations as array

I've been studying Dynamic Programming from both a bottom-up iterative approach and a top-down recursive approach using memoization.
I've been tasked with solving the 0/1 Knapsack Problem and have successfully used the bottom-up approach but am unable to use the top-down approach.
Using information from a webpage(http://www.csl.mtu.edu/cs4321/www/Lectures/Lecture%2017%20-%20Knapsack%20Problem%20and%20Memory%20Function.htm) I have come up with the following pseudocode which successfully computes the Value of the optimal solution. My issue is I cannot think of a way to keep track of the correct combination of the items which constitute this solution.
// values array containing the "profits" of each item
// weights array containing the "weight" of each item
// memo_pad is a list used to memoize recursive results
values[], weights[], memo_pad[]

knapsack_memoized(i, w):
    // i is the current item
    // w is the remaining weight allowed in the knapsack
    if memo_pad[i][w] < 0:  // if value not memoized
        if w < weights[i]:
            memo_pad[i][w] = knapsack_memoized(i-1, w)
        else:
            memo_pad[i][w] = max{knapsack_memoized(i-1, w),
                                 values[i] + knapsack_memoized(i-1, w-weights[i])}
    return memo_pad[i][w]
end
I cannot figure out how to find what combination of the input items gives me the returned optimized value.
You are looking to return the maximum of two cases:
(1) the nth item included
(2) the nth item not included
Try this...
    else:
        memo_pad[i][w] = max{values[i] + memo_pad[i-1][w-weights[i]],
                             memo_pad[i-1][w]}
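To address the reconstruction part of the question directly: once the memo table is filled, you can walk backwards from the last item and test which branch produced each optimum — item i was taken exactly when skipping it would change the best value. A Python sketch of this standard technique (names like `knapsack` and `best` are illustrative, not from the pseudocode above):

```python
def knapsack(values, weights, W):
    """Return (best total value, sorted indices of chosen items)."""
    n = len(values)
    memo = {}

    def best(i, w):
        # Best value achievable using items 0..i with capacity w.
        if i < 0:
            return 0
        if (i, w) not in memo:
            if weights[i] > w:
                memo[(i, w)] = best(i - 1, w)
            else:
                memo[(i, w)] = max(best(i - 1, w),
                                   values[i] + best(i - 1, w - weights[i]))
        return memo[(i, w)]

    total = best(n - 1, W)

    # Walk back: item i is in some optimal solution iff dropping it
    # from consideration lowers the achievable value.
    chosen, w = [], W
    for i in range(n - 1, -1, -1):
        if best(i, w) != best(i - 1, w):
            chosen.append(i)
            w -= weights[i]
    return total, sorted(chosen)

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # (220, [1, 2])
```

The backtracking pass only reads memoized entries, so reconstruction adds O(n) lookups on top of the O(n*W) fill.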
