Complexity of Fibonacci Fast Recursive Program - python-3.x

def fastfib(n, fib_dict = {0: 1, 1: 1}):
    if n not in fib_dict:
        fib_dict[n] = fastfib(n-1, fib_dict) + fastfib(n-2, fib_dict)
    return fib_dict[n]
I think the complexity here is n^2, but I am not sure.

Since you are filling a dictionary with n values, the lower bound is Ω(n). However, since you are only doing constant-time work for each value (Python's dictionary lookup is O(1), amortized), this algorithm is O(n) (amortized). This technique of saving already-computed values in a table is called memoization.
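A quick way to convince yourself of the linear bound is to count the recursive calls. A minimal sketch (the call_count wrapper is mine, and the dictionary is created per top-level call to avoid sharing the mutable default argument):

# Sketch: count calls to verify the memoized version grows linearly.
# call_count and fastfib_counted are illustrative, not part of the original code.
call_count = 0

def fastfib_counted(n, fib_dict=None):
    global call_count
    call_count += 1
    if fib_dict is None:
        fib_dict = {0: 1, 1: 1}
    if n not in fib_dict:
        fib_dict[n] = (fastfib_counted(n - 1, fib_dict)
                       + fastfib_counted(n - 2, fib_dict))
    return fib_dict[n]

for n in (10, 20, 40):
    call_count = 0
    fastfib_counted(n)
    print(n, call_count)   # grows as roughly 2n - 1: linear, not exponential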

Related

What's the Big-O-notation for this algorithm for printing the prime numbers?

I am trying to figure out the time complexity of the below problem.
import math

def prime(n):
    for i in range(2, n+1):
        for j in range(2, int(math.sqrt(i))+1):
            if i % j == 0:
                break
        else:
            print(i)

prime(36)
This program prints the prime numbers up to 36.
My understanding of the above program:
for every i the inner loop runs about sqrt(i) times, for i going up to n,
so the Big-O notation is O(n sqrt(n)).
Is my understanding right? Please correct me if I am wrong...
Time complexity measures the increase in the number of steps (basic operations) as the input scales up:
O(1) : constant (hash look-up)
O(log n) : logarithmic in base 2 (binary search)
O(n) : linear (search for an element in unsorted list)
O(n^2) : quadratic (bubble sort)
Determining the exact complexity of an algorithm requires a fair amount of math and algorithms knowledge. You can find a detailed description here: time complexity
Also keep in mind that these values are considered for very large values of n, so as a rule of thumb, whenever you see nested for loops, think O(n^2).
You can add a steps counter inside your inner for loop and record its value for different values of n, then print the relation in a graph. Then you can compare your graph with the graphs of n, log n, n * sqrt(n) and n^2 to determine exactly where your algorithm is placed.
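For example, a minimal sketch of that counting idea (prime_steps and the sample n values are mine, for illustration):

import math

def prime_steps(n):
    # Count one basic operation per inner-loop iteration.
    steps = 0
    for i in range(2, n + 1):
        for j in range(2, int(math.sqrt(i)) + 1):
            steps += 1
            if i % j == 0:
                break
    return steps

for n in (100, 1000, 10000):
    # Compare the measured step count against n * sqrt(n).
    print(n, prime_steps(n), round(n * math.sqrt(n)))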

Time complexity of the python3 code below:

if val in my_list:
    return my_list.index(val)
The 'in' operator has an average complexity of O(n). index() has a complexity of O(n) in the worst case. Is the complexity of these two lines then quadratic, i.e. O(n^2), or is it O(n)?
Assuming my_list is a valid list, it should be O(n) (as Nathaniel mentioned). The in operation runs on average n/2 times, and in some cases the index operation then runs again, also on average n/2 times. -> O(n) + O(n) = O(n)
Why don't you use a for loop over the indexes themselves?
Generally speaking, two O(n) operations only become O(n^2) when one is nested inside the other, i.e. the second operation runs once for every step of the first. In this case, the if is a branch - either you go down the True evaluation branch or the False evaluation branch.
Therefore, you have either:
if val in my_list -> evaluates to False, takes O(n) to check each element
End, because there is nothing else to do here. Total of O(n).
Or
if val in my_list -> evaluates to True, takes O(n) to check each element
my_list.index(val) -> finds the index, takes O(n) to check each element
End. Total is O(n) plus O(n), which is still O(n).
Compare this to:
for i in my_list:
    if i % 2 == 0:
        print("Index of even number is {}".format(my_list.index(i)))
Here we are iterating through the list, and for each element we may re-scan the list from the start with index(). If the elements are distinct, index(i) scans up to that element's own position, so the total work is 0 + 1 + ... + (n-1), which is O(n^2). (Note that O(n^2) is quadratic, not exponential; index() never needs to scan past the current position, but that still leaves a quadratic total.)
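As the earlier comment suggests, looping over the indexes directly avoids the re-scan entirely. A minimal sketch using enumerate (the sample list is mine):

my_list = [3, 8, 5, 12, 7]

# Single pass: enumerate yields (index, value) pairs, so no call to
# index() and no second scan of the list is needed.
for idx, value in enumerate(my_list):
    if value % 2 == 0:
        print("Index of even number is {}".format(idx))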

Cant we use this selection sort algorithm for better time complexity?

I am new in this programming world. I was reading about the selection sort algorithm and saw the Python example on the GeeksforGeeks website ( https://www.geeksforgeeks.org/selection-sort/ ). I thought that I could write more efficient code whose time complexity is less than O(n^2).
arr = [64, 25, 12, 22, 11]
for i in range(len(arr)):
    smallest_number = min(arr[i:])
    smallest_number_index = arr.index(smallest_number, i)
    arr[i], arr[smallest_number_index] = arr[smallest_number_index], arr[i]
print(arr)
In this code the time complexity is O(1).
So can I use this code instead of the GeeksforGeeks code?
I think you misunderstood the complexity calculation, and you also assumed that the standard method calls (min and index) are O(1) operations. They are not; they are O(n) operations.
Let me write the complexity of each line of your code next to the line.
for i in range(len(arr)):                                                -> O(n)
    smallest_number = min(arr[i:])                                       -> O(n)
    smallest_number_index = arr.index(smallest_number, i)                -> O(n)
    arr[i], arr[smallest_number_index] = arr[smallest_number_index], arr[i]  -> O(1)
To calculate the complexity of one loop iteration, sum the complexities of all lines inside the loop: O(n) + O(n) + O(1) = 2*O(n) + O(1), which is equivalent to O(n).
Now, since you are running the loop n times, you have to multiply the complexity of one iteration by n,
so the total complexity is: n * O(n) = O(n^2)
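For comparison, here is a sketch of selection sort with a single explicit inner scan. It is still O(n^2) overall, but each iteration makes one pass to find the minimum's index instead of the two passes done by min plus index:

arr = [64, 25, 12, 22, 11]

for i in range(len(arr)):
    # One O(n) scan finds the index of the smallest remaining element.
    smallest_number_index = i
    for j in range(i + 1, len(arr)):
        if arr[j] < arr[smallest_number_index]:
            smallest_number_index = j
    arr[i], arr[smallest_number_index] = arr[smallest_number_index], arr[i]

print(arr)   # [11, 12, 22, 25, 64]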

Big-O analysis of permutation algorithm

result = False

def permute(a, l, r, b):
    global result
    if l == r:
        if a == b:
            result = True
    else:
        for i in range(l, r+1):
            a[l], a[i] = a[i], a[l]
            permute(a, l+1, r, b)
            a[l], a[i] = a[i], a[l]

string1 = list("abc")
string2 = list("ggg")
permute(string1, 0, len(string1)-1, string2)
So basically I think that finding each permutation takes n^2 steps (times some constant), and finding all permutations should take n! steps. So does this make it O(n^2 * n!)? And if so, does the n! take over, making it just O(n!)?
Thanks
edit: this algorithm might seem weird for just finding permutations; that is because I'm also using it to test for anagrams between the two strings. I just haven't renamed the method yet, sorry.
Finding each permutation doesn't take O(N^2). Creating each permutation happens in O(1) amortized time. While it is tempting to say that this is O(N) because you assign a new element to each index N times per permutation, each permutation shares assignments with other permutations.
When we do:
a[l], a[i] = a[i], a[l]
permute(a, l+1, r, b)
All subsequent recursive calls of permute down the line have this assignment already in place.
In reality, each recursive call is bracketed by a constant number of swaps, so assignments only happen a constant number of times each time permute is called, and permute is called

    sum(k=1..N) N!/k!

times: at depth d of the recursion there are N!/(N-d)! calls, and substituting k = N-d gives the sum above. We can then determine the time complexity to build each permutation by taking the number of calls over the total number of permutations as N approaches infinity:

    lim(N->inf) [ sum(k=1..N) N!/k! ] / N!

Dividing each term N!/k! by N! leaves 1/k!, so this is:

    lim(N->inf) [ 1/1! + 1/2! + ... + 1/N! ]

This series converges to the constant e - 1 (about 1.718). Since the result is a constant, the complexity per permutation is O(1).
However, we're forgetting about this part:
if l == r:
    if a == b:
        result = True
The comparison of a == b (between two lists) occurs in O(N). Building each permutation takes O(1), but our comparison at the end, which occurs for each permutation, actually takes O(N). This gives us a time complexity of O(N) per permutation.
This gives you N! permutations times O(N) for each permutation giving you a total time complexity of O(N!) * O(N) = O(N * N!).
Your final time complexity doesn't reduce to O(N!), since O(N * N!) grows asymptotically faster than O(N!), and only constant factors get dropped (the same reason why O(N log N) != O(N)).
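If you want to sanity-check the call count empirically, here is a small sketch (the calls counter and the (e - 1) * N! comparison are mine, based on the sum above):

import math

calls = 0

def permute_counted(a, l, r, b):
    # Same structure as the original permute, plus a call counter.
    global calls
    calls += 1
    if l == r:
        _ = (a == b)           # the O(N) comparison happens at each leaf
    else:
        for i in range(l, r + 1):
            a[l], a[i] = a[i], a[l]
            permute_counted(a, l + 1, r, b)
            a[l], a[i] = a[i], a[l]

for n in (3, 5, 7):
    calls = 0
    s = list("a" * n)
    permute_counted(s, 0, n - 1, s[:])
    # measured calls match (e - 1) * n!: e.g. 10, 206, 8660
    print(n, calls, round((math.e - 1) * math.factorial(n)))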

Fibonacci memoization order of execution

The following code computes the Fibonacci sequence using memoization. But I do not understand the order of execution in the algorithm. If we call dynamic_fib(4), it will calculate dynamic_fib(3) + dynamic_fib(2). The left side is called first, and it in turn calculates dynamic_fib(2) + dynamic_fib(1). But while calculating dynamic_fib(3), how does the cached answer for dynamic_fib(2) propagate up to be reused, when we are not saving the result to the memory address of the dictionary, like &dic[n] in C?
What I think should happen is that the answer for dynamic_fib(2) is gone because it only existed in that stack frame, so you would have to calculate dynamic_fib(2) again when calculating dynamic_fib(4).
Am I missing something?
def dynamic_fib(n):
    return fibonacci(n, {})

def fibonacci(n, dic):
    if n == 0 or n == 1:
        return n
    if not dic.get(n, False):
        dic[n] = fibonacci(n-1, dic) + fibonacci(n-2, dic)
    return dic[n]
The function dynamic_fib (called once) just delegates the work to fibonacci, where the real work is done. In fibonacci you have the dictionary dic, which is used to save values of the function once they are calculated. Crucially, Python passes the dictionary by reference: every recursive call shares the same dict object, so a value stored deep in one branch is visible to all later calls. So, for each of the values 2..n, the first time fibonacci is called it calculates the result, but it also stores it in the dictionary, so that the next time we ask for it we already have it and don't need to traverse the whole tree again. The complexity is therefore linear, O(n).
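To see that sharing in action, here is a traced sketch (the print statements are mine); running it shows dynamic_fib(2)'s value being reused rather than recomputed:

def fibonacci_traced(n, dic):
    if n == 0 or n == 1:
        return n
    if n in dic:
        # Served from the cache: the same dict object is shared by all calls.
        print("cache hit for", n)
        return dic[n]
    print("computing", n)
    dic[n] = fibonacci_traced(n - 1, dic) + fibonacci_traced(n - 2, dic)
    return dic[n]

fibonacci_traced(4, {})
# Output: computing 4, computing 3, computing 2, cache hit for 2.
# The value stored while evaluating the left branch (for 3) is visible
# when the right branch (for 2) is evaluated afterwards.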
