Big-O analysis of permutation algorithm - python-3.x

result = False

def permute(a, l, r, b):
    global result
    if l == r:
        # A full permutation of a has been built; compare it to b.
        if a == b:
            result = True
    else:
        for i in range(l, r + 1):
            a[l], a[i] = a[i], a[l]   # place a[i] at position l
            permute(a, l + 1, r, b)   # permute the remaining suffix
            a[l], a[i] = a[i], a[l]   # backtrack (undo the swap)

string1 = list("abc")
string2 = list("ggg")
permute(string1, 0, len(string1) - 1, string2)
So basically I think that finding each permutation takes n^2 steps (times some constant), and finding all permutations should take n! steps. So does this make it O(n^2 * n!)? And if so, does the n! take over, making it just O(n!)?
Thanks
Edit: this algorithm might seem weird for just finding permutations, and that is because I'm also using it to test for anagrams between the two strings. I just haven't renamed the method yet, sorry.

Finding each permutation doesn't take O(N^2). Creating each permutation happens in O(1) amortized time. While it is tempting to say this is O(N) because you assign a new element to each index N times per permutation, each permutation shares assignments with other permutations.
When we do:
a[l], a[i] = a[i], a[l]
permute(a, l+1, r, b)
All subsequent recursive calls of permute down the line have this assignment already in place.
In reality, assignments only happen each time permute is called, which is Σ_{k=1}^{N} N!/k! times: one call at the top level, N at the next, N(N-1) at the next, and so on, down to the N! base-case calls. We can then determine the time complexity to build each permutation using some limit calculus. We take the number of calls over the total number of permutations as N approaches infinity.
We have:
lim_{N→∞} ( Σ_{k=1}^{N} N!/k! ) / N!
Expanding the sigma:
lim_{N→∞} ( N!/1! + N!/2! + ... + N!/N! ) / N! = lim_{N→∞} ( 1/1! + 1/2! + ... + 1/N! )
The limit of the sum is the sum of the limits:
1/1! + 1/2! + 1/3! + ... = e - 1 ≈ 1.718
Since our result is a constant, we get that our complexity per permutation is O(1).
However, we're forgetting about this part:
if l == r:
    if a == b:
        result = True
The comparison of a == b (between two lists) occurs in O(N). Building each permutation takes O(1), but our comparison at the end, which occurs for each permutation, actually takes O(N). This gives us a time complexity of O(N) per permutation.
This gives you N! permutations times O(N) for each permutation giving you a total time complexity of O(N!) * O(N) = O(N * N!).
Your final time complexity doesn't reduce to O(N!), since O(N * N!) is larger than O(N!) by a factor of N, and only constant factors get dropped (the same reason why O(N log N) != O(N)).
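As a sanity check on that limit, here is a small instrumented sketch (the count_calls helper and its counter are my additions, not part of the original code) that counts invocations of permute and divides by N!; the ratio should approach e - 1 ≈ 1.718 as N grows:

import math

def count_calls(n):
    # Count how many times permute() is invoked for an input of size n.
    calls = 0
    a = list(range(n))

    def permute(l, r):
        nonlocal calls
        calls += 1
        if l == r:
            return
        for i in range(l, r + 1):
            a[l], a[i] = a[i], a[l]
            permute(l + 1, r)
            a[l], a[i] = a[i], a[l]

    permute(0, n - 1)
    return calls

for n in range(1, 9):
    # ratios: 1.0, 1.5, 1.66667, ... converging toward e - 1 = 1.71828
    print(n, round(count_calls(n) / math.factorial(n), 5))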

Related

What's the Big-O-notation for this algorithm for printing the prime numbers?

I am trying to figure out the time complexity of the program below.
import math

def prime(n):
    for i in range(2, n + 1):
        for j in range(2, int(math.sqrt(i)) + 1):
            if i % j == 0:
                break
        else:
            # for-else: runs only if the inner loop found no divisor
            print(i)

prime(36)
This program prints the prime numbers up to 36.
My understanding of the above program:
For every i, the inner loop runs about sqrt(i) times, and the outer loop runs up to n,
so the Big-O notation is O(n sqrt(n)).
Is my understanding right? Please correct me if I am wrong...
Time complexity measures the increase in the number of steps (basic operations) as the input scales up:
O(1) : constant (hash look-up)
O(log n) : logarithmic in base 2 (binary search)
O(n) : linear (search for an element in unsorted list)
O(n^2) : quadratic (bubble sort)
Determining the exact complexity of an algorithm requires a fair amount of math and algorithms knowledge. You can find a detailed description here: time complexity
Also keep in mind that these values are considered for very large values of n, so as a rule of thumb, whenever you see nested for loops, think O(n^2).
You can add a steps counter inside your inner for loop and record its value for different values of n, then print the relation in a graph. Then you can compare your graph with the graphs of n, log n, n * sqrt(n) and n^2 to determine exactly where your algorithm is placed.
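Following that suggestion, here is a minimal counter sketch (the prime_steps helper and the printed ratios are my additions); it measures the inner-loop work and compares it against the candidate growth rates:

import math

def prime_steps(n):
    # Count inner-loop iterations performed by prime(n).
    steps = 0
    for i in range(2, n + 1):
        for j in range(2, int(math.sqrt(i)) + 1):
            steps += 1
            if i % j == 0:
                break
    return steps

for n in (100, 1000, 10000, 100000):
    s = prime_steps(n)
    # Ratios against n*sqrt(n) and n^2: the bound whose ratio stays
    # closest to flat is the better description of the growth.
    print(n, s, s / (n * math.sqrt(n)), s / n**2)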

How can I find Time complexity for "while loop" and "power statement"

I'm trying to find the time complexity of two python statements:
while loops: I understand how to find the complexity class of for loops, but when it comes to while loops the case is completely different. How do I proceed here, when the condition controls the loop?
power statement: is the time complexity affected by the pow function?
Here is an example of a program statement:
import random
from math import gcd

Upper = 100
n = random.randrange(0, Upper)
while gcd(n, Upper) != 1:
    n = random.randrange(0, Upper)  # question 1

pow(c, n - 1, n)  # (c^(n-1) mod n) question 2; c is defined elsewhere
# where n is a large prime number
The time complexity of the while loop is the number of iterations times the complexities of the loop test plus loop body. (More generally, in case these complexities vary from iteration to iteration, you must consider the sum of the complexities across iterations.)
For the pow function, there is no single answer. Using a fixed-length representation, you may assume O(1) complexity, though in some contexts it could be O(log e), where e is the exponent.
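For intuition on where the O(log e) comes from: three-argument pow does modular exponentiation, which can be implemented with square-and-multiply so that the number of multiplications grows with the bit length of the exponent. A minimal sketch (modpow is my own name for it, not the actual CPython implementation):

def modpow(base, exp, mod):
    # Compute (base ** exp) % mod using square-and-multiply:
    # one squaring per exponent bit, so O(log exp) multiplications.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                     # low bit set: fold base into the result
            result = (result * base) % mod
        base = (base * base) % mod      # square for the next bit
        exp >>= 1                       # move to the next bit of the exponent
    return result

assert modpow(3, 20, 7) == pow(3, 20, 7)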

Time complexity of the python3 code below:

if val in my_list:
    return my_list.index(val)
The in operator has an average complexity of O(n). index() has a complexity of O(n) in the worst case. Is the complexity of these two lines then quadratic, i.e. O(n^2), or O(n)?
Assuming that List is replaced with a valid variable name, it should be O(n) (as Nathaniel mentioned). The in operation scans on average n/2 elements, and in some cases the index operation then scans again, also on average n/2 elements. -> O(n) + O(n) = O(n)
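As an aside, the two scans can be collapsed into one: list.index raises a ValueError when the value is absent, so you can call it directly and catch the exception. A minimal sketch (the function name find_index is mine):

def find_index(my_list, val):
    # Return the index of val, or None if it is absent, in a single O(n) pass.
    try:
        return my_list.index(val)   # raises ValueError if val is missing
    except ValueError:
        return None

Same O(n) bound either way; this just scans the list once instead of twice.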
Why don't you use a for loop over the indexes themselves?
Generally speaking, if you have two O(n) operations, it will only become O(n^2) if the latter operation happens each time the former operation runs. In this case, the if is a branch - either you go down the True evaluation branch or the False evaluation branch.
Therefore, you have either:
if val in my_list -> evaluates to false, takes O(n) to check each element
End, because there is nothing else to do here. Total of O(n).
Or
if val in my_list -> evaluates to true, takes O(n) to check each element
my_list.index(val) -> finds the index, takes O(n) to check each element
End. Total is O(n) plus O(n), which is still O(n).
Compare this to:
for i in my_list:
    if i % 2 == 0:
        print("Index of even number is {}".format(my_list.index(i)))
Here we are iterating through the list, and on each element we might re-iterate through the whole list. This would be O(n^2), i.e. quadratic (not exponential). (Strictly, if the elements are distinct, index(i) never scans past the current position, so the total work is 0 + 1 + ... + (n-1), which is still O(n^2), just with a smaller constant.)
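Following the earlier suggestion to loop over the indexes themselves: enumerate gives you the index alongside each element, so the rescanning (and the quadratic blow-up) disappears entirely:

my_list = [3, 8, 5, 6]

# One O(n) pass: enumerate yields (index, element) pairs, so no call
# to index() and no second scan of the list is needed.
for idx, value in enumerate(my_list):
    if value % 2 == 0:
        print("Index of even number is {}".format(idx))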

What is the time complexity of this algorithm (that solves leetcode question 650) (question 2)?

Hello I have been working on https://leetcode.com/problems/2-keys-keyboard/ and came upon this dynamic programming question.
You start with one 'A' on a blank page and are given a number n; when you are done, you should have n 'A's on the page. The catch is that you are allowed only two operations: copy (and you can only copy the total number of A's currently on the page) and paste. Find the minimum number of operations to get n 'A's on the page.
I solved this problem but then found a better solution in the discussion section of leetcode, and I can't figure out its time complexity.
def minSteps(self, n):
    factors = 0
    i = 2
    while i <= n:
        while n % i == 0:
            factors += i
            n //= i   # integer division keeps n an int
        i += 1
    return factors
The way this works is that i is never going to be bigger than the largest prime factor p of n, so the outer loop is O(p), and the inner while loop is basically O(log n) since we divide n by i at each iteration.
But the way I look at it, we are doing O(log n) divisions in total in the inner loop while the outer loop is O(p), so using aggregate analysis this function is basically O(max(p, log n)). Is this correct?
Any help is welcome.
Your reasoning is correct: O(max(p, log n)) gives the time complexity, assuming that arithmetic operations take constant time. This assumption does not hold for arbitrarily large n that would not fit in the machine's fixed-size number storage, where you would need big-integer operations with non-constant time complexity. But I will ignore that.
It is still odd to express the complexity in terms of p when that is not the input (but derived from it). Your input is only n, so it makes sense to express the complexity in terms of n alone.
Worst Case
Clearly, when n is prime, the algorithm is O(n): the outer loop runs n - 1 times, and the inner loop body executes only once, at i = n.
For a prime n, the algorithm will take more time than for n+1, as even the smallest factor of n+1 (i.e. 2) will halve the number of iterations of the outer loop, yet only adds one block of constant work in the inner loop.
So O(n) is the worst case.
Average Case
For the average case, we note that the division of n happens just as many times as n has prime factors (counting duplicates). For example, for n = 12, we have 3 divisions, as n = 2·2·3
The average number of prime factors for 1 < n < x approaches log log n + B, where B is some constant. So we could say the average time complexity for the total execution of the inner loop is O(log log n).
We need to add to that the execution of the outer loop. This corresponds to the average greatest prime factor. For 1 < n < x this average approaches C·n/log n, and so we have:
O(n/log n + log log n)
Now n/log n is the more important term here, so this simplifies to:
O(n/log n)
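To see the worst case concretely, here is the same loop instrumented with an iteration counter (the min_steps_count helper is my addition); a prime n forces the outer loop to run all the way up to n, while a smooth neighbor collapses almost immediately:

def min_steps_count(n):
    # minSteps from above, returning (answer, loop iterations performed).
    factors = 0
    steps = 0
    i = 2
    while i <= n:
        steps += 1
        while n % i == 0:
            steps += 1
            factors += i
            n //= i
        i += 1
    return factors, steps

print(min_steps_count(8191))   # 8191 is prime: ~8190 outer iterations
print(min_steps_count(8192))   # 8192 = 2^13: only a handful of iterations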

What would be the big O notation for the function?

I know that big O notation is a measure of how efficient a function is, but I don't really get how to calculate it.
def method(n):
    sum = 0
    for i in range(85):
        sum += i * n
    return sum
Would the answer be O(f(85))?
The complexity of this function is O(1).
In the RAM model, basic mathematical operations take constant time. The dominant term in this function is
for i in range(85):
and since 85 is a constant, the complexity is O(1).
Your function has four "actions"; to calculate its big O we need to calculate the big O of each action and take the max:
sum = 0 - constant time, O(1)
for i in range(85) - constant 85 iterations, O(85 * complexity of #3)
sum += i * n - we can say constant time, but multiplication actually depends on the bit lengths of i and n, so we can say either O(1) or O(max(lenI, lenN))
return sum - constant time, O(1)
So the possible max big O is #2, which is 85 * O(#3). Since lenI and lenN are constant (usually 32 or 64 bits), max(lenI, lenN) -> 32/64, so the total complexity of your function is O(85 * 1) = O(1).
If we have big math, i.e. the bit length of n can be very, very long, then we can say the complexity is O(bit length of n).
NOTE: the bit length of n is actually log2(n)
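You can check that note directly in Python with int.bit_length() (the loop below is just an illustration I am adding):

import math

for n in (85, 2**20, 10**100):
    # For positive n, n.bit_length() equals floor(log2(n)) + 1.
    print(n.bit_length(), math.floor(math.log2(n)) + 1)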
In theory, the complexity is O(log n): as n grows, reading the number and performing the multiplication take longer.
However, in practice, the value of n is constrained (there's a maximum value) and thus it can be read and operations can be performed on it in O(1) time. Since we repeat an O(1) operation a fixed amount of times, the complexity is still O(1).
Note that O(1) means constant time - O(85) doesn't really mean anything different. If you perform multiple constant time operations in a sequence, the result is still O(1) unless the length of the sequence depends on the size of the input. Doing a O(1) operation 1000 times is still O(1), but doing it n times is O(n).
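A quick way to see this for the function in question: instrument the loop with a step counter (my addition) and feed it wildly different n; the iteration count never moves:

def method_steps(n):
    # method() from the question, instrumented to count loop iterations.
    total = 0
    steps = 0
    for i in range(85):
        steps += 1
        total += i * n
    return total, steps

for n in (1, 10**6, 10**100):
    print(n.bit_length(), method_steps(n)[1])   # always 85 iterations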
If you want to really play it safe, just say O(∞), that's definitely a correct answer. CS teachers tend to not really appreciate it in practice though.
When talking about complexity, it should always be stated which operations are considered constant-time (the initial agreement). Here the integer multiplication can be considered either constant or not. Either way, the time complexity of the example is better than O(n). But it is kind of a teacher's trick against the students. :)
