Nested constant-size loop time complexity: O(n) or O(n^2)?

I know if I have a nested loop depending on N elements, the time complexity for it would be O(N^2).
In the case where I have a constant-size loop nested inside an N loop, like this:
int i = 0;
while (i < N) {
    int j = 0;
    while (j < 1000) {
        // code //
        j++;
    }
    i++;
}
what would be the time complexity here? It's not O(N^2), but I don't know if it's O(N) or something in between. It doesn't seem linear to me.

The constant-size inner loop takes constant time, O(1), regardless of N, so the nested loop does a fixed amount of work for each of the N outer iterations: 1000 · N basic steps in total, which is indeed linear, O(N).
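A quick way to see the linearity is to count the inner-loop iterations directly; a minimal sketch (the count_ops helper is mine, not from the question):

```python
def count_ops(N, inner=1000):
    # Count how many times the innermost statement runs.
    ops = 0
    i = 0
    while i < N:
        j = 0
        while j < inner:
            ops += 1
            j += 1
        i += 1
    return ops

# Doubling N doubles the work: the growth is linear in N.
assert count_ops(10) == 10 * 1000
assert count_ops(20) == 2 * count_ops(10)
```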

Related

How can I find Time complexity for "while loop" and "power statement"

I'm trying to find the time complexity of two python statements:
while loops: I understand how to find the complexity class of for loops, but when it comes to while loops the case is completely different. How can I proceed here? The condition controls the loop...
power statement: is the time complexity affected by the pow function?
Here is an example of a program statement:
import random
from math import gcd

Upper = 100
n = random.randrange(0, Upper)
while gcd(n, Upper) != 1:
    n = random.randrange(0, Upper)  # question 1
pow(c, n - 1, n)  # (c^(n-1)) mod n, question 2
# where n is a large prime number
The time complexity of the while loop is the number of iterations times the complexities of the loop test plus loop body. (More generally, in case these complexities vary from iteration to iteration, you must consider the sum of the complexities across iterations.)
For the pow function, there is no single answer. Using a fixed-length representation, you may assume O(1) complexity, though in some contexts it could be O(log e), where e is the exponent.
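For context on the O(log e) figure: modular exponentiation is typically implemented by square-and-multiply, which does one or two multiplications per bit of the exponent. A hedged sketch (modpow is my own name; Python's built-in three-argument pow already does this):

```python
def modpow(base, exp, mod):
    """Square-and-multiply: O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # current exponent bit is set
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result

assert modpow(3, 99, 7) == pow(3, 99, 7)
```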

Can't we use this selection sort algorithm for better time complexity?

I am new to this programming world. I was reading about the selection sort algorithm and saw the Python example on the GeeksforGeeks website ( https://www.geeksforgeeks.org/selection-sort/ ). I thought that I could write more efficient code whose time complexity is less than O(n^2).
arr = [64, 25, 12, 22, 11]
for i in range(len(arr)):
    smallest_number = min(arr[i:])
    smallest_number_index = arr.index(smallest_number, i)
    arr[i], arr[smallest_number_index] = arr[smallest_number_index], arr[i]
print(arr)
I think the time complexity of this code is O(1).
So can I use this code instead of the GeeksforGeeks code?
I think you misunderstood complexity calculation, and you assumed that the standard method calls (min and index) are O(1) operations. They are not; they are O(n) operations.
Let me write the complexity of each line of your code next to the line.
for i in range(len(arr)): -> O(n)
smallest_number = min(arr[i:]) -> O(n)
smallest_number_index = arr.index(smallest_number, i) -> O(n)
arr[i] , arr[smallest_number_index] = arr[smallest_number_index], arr[i] -> O(1)
To calculate the complexity of one loop iteration, sum the complexities of all lines inside the loop: O(n) + O(n) + O(1), which is equivalent to O(n).
Now, since the loop runs n times, multiply the number of iterations by the complexity of one iteration,
so the total complexity is n * O(n) = O(n^2).
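You can verify the quadratic behaviour by counting how many elements min has to scan; a sketch under the assumption that min(arr[i:]) inspects len(arr) - i elements (the counting helper is mine):

```python
def selection_sort_scans(arr):
    # Returns (sorted copy, number of elements scanned by min).
    a = arr[:]
    scans = 0
    for i in range(len(a)):
        scans += len(a) - i              # min(a[i:]) looks at n - i elements
        smallest = min(a[i:])
        j = a.index(smallest, i)
        a[i], a[j] = a[j], a[i]
    return a, scans

a, scans = selection_sort_scans([64, 25, 12, 22, 11])
assert a == [11, 12, 22, 25, 64]
assert scans == 5 + 4 + 3 + 2 + 1  # n(n+1)/2 scans: O(n^2)
```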

Big-O analysis of permutation algorithm

result = False
def permute(a, l, r, b):
    global result
    if l == r:
        if a == b:
            result = True
    else:
        for i in range(l, r + 1):
            a[l], a[i] = a[i], a[l]
            permute(a, l + 1, r, b)
            a[l], a[i] = a[i], a[l]

string1 = list("abc")
string2 = list("ggg")
permute(string1, 0, len(string1) - 1, string2)
So basically I think that finding each permutation takes n^2 steps (times some constant), and finding all permutations should take n! steps. So does this make it O(n^2 * n!)? And if so, does the n! take over, making it just O(n!)?
Thanks
edit: this algorithm might seem weird for just finding permutations; that is because I'm also using it to test for anagrams between the two strings. I just haven't renamed the method yet, sorry.
Finding each permutation doesn't take O(N^2); creating each permutation happens in O(1) time. While it is tempting to say that this is O(N) because you assign a new element to each index N times per permutation, each permutation shares assignments with other permutations.
When we do:
a[l], a[i] = a[i], a[l]
permute(a, l+1, r, b)
All subsequent recursive calls of permute down the line have this assignment already in place.
In reality, assignments only happen each time permute is called. At recursion depth l there are N!/(N-l)! calls, so the total number of calls is
N!/1! + N!/2! + ... + N!/N!
and each call performs only a constant number of swap assignments per recursive call it makes. We can then determine the cost to build each permutation by dividing the number of calls by the total number of permutations as N approaches infinity:
lim (N -> infinity) [N!/1! + N!/2! + ... + N!/N!] / N!
Cancelling the N! factors term by term:
lim (N -> infinity) (1/1! + 1/2! + ... + 1/N!)
This series converges to the constant e - 1 (about 1.718). Since our result is a constant, we get that our complexity per permutation is O(1).
However, we're forgetting about this part:
if l==r:
if a==b:
result = True
The comparison of a == b (between two lists) occurs in O(N). Building each permutation takes O(1), but our comparison at the end, which occurs for each permutation, actually takes O(N). This gives us a time complexity of O(N) per permutation.
This gives you N! permutations times O(N) for each permutation giving you a total time complexity of O(N!) * O(N) = O(N * N!).
Your final time complexity doesn't reduce to O(N!): O(N * N!) grows asymptotically faster than O(N!), and only constant factors get dropped (the same reason O(NlogN) != O(N)).
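To see the constant concretely, you can count calls to permute and compare against N!; a sketch (the counting wrapper is mine, and the comparison step is dropped to isolate the permutation-building cost):

```python
import math

def count_permute_calls(n):
    # Count every call made by the swap-based permute recursion.
    calls = 0
    def permute(a, l, r):
        nonlocal calls
        calls += 1
        if l == r:
            return
        for i in range(l, r + 1):
            a[l], a[i] = a[i], a[l]
            permute(a, l + 1, r)
            a[l], a[i] = a[i], a[l]
    permute(list(range(n)), 0, n - 1)
    return calls

n = 7
calls = count_permute_calls(n)
# Total calls = sum over k of n!/k!, roughly (e - 1) * n!: a constant per permutation.
assert calls == sum(math.factorial(n) // math.factorial(k) for k in range(1, n + 1))
assert calls < 2 * math.factorial(n)
```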

Time Complexity of Dependent Nested Loop

Hi, I've been trying for a while now to understand what the time complexity of this nested loop will be.
int i = 1;
while (i < n) {
    int j = 0;
    while (j < n / i) {
        j++;
    }
    i = 2 * i;
}
Based on the couple of calculations I've done, I think its Big O notation is O(log(n)), but I'm not sure if that is correct. I've tried looking for examples where the inner loop speeds up at this rate, but I couldn't find anything.
Thanks
One piece of information that surprisingly few people use when calculating complexity is: the sum of a series of terms is equal to the average term multiplied by the number of terms. In other words, you can replace a changing term by its average and get the same result.
So, your outer while loop repeats O(log n) times. But the inner while loop repeats n, n/2, n/4, n/8, ..., 1 times, depending on which step of the outer loop we are in. Now, (n, n/2, n/4, ..., 1) is a geometric progression with log(n) terms and ratio 1/2, whose sum is n(1 - 1/n)/(1/2) = 2n - 2, which is O(n). Its average, therefore, is O(n/log(n)). Since it repeats O(log(n)) times, the whole complexity is O(log(n) * n/log(n)) = O(n).
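A direct count confirms the linear total; a sketch (total_inner_iterations is my own name for the counter):

```python
def total_inner_iterations(n):
    # Count how many times the inner loop body runs across all outer steps.
    count = 0
    i = 1
    while i < n:
        j = 0
        while j < n // i:
            j += 1
            count += 1
        i *= 2
    return count

# n + n/2 + n/4 + ... + 2 stays below 2n: linear growth.
assert total_inner_iterations(1024) == 1024 + 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2
assert total_inner_iterations(1024) < 2 * 1024
```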

What is the time complexity for repeatedly doubling a string?

Consider the following piece of C++ code:
string s = "a";
for (int i = 0; i < n; i++) {
s = s + s; // Concatenate s with itself.
}
Usually, when analyzing the time complexity of a piece of code, we would determine how much work the inner loop does, then multiply it by the number of times the outer loop runs. However, in this case, the amount of work done by the inner loop varies from iteration to iteration, since the string being built up gets longer and longer.
How would you analyze this code to get the big-O time complexity?
The time complexity of this function is Θ(2^n). To see why this is, let's look at what the function does, then see how to analyze it.
For starters, let's trace through the loop for n = 3. Before iteration 0, the string s is the string "a". Iteration 0 doubles the length of s to make s = "aa". Iteration 1 doubles the length of s to make s = "aaaa". Iteration 2 then doubles the length of s to make s = "aaaaaaaa".
If you'll notice, after k iterations of the loop, the length of the string s is 2^k. This means that each iteration of the loop will take longer and longer to complete, because it will take more and more work to concatenate the string s with itself. Specifically, the kth iteration of the loop will take time Θ(2^k) to complete, because the loop iteration constructs a string of size 2^(k+1).
One way that we could analyze this function would be to multiply the worst-case time complexity of the inner loop by the number of loop iterations. Since each loop iteration takes time O(2^n) to finish and there are n loop iterations, we would get that this code takes time O(n · 2^n) to finish.
However, it turns out that this analysis is not very good, and in fact will overestimate the time complexity of this code. It is indeed true that this code runs in time O(n · 2^n), but remember that big-O notation gives an upper bound on the runtime of a piece of code. This means that the growth rate of this code's runtime is no greater than the growth rate of n · 2^n, but it doesn't mean that this is a tight bound. In fact, if we look at the code more precisely, we can get a better bound.
Let's begin by trying to do some better accounting for the work done. The work in this loop can be split apart into two smaller pieces:
The work done in the header of the loop, which increments i and tests whether the loop is done.
The work done in the body of the loop, which concatenates the string with itself.
Here, when accounting for the work in these two spots, we will account for the total amount of work done across all iterations, not just in one iteration.
Let's look at the first of these - the work done by the loop header. This will run exactly n times. Each time, this part of the code will do only O(1) work incrementing i, testing it against n, and deciding whether to continue with the loop. Therefore, the total work done here is Θ(n).
Now let's look at the loop body. As we saw before, iteration k creates a string of length 2^(k+1), which takes time roughly 2^(k+1). If we sum this up across all iterations, we get that the work done is (roughly speaking)
2^1 + 2^2 + 2^3 + ... + 2^(n+1).
So what is this sum? Previously, we got a bound of O(n · 2^n) by noting that
2^1 + 2^2 + 2^3 + ... + 2^(n+1)
< 2^(n+1) + 2^(n+1) + 2^(n+1) + ... + 2^(n+1)
= n · 2^(n+1) = 2(n · 2^n) = Θ(n · 2^n)
However, this is a very weak upper bound. If we're more observant, we can recognize the original sum as the sum of a geometric series, where a = 2 and r = 2. Given this, the sum of these terms can be worked out to be exactly
2^(n+2) - 2 = 4(2^n) - 2 = Θ(2^n)
In other words, the total work done by the body of the loop, across all iterations, is Θ(2^n).
The total work done by the loop is given by the work done in the loop maintenance plus the work done in the body of the loop. This works out to Θ(2^n) + Θ(n) = Θ(2^n). Therefore, the total work done by the loop is Θ(2^n). This grows very quickly, but nowhere near as rapidly as n · 2^n, which is what our original analysis gave us.
In short, when analyzing a loop, you can always get a conservative upper bound by multiplying the number of iterations of the loop by the maximum work done on any one iteration of that loop. However, doing a more precisely analysis can often give you a much better bound.
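You can check the geometric-sum accounting by tallying the characters copied; a sketch (chars_copied is my own name, counting one unit of work per character in each new concatenation):

```python
def chars_copied(n):
    # Total characters written by n rounds of s = s + s, starting from "a".
    total = 0
    length = 1
    for _ in range(n):
        length *= 2          # the concatenation doubles the length
        total += length      # and copies that many characters
    return total

assert chars_copied(3) == 2 + 4 + 8
assert chars_copied(10) == 2**11 - 2  # geometric sum: Θ(2^n), not Θ(n · 2^n)
```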
Hope this helps!
