Analyzing the time complexity of Coin changing - dynamic-programming

We're doing the classic problem of determining the number of ways that we can make change that amounts to Z given a set of coins.
For example, Amount=5 and Coins={1, 2, 3}. One way we can make 5 is {2, 3}.
The naive recursive solution looks like it runs in factorial time:
T(n) = n * T(n-1), which solves to T(n) = O(n!).
My professor argued that it actually has a time complexity of O(2^n), because for each coin we only choose whether to use it or not. That intuitively makes sense. However, why doesn't my recurrence work out to O(2^n)?
EDIT:
My recurrence is as follows:
            f(5, {1, 2, 3})
           /              \
f(4, {2, 3})          f(3, {1, 3})          .....
Notice how the branching factor decreases by 1 at every step.
Formally: T(n) = n * T(n-1), which solves to T(n) = O(n!).

The recurrence doesn't work out to what you expect because it doesn't reflect the number of operations the algorithm actually performs.
If the algorithm decides, for each coin, whether to output it or not, then you can model its time complexity with the recurrence T(n) = 2*T(n-1) + O(1), with T(1) = O(1); the intuition is that for each coin you have two options, output the coin or not, and this solves to T(n) = O(2^n).
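A minimal sketch of that take/not-take scheme, assuming each coin may be used at most once (the function name and signature are my own, not from the question):

def count_ways(coins, amount, k=0):
    # Each coin is either taken once or skipped, so the recursion tree
    # has branching factor 2 and depth len(coins): O(2^n) calls.
    if amount == 0:
        return 1
    if amount < 0 or k == len(coins):
        return 0
    # Take coin k and move past it, or leave it and move past it.
    return (count_ways(coins, amount - coins[k], k + 1)
            + count_ways(coins, amount, k + 1))

print(count_ways([1, 2, 3], 5))  # prints 1: only {2, 3} sums to 5 without reuse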

I too was trying to analyze the time complexity of the brute force solution, which performs a depth-first search:
def countCombinations(coins, n, amount, k=0):
    if amount == 0:
        return 1
    res = 0
    for i in range(k, n):
        if coins[i] <= amount:
            # consider this coin and recurse on the remaining sum;
            # the next round may include this same coin again
            remaining_amount = amount - coins[i]
            res += countCombinations(coins, n, remaining_amount, i)
    return res
We can see that a coin used in one round may be used again in the next round, so at each stage we have up to n coins to pick from. This is like permutation with repetition: n^r arrangements of n items over r positions.
ex: [1, 1, 1, 1]; sum = 4
This generates a recursion tree in which, along the first path, we keep finding solutions at every diverging subpath until sum = 0. Each stage on the way towards the sum branches into up to n subpaths, and the depth is bounded by the sum, so the time complexity is O(n^sum).
Note, however, that there is another algorithm which uses the take/not-take approach, where each node of the recursion tree has at most 2 branches. Every call either drops a coin (at most n times) or reduces the remaining sum (at most m times for coin values >= 1), so the depth is at most n + m and the time complexity is O(2^(n+m)).
ex: say coins = [1, 1] and sum = 2; there are 11 nodes to visit in the recursion tree over 6 paths (leaves), and the bound gives 2^(2+2) => 2^4 => 16 (so visiting 11 nodes against a maximum of 16 is correct, but the upper bound is a little loose).
def get_count(coins, n, sum):
    if n == 0:  # no coins left to try against the remaining sum
        return 0
    if sum == 0:  # no more sum left to match: our trial is a complete combination
        return 1  # (success)
    # don't include the last coin in the sum calculation: leave it and try the rest
    excluded = get_count(coins, n - 1, sum)
    included = 0
    if coins[n-1] <= sum:
        # include the last coin in the sum calculation, so reduce the sum by its value;
        # n stays the same because the supply is unlimited (we can choose the same coin again and again)
        included = get_count(coins, n, sum - coins[n-1])
    return included + excluded
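For the amount-5 example from the first question, this counts the five multisets {1,1,1,1,1}, {1,1,1,2}, {1,1,3}, {1,2,2} and {2,3}:

print(get_count([1, 2, 3], 3, 5))  # prints 5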


Program does not run faster as expected when checking far fewer numbers for finding primes

I made a program to find primes below a given number.
number = int(input("Enter number: "))
prime_numbers = [2]  # First prime is needed.
for number_to_be_checked in range(3, number + 1):
    square_root = number_to_be_checked ** 0.5
    for checker in prime_numbers:
        # 'checker' runs over every prime below 'number_to_be_checked',
        # because we append every prime we find to the 'prime_numbers' list.
        if checker > square_root:
            prime_numbers.append(number_to_be_checked)
            break
        elif number_to_be_checked % checker == 0:
            break
print(prime_numbers)
This program checks every number below the number given as input. But primes greater than 3 are of the form 6k ± 1 only. Therefore, instead of checking all the numbers, I defined a generator that generates all the numbers of the form 6k ± 1 below the given number. (I also added 3 to the prime_numbers list when initializing it, since 2 and 3 cannot be of the form 6k ± 1.)
from typing import Iterator

def potential_primes(number: int) -> Iterator[int]:
    """Generate the numbers with the potential to be prime."""
    # Primes above 3 are always of the form 6k ± 1.
    number_for_function = number // 6
    for k in range(1, number_for_function + 1):
        yield 6*k - 1
        yield 6*k + 1
Obviously, the program should have been much faster because I am checking far fewer numbers. But, counterintuitively, the program is slower than before. What could be the reason behind this?
In every six numbers, three are even and one is a multiple of 3. The other two are coprime to 6, so they are potentially prime:
6k+0   6k+1   6k+2   6k+3   6k+4   6k+5
even          even          even
 3x                   3x
For the three evens your primality check uses only one division (by 2), and for the multiple of 3, two divisions. In all, five divisions that you seek to avoid.
But each call to a generator has its cost too. If you just replace the call to range with a call to your generator, but leave the other code as is(*), you are not realizing the full savings potential.
Why? Because (*) in that case, while you indeed test only 1/3 of the numbers now, you still test each of them for divisibility by 2 and 3. Needlessly. And apparently the cost of using the generator is too high.
The point of this technique, known as wheel factorization, is to not test the 6-coprime numbers (in this case) against the primes which are already known, by construction, not to be their divisors.
Thus, you must start with e.g. prime_numbers = [5, 7] and use that in your divisibility-testing loop, not all the primes, which start with 2 and 3 and which you do not need.
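Putting the pieces together, a minimal sketch (my own wiring of the question's generator with the adjusted seed list; assumes number >= 2):

def primes_by_wheel(number):
    prime_numbers = [5, 7]  # seed: the first two 6-coprime primes
    for candidate in potential_primes(number):
        if candidate in (5, 7):
            continue  # already seeded
        square_root = candidate ** 0.5
        for checker in prime_numbers:
            if checker > square_root:
                prime_numbers.append(candidate)
                break
            if candidate % checker == 0:
                break
    # 2 and 3 are the wheel's base primes; 6k+1 can also overshoot 'number'
    return [p for p in [2, 3] + prime_numbers if p <= number]

No candidate is ever divided by 2 or 3, which is where the savings come from.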
Using a nested for loop together with the square-root check is heavy on computation; look instead at the prime sieve algorithm, which is much faster but does take some memory.
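For reference, a minimal Sieve of Eratosthenes sketch (my own illustration, not part of the answer above):

def sieve_primes(n):
    # is_prime[i] stays True while i is still a candidate prime.
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            # Strike out every multiple of i, starting at i*i.
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]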
One way to use the 6n±1 idea is to alternate step sizes in the main loop by stepping 2 then 4. My Python is not good, so this is pseudocode:
function listPrimes(n)
    // Deal with low numbers.
    if (n < 2) return []
    if (n = 2) return [2]
    if (n = 3) return [2, 3]
    // Main loop
    primeList ← [2, 3]
    index ← 5    // We have checked 2 and 3 already.
    step ← 2     // Starting step value: 5 + 2 = 7.
    while (index <= n) {
        if (isPrime(index)) {
            primeList.add(index)
        }
        index ← index + step
        step ← 6 - step    // Alternate steps of 2 and 4.
    }
    return primeList
end function
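Translated into Python (a hedged sketch of the pseudocode above; isPrime is inlined as trial division by the primes found so far):

def list_primes(n):
    if n < 2:
        return []
    prime_list = [2, 3]
    index, step = 5, 2  # candidates 5, 7, 11, 13, ... (the 6k ± 1 wheel)
    while index <= n:
        root = index ** 0.5
        is_prime = True
        for p in prime_list:
            if p > root:
                break
            if index % p == 0:
                is_prime = False
                break
        if is_prime:
            prime_list.append(index)
        index += step
        step = 6 - step  # alternate steps of 2 and 4
    return [p for p in prime_list if p <= n]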

Maximum sum of XOR operations on a selected element with the array elements, with an optimized approach

Problem: choose an element from the array that maximizes the sum of XORing it with all the elements of the array.
Input for problem statement:
N=3
A=[15,11,8]
Output:
11
Approach:
Choosing 15 gives (15^15)+(15^11)+(15^8) = 0+4+7 = 11.
My code for the brute-force approach:
def compute(N, A):
    ans = 0
    for i in A:
        xor_sum = 0
        for j in A:
            xor_sum += (i ^ j)
        if xor_sum > ans:
            ans = xor_sum
    return ans
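For the sample input above:

print(compute(3, [15, 11, 8]))  # prints 11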
The above approach gives the correct answer, but I want to optimize it down to O(n) time complexity. Please help me get there.
If your integers have a fixed (constant) number of bits c, then it should be possible, because O(c) = O(1). For simplicity I assume unsigned integers and odd n. If n is even then we sometimes have to check both paths in the tree (see the solution below). You can adapt the algorithm to cover even n and negative numbers.
1. Find the max of the array of length n: O(n).
2. If max == 0, return 0 (there are only 0s in the array).
3. Find the position p of the most significant bit of max: O(c) = O(1).
    p = -1
    while (max != 0)
        p++
        max /= 2
so 1 << p gives a mask for the highest set bit.
4. Build a tree where the leaves are the numbers and every level stands for a bit position: there is an edge to the left from the root if some number has bit p set, and an edge to the right if some number has bit p not set; on the next level there is an edge to the left if some number below that node has bit p-1 set, and an edge to the right if bit p-1 is not set, and so on. This can be done in O(cn) = O(n).
5. Go through the array and count how many times the bit at position i (i from 0 to p) is set => sum array: O(cn) = O(n).
6. Assign the root of the tree to a node x, then for each i from p down to 0 do the following:
    if x has only one edge => x becomes its only child node
    else if sum[i] > n / 2 => x becomes its right child node
    else => x becomes its left child node
In this step we choose the path through the tree that gives us the most ones when XORing: O(cn) = O(n).
7. XOR all the elements in the array with the value of x and sum them up to get the result. Actually, you could have built the result already in the previous step, by adding (n - sum[i]) * (1 << i) to the result when going left and sum[i] * (1 << i) when going right: O(n).
All the sequential steps are O(n) and therefore overall the algorithm is also O(n).
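The per-bit counts from step 5 are already enough for a simple O(nc) = O(n) version without the tree, since for any candidate x the XOR sum is determined by how many elements have each bit set. A minimal sketch (my own illustration of that counting idea, not the tree construction above):

def max_xor_sum(A):
    # Assumes a non-empty array of non-negative integers.
    n = len(A)
    c = max(A).bit_length()
    # ones[i] = how many elements have bit i set.
    ones = [sum((a >> i) & 1 for a in A) for i in range(c)]
    best = 0
    for x in A:
        # If x's bit i is 0, bit i survives the XOR in ones[i] elements;
        # if it is 1, the bit is set in the other n - ones[i] elements.
        total = sum(((n - ones[i]) if (x >> i) & 1 else ones[i]) << i
                    for i in range(c))
        best = max(best, total)
    return best

print(max_xor_sum([15, 11, 8]))  # prints 11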

Number of Different Subsequences GCDs

You are given an array nums that consists of positive integers.
The GCD of a sequence of numbers is defined as the greatest integer that divides all the numbers in the sequence evenly.
For example, the GCD of the sequence [4,6,16] is 2.
A subsequence of an array is a sequence that can be formed by removing some elements (possibly none) of the array.
For example, [2,5,10] is a subsequence of [1,2,1,2,4,1,5,10].
Return the number of different GCDs among all non-empty subsequences of nums.
Example 1:
Input: nums = [6,10,3]
Output: 5
Explanation: The figure shows all the non-empty subsequences and their GCDs.
The different GCDs are 6, 10, 3, 2, and 1.
from itertools import combinations
from typing import List
import math

class Solution:
    def countDifferentSubsequenceGCDs(self, nums: List[int]) -> int:
        s = set()
        for length in range(1, len(nums) + 1):
            for comb in combinations(nums, length):
                if len(comb) == 1:
                    s.add(comb[0])
                else:
                    g = math.gcd(comb[0], comb[1])
                    s.add(g)
                    for j in range(2, len(comb)):
                        g = math.gcd(g, comb[j])
                        s.add(g)
        return len(s)
I am getting TLE for this input. Can someone please help?
[5852,6671,170275,141929,2414,99931,179958,56781,110656,190278,7613,138315,58116,114790,129975,144929,61102,90624,60521,177432,57353,199478,120483,75965,5634,109100,145872,168374,26215,48735,164982,189698,77697,31691,194812,87215,189133,186435,131282,110653,133096,175717,49768,79527,74491,154031,130905,132458,103116,154404,9051,125889,63633,194965,105982,108610,174259,45353,96240,143865,184298,176813,193519,98227,22667,115072,174001,133281,28294,42913,136561,103090,97131,128371,192091,7753,123030,11400,80880,184388,161169,155500,151566,103180,169649,44657,44196,131659,59491,3225,52303,141458,143744,60864,106026,134683,90132,151466,92609,120359,70590,172810,143654,159632,191208,1497,100582,194119,134349,33882,135969,147157,53867,111698,14713,126118,95614,149422,145333,52387,132310,108371,127121,93531,108639,90723,416,141159,141587,163445,160551,86806,120101,157249,7334,60190,166559,46455,144378,153213,47392,24013,144449,66924,8509,176453,18469,21820,4376,118751,3817,197695,198073,73715,65421,70423,28702,163789,48395,90289,76097,18224,43902,41845,66904,138250,44079,172139,71543,169923,186540,77200,119198,184190,84411,130153,124197,29935,6196,81791,101334,90006,110342,49294,67744,28512,66443,191406,133724,54812,158768,113156,5458,59081,4684,104154,38395,9261,188439,42003,116830,184709,132726,177780,111848,142791,57829,165354,182204,135424,118187,58510,137337,170003,8048,103521,176922,150955,84213,172969,165400,111752,15411,193319,78278,32948,55610,12437,80318,18541,20040,81360,78088,194994,41474,109098,148096,66155,34182,2224,146989,9940,154819,57041,149496,120810,44963,184556,163306,133399,9811,99083,52536,90946,25959,53940,150309,176726,113496,155035,50888,129067,27375,174577,102253,77614,132149,131020,4509,85288,160466,105468,73755,4743,41148,52653,85916,147677,35427,88892,112523,55845,69871,176805,25273,99414,143558,90139,180122,140072,127009,139598,61510,17124,190177,10591,22199,34870,44485,43661,141089,55829,70258,198998,87094,157342,132616,66924,96498,88828,89204,29862,76341,61654,158331,187462,128135,35481,152033,144487,27336,84077,10260,106588,19188,99676,38622,32773,89365,30066,161268,153986,99101,20094,149627,144252,58646,148365,21429,69921,95655,77478,147967,140063,29968,120002,72662,28241,11994,77526,3246,160872,175745,3814,24035,108406,30174,10492,49263,62819,153825,110367,42473,30293,118203,43879,178492,63287,41667,195037,26958,114060,99164,142325,77077,144235,66430,186545,125046,82434,26249,54425,170932,83209,10387,7147,2755,77477,190444,156388,83952,117925,102569,82125,104479,16506,16828,83192,157666,119501,29193,65553,56412,161955,142322,180405,122925,173496,93278,67918,48031,141978,54484,80563,52224,64588,94494,21331,73607,23440,197470,117415,23722,170921,150565,168681,88837,59619,102362,80422,10762,85785,48972,83031,151784,79380,64448,87644,26463,142666,160273,151778,156229,24129,64251,57713,5341,63901,105323,18961,70272,144496,18591,191148,19695,5640,166562,2600,76238,196800,94160,129306,122903,40418,26460,131447,86008,20214,133503,174391,45415,47073,39208,37104,83830,80118,28018,185946,134836,157783,76937,33109,54196,37141,142998,189433,8326,82856,163455,176213,144953,195608,180774,53854,46703,78362,113414,140901,41392,12730,187387,175055,64828,66215,16886,178803,117099,112767,143988,65594,141919,115186,141050,118833,2849]
I'm going to add "an answer" here because most "not horribly slow" programs I've seen for this are way too elaborate.
Call the input xs. The fastest way I know of asks, for each integer j in 1 through max(xs), can j be the gcd of some non-empty subset of xs? Of course if max(xs) can be huge, that can be slow. But in the context you apparently took this from (LeetCode), it cannot be huge.
So, given j, how do we know whether some subset's gcd is j? Actually easy! We look at all and only the multiples of j in xs. The gcd of all of those is at least j. If, at any point along the way, their gcd so far is j, we found a subset whose gcd is j. Else the running gcd exceeds j after processing all of j's multiples, so no subset's gcd is j.
def numgcds(xs):
    from math import gcd
    limit = max(xs) + 1
    result = 0
    xsset = set(xs)
    for j in range(1, limit):
        g = 0
        for x in range(j, limit, j):
            if x in xsset:
                g = gcd(x, g)
                if g == j:
                    result += 1
                    break
    return result
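As a quick check against Example 1 above:

print(numgcds([6, 10, 3]))  # prints 5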
Where L is max(xs), worst-case runtime is O(L * log(L)). Across outer loop iterations, the inner loop goes around (at worst) L times at first, then L/2 times, then L/3, and so on. That sums to L*(1/1 + 1/2 + 1/3 + ... + 1/L). The second factor (the sum of reciprocals) is the L'th "harmonic number", and is approximately the natural logarithm of L.
More Gonzo
I don't really like having the runtime depend on the largest integer in the input. For example, numgcds([20000000]) takes 20 million iterations of the outer loop to determine that there's only one gcd, and can take appreciable time (about 30 seconds on my box just now).
Instead, with more code, we can build some dicts that eliminate all searching. For each divisor d of an integer in xs, d2xs[d] is the list of multiples of d in xs. The keys of d2xs are the only possible gcds we need to check, and a key's associated values are exactly (no searching needed) the multiples of the key in xs.
The collection of all possible divisors of all integers in xs can be found by factoring each integer in xs, and generating all possible combinations of its factors' prime powers.
This is harder to code, but can run very much faster. numgcds([20000000]) is essentially instant. And it runs about 10 times faster for the largish example you gave.
def gendivisors(x):
    from collections import Counter
    from itertools import product
    from math import prod
    c = Counter(factor(x))
    pows = []
    for p, k in c.items():
        pows.append([p**i for i in range(k+1)])
    for t in product(*pows):
        yield prod(t)

def numgcds(xs):
    from math import gcd
    from collections import defaultdict
    d2xs = defaultdict(list)
    for x in xs:
        for d in gendivisors(x):
            d2xs[d].append(x)
    result = 0
    for j, mults in d2xs.items():
        g = 0
        for x in mults:
            g = gcd(x, g)
            if g == j:
                result += 1
                break
    return result
I'm not including code for factor(n) - pick your favorite. The code requires it return an iterable (list, generator iterator, tuple, doesn't matter) of all n's prime factors. Order doesn't matter. As special cases, list(factor(i)) should return [i] for i equal to 0 or 1.
For ordinary cases, list(factor(p)) == [p] for a prime p, and, e.g., sorted(factor(20)) == [2, 2, 5].
Worst-case timing is much harder to nail, but the key bit is that a reasonable implementation of factor(n) will have worst-case time O(sqrt(n)).
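For completeness, one simple trial-division factor() that satisfies that contract (a sketch; the answer above deliberately leaves the choice open):

def factor(n):
    # Special cases per the contract: 0 and 1 yield themselves.
    if n < 2:
        yield n
        return
    d = 2
    while d * d <= n:  # O(sqrt(n)) worst case
        while n % d == 0:
            yield d
            n //= d
        d += 1
    if n > 1:
        yield n  # whatever remains is prime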

What is the time complexity of this algorithm (that solves leetcode question 650) (question 2)?

Hello, I have been working on https://leetcode.com/problems/2-keys-keyboard/ and came upon this dynamic programming question.
You start with an 'A' on a blank page and you are given a number n; when you are done you should have n 'A's on the page. The catch is that you are allowed only 2 operations: copy (and you can only copy the total number of 'A's currently on the page) and paste. Find the minimum number of operations to get n 'A's on the page.
I solved this problem but then found a better solution in the discussion section of leetcode, and I can't figure out its time complexity.
def minSteps(self, n):
    factors = 0
    i = 2
    while i <= n:
        while n % i == 0:
            factors += i
            n //= i  # integer division keeps n an int
        i += 1
    return factors
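(For example, minSteps(9): i = 2 divides nothing; i = 3 divides twice, adding 3 + 3, so it returns 6, which matches the known answer for nine 'A's: copy, paste, paste, copy, paste, paste.)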
The way this works: i is never going to be bigger than the biggest prime factor p of n, so the outer loop is O(p), and the inner while loop is basically O(log n) since we divide n by i at each iteration.
But the way I look at it, we do O(log n) divisions in total in the inner loop while the outer loop is O(p), so using aggregate analysis this function is basically O(max(p, log n)). Is this correct?
Any help is welcome.
Your reasoning is correct: O(max(p, log n)) gives the time complexity, assuming that arithmetic operations take constant time. This assumption is not true for arbitrarily large n, which would not fit in the machine's fixed-size number storage, and where you would need big-integer operations that have non-constant time complexity. But I will ignore that.
It is still odd to express the complexity in terms of p when that is not the input (but derived from it). Your input is only n, so it makes sense to express the complexity in terms of n alone.
Worst Case
Clearly, when n is prime, the algorithm is O(n) -- the inner loop never iterates.
For a prime n, the algorithm will take more time than for n+1, as even the smallest factor of n+1 (i.e. 2) will halve the number of iterations of the outer loop, and yet only add 1 block of constant work in the inner loop.
So O(n) is the worst case.
Average Case
For the average case, we note that the division of n happens just as many times as n has prime factors (counting duplicates). For example, for n = 12, we have 3 divisions, as n = 2·2·3
The average number of prime factors for 1 < n < x approaches log log n + B, where B is some constant. So we could say the average time complexity for the total execution of the inner loop is O(log log n).
We need to add to that the execution of the outer loop. This corresponds to the average greatest prime factor. For 1 < n < x this average approaches C·n/log n, and so we have:
O(n/log n + log log n)
Now n/log n is the more important term here, so this simplifies to:
O(n/log n)

How can I reduce the time complexity of the given python code?

I have this Python program which computes the "square-free numbers" of a given number. I'm facing a problem regarding time complexity: I'm getting a "Time Limit Exceeded" error in an online judge.
number = int(input())
factors = []
perfectSquares = []
count = 0
total_len = 0
# Find all the factors of the given number
for i in range(1, number):
    if number % i == 0:
        factors.append(i)
# Find the total number of factors
total_len = len(factors)
for items in factors:
    for i in range(1, total_len):
        # Eliminate perfect square numbers
        if items == i * i:
            if items == 1:
                factors.remove(items)
                count += 1
            else:
                perfectSquares.append(items)
                factors.remove(items)
                count += 1
# Eliminate factors that are divisible by the perfect squares
for i in factors:
    for j in perfectSquares:
        if i % j == 0:
            count += 1
# Print the total of square-free numbers
total_len -= count
print(total_len)
How can I reduce the time complexity of this program? That is, how can I cut down the for loops so that it runs within the time limit?
Algorithmic techniques for reducing the time complexity (TC) of Python code.
In order to reduce the time complexity of a piece of code, it is necessary to reduce the usage of loops whenever and wherever possible.
I'll divide your code's logic into 5 sections and suggest an optimization in each one of them.
Section 1 - Declaration of Variables and taking input
number = int(input())
factors = []
perfectSquares = []
count = 0
total_len = 0
You can easily omit the declarations of perfectSquares, count and total_len, as they aren't needed, as explained further below. This will reduce both the time and space complexity of your code.
Also, you can use fast IO in order to speed up inputs and outputs.
This is done by using stdin.readline and stdout.write.
Section 2 - Finding All factors
for i in range(1, number):
    if number % i == 0:
        factors.append(i)
Here you can use a list comprehension to create the factor list, because list comprehensions are faster than looping statements.
Also, you can iterate only up to the square root of the number, instead of looping up to the number itself, thereby reducing the work from O(n) to O(√n).
The code section above boils down to...
After applying hack 1:
factors = [i for i in range(1, number) if number % i == 0]
After applying hack 2 - use chain.from_iterable (from itertools) to store both members of each factor pair per iteration of the list comprehension:
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5)+1)
    if number % i == 0
))
Section 3 - Eliminating perfect squares
# Find the total number of factors
total_len = len(factors)
for items in factors:
    for i in range(1, total_len):
        # Eliminate perfect square numbers
        if items == i * i:
            if items == 1:
                factors.remove(items)
                count += 1
            else:
                perfectSquares.append(items)
                factors.remove(items)
                count += 1
Actually, you can completely omit this part and just add an additional condition to Section 2, namely type(i**0.5) != int, to eliminate the numbers which have integer square roots and are hence perfect squares themselves.
Implement it as follows...
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5)+1)
    if number % i == 0 and type(i**0.5) != int
))
Section 4 - I think this section isn't needed, because square-free numbers don't have such a restriction
Section 5 - Finalizing and printing the count
There's absolutely no need for a counter; you can just compute the length of the factors list and use it as the count.
OPTIMISED CODES
Way 1 - Little Faster
number = int(input())
# Find factors of the given number
factors = []
for i in range(2, int(number**0.5)+1):
    if number % i == 0 and type(i**0.5) != int:
        factors.extend([i, int(number/i)])
print([1] + factors)
Way 2 - Optimal Programming - Very Fast
from itertools import chain
from sys import stdin, stdout

number = int(stdin.readline())
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5)+1)
    if number % i == 0 and type(i**0.5) != int
))
stdout.write(', '.join(map(str, [1] + factors)))
First of all, you only need to check for i in range(1, number//2 + 1):, since anything greater than number/2 cannot be a factor (other than number itself).
Second, you can compute the perfect squares that could divide a factor in sublinear time:
import math

squares = []
for i in range(2, math.floor(math.sqrt(number / 2)) + 1):
    squares.append(i ** 2)
Third, you can search for factors and when you find one, check that it is not divisible by a square, and only then add it to the list of factors.
This approach will save you all the time of your for items in factors nested loop block, as well as the next block. I'm not sure if it will definitely be faster, but it is less wasteful.
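Putting those three steps together, a minimal sketch (my own reading of the approach described above):

import math

number = int(input())
# Perfect squares (greater than 1) that could divide a factor of 'number'.
squares = [i * i for i in range(2, math.floor(math.sqrt(number / 2)) + 1)]
factors = []
for i in range(1, number // 2 + 1):
    # Keep i only if it divides 'number' and no perfect square divides i.
    if number % i == 0 and not any(i % s == 0 for s in squares):
        factors.append(i)
print(len(factors))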
I used the code provided in the first answer above but it didn't give me the correct answer (type(i**0.5) != int is always true, since i**0.5 is always a float). The following actually computes the square-free list of factors of a number:
import math

number = int(input())
factors = [
    i for i in range(2, int(number/2) + 1)
    if number % i == 0 and int(math.sqrt(i)) ** 2 != i
]
print([1] + factors)
