Number of Different Subsequences GCDs
You are given an array nums that consists of positive integers.
The GCD of a sequence of numbers is defined as the greatest integer that divides all the numbers in the sequence evenly.
For example, the GCD of the sequence [4,6,16] is 2.
A subsequence of an array is a sequence that can be formed by removing some elements (possibly none) of the array.
For example, [2,5,10] is a subsequence of [1,2,1,2,4,1,5,10].
Return the number of different GCDs among all non-empty subsequences of nums.
Example 1:
Input: nums = [6,10,3]
Output: 5
Explanation: The figure (not reproduced here) shows all the non-empty subsequences and their GCDs.
The different GCDs are 6, 10, 3, 2, and 1.
from itertools import combinations  # the original imported permutations but used combinations
from math import gcd
from typing import List

class Solution:
    def countDifferentSubsequenceGCDs(self, nums: List[int]) -> int:
        s = set()
        # Brute force: enumerate every non-empty subsequence and record its GCD.
        for size in range(1, len(nums) + 1):
            for comb in combinations(nums, size):
                g = comb[0]
                for x in comb[1:]:
                    g = gcd(g, x)
                s.add(g)
        return len(s)
I am getting TLE for this input. Can someone please help?
[5852,6671,170275,141929,2414,99931,179958,56781,110656,190278,7613,138315,58116,114790,129975,144929,61102,90624,60521,177432,57353,199478,120483,75965,5634,109100,145872,168374,26215,48735,164982,189698,77697,31691,194812,87215,189133,186435,131282,110653,133096,175717,49768,79527,74491,154031,130905,132458,103116,154404,9051,125889,63633,194965,105982,108610,174259,45353,96240,143865,184298,176813,193519,98227,22667,115072,174001,133281,28294,42913,136561,103090,97131,128371,192091,7753,123030,11400,80880,184388,161169,155500,151566,103180,169649,44657,44196,131659,59491,3225,52303,141458,143744,60864,106026,134683,90132,151466,92609,120359,70590,172810,143654,159632,191208,1497,100582,194119,134349,33882,135969,147157,53867,111698,14713,126118,95614,149422,145333,52387,132310,108371,127121,93531,108639,90723,416,141159,141587,163445,160551,86806,120101,157249,7334,60190,166559,46455,144378,153213,47392,24013,144449,66924,8509,176453,18469,21820,4376,118751,3817,197695,198073,73715,65421,70423,28702,163789,48395,90289,76097,18224,43902,41845,66904,138250,44079,172139,71543,169923,186540,77200,119198,184190,84411,130153,124197,29935,6196,81791,101334,90006,110342,49294,67744,28512,66443,191406,133724,54812,158768,113156,5458,59081,4684,104154,38395,9261,188439,42003,116830,184709,132726,177780,111848,142791,57829,165354,182204,135424,118187,58510,137337,170003,8048,103521,176922,150955,84213,172969,165400,111752,15411,193319,78278,32948,55610,12437,80318,18541,20040,81360,78088,194994,41474,109098,148096,66155,34182,2224,146989,9940,154819,57041,149496,120810,44963,184556,163306,133399,9811,99083,52536,90946,25959,53940,150309,176726,113496,155035,50888,129067,27375,174577,102253,77614,132149,131020,4509,85288,160466,105468,73755,4743,41148,52653,85916,147677,35427,88892,112523,55845,69871,176805,25273,99414,143558,90139,180122,140072,127009,139598,61510,17124,190177,10591,22199,34870,44485,43661,141089,55829,70258,198998,87094,157342,132616,66924,96498,88828,89204,29862,76341,61654,158331,187462,128135,35481,152033,144487,27336,84077,10260,106588,19188,99676,38622,32773,89365,30066,161268,153986,99101,20094,149627,144252,58646,148365,21429,69921,95655,77478,147967,140063,29968,120002,72662,28241,11994,77526,3246,160872,175745,3814,24035,108406,30174,10492,49263,62819,153825,110367,42473,30293,118203,43879,178492,63287,41667,195037,26958,114060,99164,142325,77077,144235,66430,186545,125046,82434,26249,54425,170932,83209,10387,7147,2755,77477,190444,156388,83952,117925,102569,82125,104479,16506,16828,83192,157666,119501,29193,65553,56412,161955,142322,180405,122925,173496,93278,67918,48031,141978,54484,80563,52224,64588,94494,21331,73607,23440,197470,117415,23722,170921,150565,168681,88837,59619,102362,80422,10762,85785,48972,83031,151784,79380,64448,87644,26463,142666,160273,151778,156229,24129,64251,57713,5341,63901,105323,18961,70272,144496,18591,191148,19695,5640,166562,2600,76238,196800,94160,129306,122903,40418,26460,131447,86008,20214,133503,174391,45415,47073,39208,37104,83830,80118,28018,185946,134836,157783,76937,33109,54196,37141,142998,189433,8326,82856,163455,176213,144953,195608,180774,53854,46703,78362,113414,140901,41392,12730,187387,175055,64828,66215,16886,178803,117099,112767,143988,65594,141919,115186,141050,118833,2849]
I'm going to add "an answer" here because most "not horribly slow" programs I've seen for this are way too elaborate.
Call the input xs. The fastest way I know of asks, for each integer j in 1 through max(xs), can j be the gcd of some non-empty subset of xs? Of course if max(xs) can be huge, that can be slow. But in the context you apparently took this from (LeetCode), it cannot be huge.
So, given j, how do we know whether some subset's gcd is j? Actually easy! We look at all and only the multiples of j in xs. The gcd of all of those is at least j. If, at any point along the way, the gcd so far is exactly j, we found a subset whose gcd is j. Otherwise, after processing all of j's multiples, the running gcd is either 0 (no multiples at all) or some multiple of j strictly greater than j, so no subset's gcd is j.
def numgcds(xs):
    from math import gcd
    limit = max(xs) + 1
    result = 0
    xsset = set(xs)
    for j in range(1, limit):
        g = 0
        # Fold gcd over the multiples of j that are present in xs.
        for x in range(j, limit, j):
            if x in xsset:
                g = gcd(x, g)
                if g == j:
                    result += 1
                    break
    return result
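As a quick sanity check against the question's example, numgcds([6, 10, 3]) returns 5 (the distinct GCDs are 1, 2, 3, 6, and 10).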
Where L is max(xs), worst-case runtime is O(L * log(L)). Across outer loop iterations, the inner loop goes around (at worst) L times at first, then L/2 times, then L/3, and so on. That sums to L*(1/1 + 1/2 + 1/3 + ... + 1/L). The second factor (the sum of reciprocals) is the L'th "harmonic number", and is approximately the natural logarithm of L.
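If you want to sanity-check that bound empirically, here is a small sketch (my addition, not part of the answer's code) that counts the total number of inner-loop trips directly and compares it with L * ln(L):

from math import log

L = 200_000
steps = sum(L // j for j in range(1, L + 1))  # inner-loop trips summed over all j
print(steps, round(L * log(L)))  # roughly 2.47 million vs 2.44 million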
More Gonzo
I don't really like having the runtime depend on the largest integer in the input. For example, numgcds([20000000]) takes 20 million iterations of the outer loop to determine that there's only one gcd, and can take appreciable time (about 30 seconds on my box just now).
Instead, with more code, we can build some dicts that eliminate all searching. For each divisor d of an integer in xs, d2xs[d] is the list of multiples of d in xs. The keys of d2xs are the only possible gcds we need to check, and a key's associated values are exactly (no searching needed) the multiples of the key in xs.
The collection of all possible divisors of all integers in xs can be found by factoring each integer in xs, and generating all possible combinations of its factors' prime powers.
This is harder to code, but can run very much faster. numgcds([20000000]) is essentially instant. And it runs about 10 times faster for the largish example you gave.
def gendivisors(x):
    from collections import Counter
    from itertools import product
    from math import prod
    c = Counter(factor(x))
    pows = []
    for p, k in c.items():
        pows.append([p**i for i in range(k + 1)])
    for t in product(*pows):
        yield prod(t)
def numgcds(xs):
    from math import gcd
    from collections import defaultdict
    d2xs = defaultdict(list)
    for x in xs:
        for d in gendivisors(x):
            d2xs[d].append(x)
    result = 0
    for j, mults in d2xs.items():
        g = 0
        for x in mults:
            g = gcd(x, g)
            if g == j:
                result += 1
                break
    return result
I'm not including code for factor(n) - pick your favorite. The code requires it return an iterable (list, generator iterator, tuple, doesn't matter) of all n's prime factors. Order doesn't matter. As special cases, list(factor(i)) should return [i] for i equal to 0 or 1.
For ordinary cases, list(factor(p)) == [p] for a prime p, and, e.g., sorted(factor(20)) == [2, 2, 5].
Worst-case timing is much harder to nail, but the key bit is that a reasonable implementation of factor(n) will have worst-case time O(sqrt(n)).
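For completeness, here is a minimal trial-division factor() that meets the contract above. This is only a sketch of one possible choice; a production version would at least skip even candidates after 2:

def factor(n):
    # Yield all prime factors of n, with multiplicity, in increasing order.
    # Special cases per the contract above: 0 and 1 yield themselves.
    if n < 2:
        yield n
        return
    d = 2
    while d * d <= n:
        while n % d == 0:
            yield d
            n //= d
        d += 1
    if n > 1:
        yield n  # whatever remains after trial division is prime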
Related
Improving the complexity of Brocard's problem?
Brocard's problem is n! + 1 = m^2. The solutions to this problem are pairs of integers called Brown numbers, e.g. (4,5), of which only three are known. A very literal implementation of Brocard's problem:

import math

def brocard(n, m):
    if math.factorial(n) + 1 == m**2:
        return (n, m)
    else:
        return

a = 10000
for n in range(a):
    for m in range(a):
        b = brocard(n, m)
        if b is not None:
            print(b)

The time complexity of this should be O(n^2) because of the nested for loops with differing variables, plus the complexity of whatever algorithm math.factorial uses (apparently divide-and-conquer). Is there any way to improve upon O(n^2)? There are other interpretations on SO like this. How does the time complexity of this compare with my implementation?
Your algorithm is O(n^3). You have two nested loops, and inside you use factorial(), which has O(n) complexity itself.

Your algorithm tests all (n,m) combinations, even those where factorial(n) and m^2 are far apart, e.g. n=1 and m=10000.

You always recompute factorial(n) deep inside the loop, although it's independent of the inner loop variable m. So it could be moved outside of the inner loop. And instead of always computing factorial(n) from scratch, you could do it incrementally: whenever you increment n by 1, multiply the previous factorial by n.

A different, better approach would be not to use nested loops, but to always keep n and m in a number range so that factorial(n) is close to m^2, to avoid checking number pairs that are vastly off. We can do this by deciding which variable to increment next. If the factorial is smaller, then the next Brocard pair needs a bigger n. If the square is smaller, we need a bigger m. In pseudocode, that would be:

n = 1; m = 1; factorial = 1
while n < 10000 and m < 10000
    if factorial + 1 == m^2
        found a brocard pair
        // the next brocard pair will have different n and m,
        // so we can increment both
        n = n + 1
        factorial = factorial * n
        m = m + 1
    else if factorial + 1 < m^2
        // n is too small for the current m
        n = n + 1
        factorial = factorial * n
    else
        // m is too small for the given n
        m = m + 1

In each loop iteration, we either increment n or m, so we can have at most 20000 iterations. There is no inner loop in the algorithm. We have O(n). So, this should be fast enough for n and m up to the millions range.

P.S. There are still some optimizations possible. Factorials (after n=1, known to have no Brocard pair) are always even numbers, so m^2 must be odd to satisfy the Brocard condition, meaning we can always increment m by 2, skipping the even number in between. For larger n values, the factorial increases much faster than the square. So, instead of incrementing m until its square reaches the factorial+1 value, we could recompute the next plausible m as the integer square root of factorial+1. Or, using the square root approach, just compute the integer square root of factorial(n) + 1 and check whether it matches, without any incremental steps for m.
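For what it's worth, here is that final square-root variant in Python (my own sketch, not the answer's code; it assumes math.isqrt, available since Python 3.8):

from math import isqrt

def brocard_pairs(n_max):
    # Maintain n! incrementally and test n! + 1 for squareness via isqrt,
    # with no incremental stepping of m at all.
    pairs = []
    factorial = 1
    for n in range(1, n_max + 1):
        factorial *= n
        m = isqrt(factorial + 1)
        if m * m == factorial + 1:
            pairs.append((n, m))
    return pairs

print(brocard_pairs(1000))  # [(4, 5), (5, 11), (7, 71)], the three known pairs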
Average of even numbers in a list
Devise an algorithm to compute the following: given a list of numbers, find the average of the even numbers in the list. For example, in [1,2,4,1,2,9,4] the even numbers are 2, 4, 2, and 4, and their average is (2+4+2+4)/4 = 3.
import statistics

l = [1, 2, 4, 1, 2, 9, 4]
result = statistics.mean(x for x in l if x % 2 == 0)

The core of this solution is statistics.mean(), which calculates the mean of an iterable of numbers, combined with a generator expression that keeps only the even elements.
One way to do it is the following:

l = [1, 2, 4, 1, 2, 9, 4]
even = list(filter(lambda elem: elem % 2 == 0, l))
result = sum(even) / len(even)

First you use filter to find all the evens, and then you calculate the average.
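One caveat that applies to both answers (my note, not from either answer): if the list contains no even numbers, statistics.mean() raises StatisticsError and the filter version divides by zero. A guarded sketch:

l = [1, 3, 9]
even = [x for x in l if x % 2 == 0]
result = sum(even) / len(even) if even else None  # None when there are no evens
print(result)  # None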
Construct powerset without complements
Starting from this question I've built this code:

import itertools

n = 4
nodes = set(range(0, n))
ss = set()
for i in range(1, n + 1):
    ss = ss.union(set(itertools.combinations(range(0, n), i)))
ss2 = set()
for s in ss:
    cs = []
    for i in range(0, n):
        if not (i in s):
            cs.append(i)
    cs = tuple(cs)
    if not (s in ss2) and not (cs in ss2):
        ss2.add(s)
ss = ss2

The code constructs all subsets of S = {0,1,...,n-1} (i) without complements (for example, for n=4, either (1,3) or (0,2) is contained, which one does not matter); (ii) without the empty set, but (iii) with S; the result is in ss. Is there a more compact way to do the job? I do not care if the result is a set/list of sets/lists/tuples. (The result contains 2**(n-1) elements.) Additional options: prefer whichever of a subset and its complement has fewer elements; output sorted by increasing size.
When you exclude complements, you actually exclude half of the combinations. So you could imagine generating all combinations and then kicking out the last half of them. You must be sure not to kick out a combination together with its complement, but the way you have them ordered, that will not happen. Further along this idea, you don't even need to generate combinations whose size is more than n/2. For even values of n, you need to halve the list of combinations of size n/2. Here is one way to achieve all that:

import itertools

n = 4
half = n // 2
# generate half of the combinations
ss = [list(itertools.combinations(range(0, n), i)) for i in range(1, half + 1)]
# if n is even, kick out half of the last list
if n % 2 == 0:
    ss[-1] = ss[-1][0:len(ss[-1]) // 2]
# flatten
ss = [y for x in ss for y in x]
# the full set S is the complement of the (excluded) empty set, so add it
# back explicitly to satisfy requirement (iii)
ss.append(tuple(range(n)))
print(ss)
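For n = 4, with the full set re-added as above, this prints [(0,), (1,), (2,), (3,), (0, 1), (0, 2), (0, 3), (0, 1, 2, 3)], which is 2**(n-1) = 8 subsets, matching the count stated in the question.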
Analyzing the time complexity of Coin changing
We're doing the classic problem of determining the number of ways that we can make change amounting to Z, given a set of coins. For example, Amount = 5 and Coins = {1, 2, 3}. One way we can make 5 is {2, 3}.

The naive recursive solution has a time complexity of factorial time: f(n) = n * f(n-1) = n!. My professor argued that it actually has a time complexity of O(2^n), because we only choose to use a coin or not. That intuitively makes sense. However, how come my recurrence doesn't work out to be O(2^n)?

EDIT: My recurrence is as follows:

            f(5, {1, 2, 3})
           /               \
  ... f(4, {2, 3})     f(3, {1, 3}) ...

Notice how the branching factor decreases by 1 at every step. Formally: T(n) = n * T(n-1) = n!
The recurrence doesn't work out to what you expect because it doesn't reflect the number of operations made by the algorithm. If the algorithm decides for each coin whether to output it or not, then you can model its time complexity with the recurrence T(n) = 2*T(n-1) + O(1), with T(1) = O(1). The intuition is that for each coin you have two options: output the coin or not. This solves to T(n) = O(2^n).
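To spell out that last step, unrolling the recurrence in the answer's own notation:

T(n) = 2*T(n-1) + c
     = 4*T(n-2) + 3*c
     = ...
     = 2^(n-1)*T(1) + (2^(n-1) - 1)*c
     = O(2^n)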
I too was trying to analyze the time complexity of the brute force, which performs a depth-first search:

def countCombinations(coins, n, amount, k=0):
    if amount == 0:
        return 1
    res = 0
    for i in range(k, n):
        # (the original checked coins[k] here; coins[i] is the coin actually tried)
        if coins[i] <= amount:
            remaining_amount = amount - coins[i]
            # considering this coin, try for the remaining sum;
            # in the next round, include this coin too
            res += countCombinations(coins, n, remaining_amount, i)
    return res

We can see that the coins used in one round are used again in the next round, so at least for the 1st coin we have n choices at each stage, which is equivalent to permutation with repetition: n^r for n items available to arrange into r positions at each stage. For example, with [1, 1, 1, 1] and sum = 4, this generates a recursion tree where along the first path we literally have solutions at each diverged subpath until we reach sum = 0. So the time complexity is O(n^sum), i.e. for each stage on the path towards the sum we have n different subpaths.

Note, however, that there is another algorithm which uses the take/not-take approach, with at most 2 branches at a node in the recursion tree. Hence the time complexity for this algorithm is O(2^(n*m)). For example, say coins = [1, 1] and sum = 2: there are 11 nodes to visit in the recursion tree for 6 paths (leaves), and the complexity is at most 2^(2*2) => 2^4 => 16 (so visiting 11 nodes against a bound of 16 is correct, but a little loose as an upper bound).

def get_count(coins, n, sum):
    if n == 0:
        # no coins left to try a combination that matches the sum
        return 0
    if sum == 0:
        # no more sum left to match; we have completely coincided with our trial
        return 1  # success
    # don't include the last coin in the sum calc: leave it and try the rest
    excluded = get_count(coins, n - 1, sum)
    included = 0
    if coins[n - 1] <= sum:
        # include the last coin in the sum calc, reducing the sum by its value;
        # the supply of each coin is unlimited (we can choose the same coin again and again)
        included = get_count(coins, n, sum - coins[n - 1])
    return included + excluded
Find primes in a certain range efficiently
This is an algorithm I found for the Sieve of Eratosthenes in Python 3. What I want to do is edit it so that I can input a bottom and a top of a range, along with a list of primes up to the bottom one, and have it output a list of primes within that range. However, I am not quite sure how to do that. If you can help, that would be greatly appreciated.

from math import sqrt

def sieve(end):
    if end < 2:
        return []
    # The array doesn't need to include even numbers
    lng = (end // 2) - 1 + end % 2
    # Create array and assume all numbers in array are prime
    sieve = [True] * (lng + 1)
    # In the following code, you're going to see some funky
    # bit shifting and stuff; this is just transforming i and j
    # so that they represent the proper elements in the array.
    # The transforming is not optimal, and the number of
    # operations involved can be reduced.
    # Only go up to the square root of the end
    for i in range(int(sqrt(end)) >> 1):
        # Skip numbers that aren't marked as prime
        if not sieve[i]:
            continue
        # Unmark all multiples of i, starting at i**2
        for j in range((i * (i + 3) << 1) + 3, lng, (i << 1) + 3):
            sieve[j] = False
    # Don't forget 2!
    primes = [2]
    # Gather all the primes into a list, leaving out the composite numbers
    primes.extend([(i << 1) + 3 for i in range(lng) if sieve[i]])
    return primes
I think the following is working:

from math import ceil, sqrt

def extend_erathostene(A, B, prime_up_to_A):
    sieve = [True] * (B - A)
    for p in prime_up_to_A:
        # first multiple of p greater than or equal to A
        # (note: // is needed for integer division in Python 3)
        m0 = ((A + p - 1) // p) * p
        for m in range(m0, B, p):
            sieve[m - A] = False
    limit = int(ceil(sqrt(B)))
    for p in range(A, limit + 1):
        if sieve[p - A]:
            for m in range(p * 2, B, p):
                sieve[m - A] = False
    return prime_up_to_A + [A + c for (c, isprime) in enumerate(sieve) if isprime]
This problem is known as the "segmented sieve of Eratosthenes." Google gives several useful references.
You already have the primes from 2 to end, so you just need to filter the list that is returned.
One way is to run the sieve code with end = top and modify the last line to give you only numbers bigger than bottom. If the range is small compared with its magnitude (i.e. top - bottom is small compared with bottom), then you are better off with a different algorithm: start from bottom and iterate over the odd numbers, checking whether each is prime. For that you need an isprime(n) function which just checks whether n is divisible by any number from 2 up to sqrt(n):

def isprime(n):
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
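Putting the first suggestion into code, here is a minimal sketch of the filtering idea (it reuses the sieve(end) function from the question; primes_in_range is my own name for it):

def primes_in_range(bottom, top):
    # Sieve everything up to top, then keep only the primes >= bottom.
    # Fine when top is modest; for a narrow range far from zero, prefer
    # the segmented sieve or the isprime() loop described above.
    return [p for p in sieve(top) if p >= bottom]

print(primes_in_range(50, 100))  # [53, 59, 61, 67, 71, 73, 79, 83, 89, 97]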