I have this Python program which computes the square-free numbers of a given number. I'm facing a problem with its time complexity: I'm getting a "Time Limit Exceeded" error on an online judge.
number = int(input())
factors = []
perfectSquares = []
count = 0
total_len = 0
# Find all the factors of the given number
for i in range(1, number):
    if number % i == 0:
        factors.append(i)
# Find total number of factors
total_len = len(factors)
for items in factors:
    for i in range(1, total_len):
        # Eliminate perfect square numbers
        if items == i * i:
            if items == 1:
                factors.remove(items)
                count += 1
            else:
                perfectSquares.append(items)
                factors.remove(items)
                count += 1
# Eliminate factors that are divisible by the perfect squares
for i in factors:
    for j in perfectSquares:
        if i % j == 0:
            count += 1
# Print Total Square Free numbers
total_len -= count
print(total_len)
How can I reduce the time complexity of this program? In other words, how can I cut down the loops so that it runs within the time limit?
Algorithmic Techniques for Reducing the Time Complexity (TC) of Python Code
To reduce the time complexity of a piece of code, it is essential to reduce the amount of looping wherever possible.
I'll divide your code's logic into 5 sections and suggest an optimization for each of them.
Section 1 - Declaration of Variables and taking input
number = int(input())
factors = []
perfectSquares = []
count = 0
total_len = 0
You can omit the declarations of perfectSquares, count and total_len entirely, as they aren't needed (explained further below); this saves a little time and memory.
Also, you can use fast I/O to speed up reading input and writing output.
This is done by using sys.stdin.readline and sys.stdout.write instead of input and print.
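For example, a minimal fast-I/O sketch (illustrative only, not part of the original code):
from sys import stdin, stdout

number = int(stdin.readline())      # reads one whole line, e.g. "12\n"
stdout.write(str(number) + "\n")    # write() takes a string and adds no newline itself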
Section 2 - Finding All factors
for i in range(1, number):
    if number % i == 0:
        factors.append(i)
Here you can use a list comprehension to build the factor list, since list comprehensions are generally faster than equivalent explicit loops.
Also, you only need to iterate up to the square root of the number instead of up to the number itself, which cuts this step from O(n) down to O(√n).
The code section above then reduces to the following.
After applying the first idea:
factors = [i for i in range(1, number) if number % i == 0]
After applying the second idea, using chain.from_iterable to store both members of each factor pair per iteration:
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5) + 1)
    if number % i == 0
))
Section 3 - Eliminating Perfect Squares
# Find total number of factors
total_len = len(factors)
for items in factors:
    for i in range(1, total_len):
        # Eliminate perfect square numbers
        if items == i * i:
            if items == 1:
                factors.remove(items)
                count += 1
            else:
                perfectSquares.append(items)
                factors.remove(items)
                count += 1
Actually, you can omit this part completely and just add an extra condition to Section 2, namely type(i**0.5) != int, to eliminate the numbers whose square roots are integers, i.e. the perfect squares themselves.
Implement it as follows:
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5) + 1)
    if number % i == 0 and type(i**0.5) != int
))
Section 4 - I think this section isn't needed, because square-free numbers don't have such a restriction.
Section 5 - Finalizing and printing the count
There's no need for a counter at all; you can just compute the length of the factors list and use that as the count.
OPTIMISED CODES
Way 1 - Little Faster
number = int(input())
# Find Factors of the given number
factors = []
for i in range(2, int(number**0.5) + 1):
    if number % i == 0 and type(i**0.5) != int:
        factors.extend([i, int(number/i)])
print([1] + factors)
Way 2 - Optimal Programming - Very Fast
from itertools import chain
from sys import stdin, stdout
number = int(stdin.readline())
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5) + 1)
    if number % i == 0 and type(i**0.5) != int
))
stdout.write(', '.join(map(str, [1] + factors)))
First of all, you only need to check i in range(1, number//2 + 1), since nothing greater than number/2 (other than number itself) can be a factor.
Second, you can compute the perfect squares that could divide a factor in sublinear time:
import math

squares = []
for i in range(1, math.floor(math.sqrt(number / 2))):
    squares.append(i**2)
Third, you can search for factors and when you find one, check that it is not divisible by a square, and only then add it to the list of factors.
This approach will save you all the time of your for items in factors nested loop block, as well as the next block. I'm not sure if it will definitely be faster, but it is less wasteful.
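A sketch of that combined approach (the names are mine, not from the post above; note that 1 must be left out of the squares list, since every number is divisible by 1):
import math

number = int(input())
# perfect squares greater than 1 that could divide a factor
squares = [i * i for i in range(2, math.isqrt(number // 2) + 1)]

square_free_factors = []
for i in range(1, number // 2 + 1):
    if number % i == 0 and all(i % s != 0 for s in squares):
        square_free_factors.append(i)   # a factor with no square divisor > 1
print(len(square_free_factors))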
I used the code provided in the answer above, but it didn't give me the correct answer. The code below computes the factors of a number, excluding those that are perfect squares.
import math

number = int(input())
factors = [
    i for i in range(2, int(number/2) + 1)
    if number % i == 0 and int(math.sqrt(i))**2 != i
]
print([1] + factors)
I'm trying to find the number of palindromes in a certain range using the Python code below:
def test(n, m):
    return len([i for i in range(n, m + 1) if str(i) == str(i)[::-1]])
Can anyone suggest other ways to simplify this code and reduce its time complexity, as well as point out any conditions that my function may not have handled?
So here's an idea to build off of: for an upper bound with n digits, there are O(10^n) numbers below it. For now, forget the lower bound. Checking each of them in turn will therefore take at least that long.
However, every palindrome is determined by its first half, so there can only be on the order of 10^(n/2) palindromes of length n. This is a much smaller number. Consider searching that way instead.
So for a number of the form abcd, there are two palindromes based off of it: abcddcba and abcdcba. You can therefore find all palindromes up to length 8 by instead starting from all numbers up to length 4 and generating their palindromes.
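A sketch of that idea (the function name is mine), generating each candidate palindrome from its first half and counting those that fall inside [n, m]:
def count_palindromes(n, m):
    # Assumes 1 <= n <= m. Every palindrome is determined by its first half,
    # so only roughly sqrt(m) halves need to be generated.
    count = 0
    half_len = (len(str(m)) + 1) // 2
    for half in range(1, 10 ** half_len):
        s = str(half)
        # even length: 123 -> 123321, odd length: 123 -> 12321
        for pal in (s + s[::-1], s + s[-2::-1]):
            if n <= int(pal) <= m:
                count += 1
    return count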
You can eliminate the for loop over the whole range entirely. The code below runs in O(log10 n) time; as the function names indicate, it counts the numbers whose first and last digits are equal.
def getFirstDigit(x):
    while x >= 10:
        x //= 10
    return x

def getCountWithSameStartAndEndFrom1(x):
    if x < 10:
        return x
    tens = x // 10
    res = tens + 9
    firstDigit = getFirstDigit(x)
    lastDigit = x % 10
    if lastDigit < firstDigit:
        res = res - 1
    return res

def getCountWithSameStartAndEnd(start, end):
    return (getCountWithSameStartAndEndFrom1(end) -
            getCountWithSameStartAndEndFrom1(start - 1))
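For example (a usage sketch of mine, not from the original answer), counting the values from 1 to 100 whose first and last digits match:
print(getCountWithSameStartAndEnd(1, 100))   # prints 18: 1..9 and 11, 22, ..., 99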
I have a question about how to improve my simple Python program so that it does not exceed the time limit. My code should run in less than 2 seconds, but it takes much longer, and I would be glad to get any advice. The code receives an integer n from the user; then, for each of the next n lines, it has to perform a task. If the input is "Add", I have to add the given number and keep the numbers arranged from smallest to largest. If the input is "Ask", I have to print the number at the asked (1-based) position among the added numbers.
An example of the inputs and outputs is reproduced in one of the answers below. I believe the code works correctly for such examples; the only problem is the time.
n = int(input())

def arrange(x):
    for j in range(len(x)):
        for i in range(len(x) - 1):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]

tasks = []
for i in range(n):
    tasks.append(list(input().split()))

ref = []
for i in range(n):
    if tasks[i][0] == 'Add':
        ref.append(int(tasks[i][1]))
        arrange(ref)
    elif tasks[i][0] == 'Ask':
        print(ref[int(tasks[i][1]) - 1])
For the given example, I get a "Time Limit Exceeded" Error.
First off: reimplementing list.sort will always be slower than just using it directly. If nothing else, getting rid of the arrange function and replacing the call to it with ref.sort() would improve performance (especially because Python's sorting algorithm is roughly O(n) when the input is already largely sorted, so you'd be reducing the work from the O(n**2) of your bubble-sorting arrange to roughly O(n), not just the O(n log n) of an optimized general-purpose sort).
If that's not enough, note that list.sort is still theoretically O(n log n); if the list is getting large enough, that may cost more than it should. If so, take a look at the bisect module, to let you do the insertions with O(log n) lookup time (plus O(n) insertion time, but with very low constant factors) which might improve performance further.
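For illustration, a minimal sketch (the names are mine) of keeping the list sorted with bisect.insort:
from bisect import insort

ref = []
for value in (10, 2, 5):
    insort(ref, value)   # binary search for the position, then an O(n) shift to insert
print(ref)               # [2, 5, 10]
print(ref[2 - 1])        # an "Ask 2" query becomes a plain index lookup -> 5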
Alternatively, if Ask operations are going to be infrequent, you might not sort at all when Adding, and only sort on demand when an Ask occurs (possibly using a flag to indicate whether the list is already sorted so you don't call sort unnecessarily), as sketched below. That could make a meaningful difference, especially if the inputs typically don't interleave Adds and Asks.
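A sketch of that lazy-sorting variant, assuming the tasks list from the question (the is_sorted flag is my own name):
ref = []
is_sorted = True
for action, value in tasks:
    if action == 'Add':
        ref.append(int(value))
        is_sorted = False              # defer sorting until it is actually needed
    elif action == 'Ask':
        if not is_sorted:
            ref.sort()
            is_sorted = True
        print(ref[int(value) - 1])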
Lastly, in the realm of micro-optimizations, you're wasting time on list copying and indexing you don't need, so stop doing it:
tasks = []
for i in range(n):
    tasks.append(input().split())  # Removed list() call; str.split already returns a list

ref = []
for action, value in tasks:  # Don't iterate by index; iterate the raw list and unpack to
                             # useful names; it's meaningfully faster
    if action == 'Add':
        ref.append(int(value))
        ref.sort()
    elif action == 'Ask':
        print(ref[int(value) - 1])
For me it runs in less than 0.005 seconds. Are you sure that you are measuring the right thing and not also counting the time it takes to type in the input, for example?
python3 timer.py
Input:
7
Add 10
Add 2
Ask 1
Ask 2
Add 5
Ask 2
Ask 3
Output:
2
10
5
10
Elapsed time: 0.0033 seconds
My code:
import time

n = int(input('Input:\n'))

def arrange(x):
    for j in range(len(x)):
        for i in range(len(x) - 1):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]

tasks = []
for i in range(n):
    tasks.append(list(input().split()))

tic = time.perf_counter()
ref = []
print('Output:')
for i in range(n):
    if tasks[i][0] == 'Add':
        ref.append(int(tasks[i][1]))
        arrange(ref)
    elif tasks[i][0] == 'Ask':
        print(ref[int(tasks[i][1]) - 1])
toc = time.perf_counter()
print(f"Elapsed time: {toc - tic:0.4f} seconds")
I want to minimize the runtime of this code. It goes through an array of numbers and keeps the maximum of the current max_product and each new pairwise product.
def max_pairwise_product(numbers):
    n = len(numbers)
    max_product = 0
    for i in range(n):
        for j in range(i + 1, n):
            max_product = max(max_product, numbers[i] * numbers[j])
    return max_product

if __name__ == '__main__':
    input_n = int(input())
    input_numbers = [int(x) for x in input().split()]
    print(max_pairwise_product(input_numbers))
Your code is trying to find the maximum product of any two elements at different positions of a numeric array. You are currently doing that by calculating every product, which takes about n²/2 multiplications and comparisons, while what you actually need to do is much less:
We know from basic math that the two largest numbers in the array give the largest product (at least when the values are non-negative). So all you need to do is:
Find the two largest integers in the array
multiply them.
You could do so by sorting the original array, or by just skimming through the array once to find the two largest elements (which is a bit trickier than it sounds, because those two elements could have the same value but may not be the same element); a sketch of the single-pass version follows.
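A sketch of that single-pass version (assuming at least two non-negative values, as in this problem):
def max_pairwise_product(numbers):
    # Track the two largest values seen so far; equal values are handled because the
    # previous largest is demoted to second place rather than discarded.
    largest = second = 0
    for x in numbers:
        if x > largest:
            largest, second = x, largest
        elif x > second:
            second = x
    return largest * second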
As a side note: In the future, please format your posts so that a reader may actually understand what your code does without going through hoops.
Sorting the numbers and multiplying the last two elements would give better time complexity than O(n^2).
Sort - O(n log n)
Multiplication - O(1)
def max_pairwise_product(numbers):
    n = len(numbers)
    max_product = 0
    numbers.sort()
    if numbers[n - 1] > 0 and numbers[n - 2] > 0:
        max_product = numbers[n - 1] * numbers[n - 2]
    return max_product

if __name__ == '__main__':
    input_n = int(input())
    input_numbers = [int(x) for x in input().split()]
    print(max_pairwise_product(input_numbers))
I have written some code that doesn't seem to be very efficient; it only finds a few of the primes.
This is my code:
num = float(1)
a = 1
while num > 0:
    # Create a variable to hold the factors and add 1 and itself (all numbers have these factors)
    factors = [1, num]
    # For each possible factor
    for i in range(2, int(num/4) + 3):
        # Check that it is a factor and that the factor and its corresponding factor are not already in the list
        if float(num) % i == 0 and i not in factors and float(num/i) not in factors:
            # Add i and its corresponding factor to the list
            factors.append(i)
            factors.append(float(num/i))
    num = float(num)
    number = num
    # Takes an integer, returns true or false
    number = float(number)
    # Check if the only factors are 1 and itself and it is greater than 1
    if len(factors) == 2 and number > 1:
        num2 = 2**num - 1
        factors2 = [1, num]
        for i in range(2, int(num2/4) + 3):
            # Check that it is a factor and that the factor and its corresponding factor are not already in the list
            if float(num2) % i == 0 and i not in factors2 and float(num2/i) not in factors2:
                # Add i and its corresponding factor to the list
                factors2.append(i)
                factors2.append(float(num2/i))
        if len(factors2) == 2 and num2 > 1:
            print(num2)
            a = a + 1
    num = num + 2
How can I make my code more efficient so that it can find Mersenne primes more quickly? I would like to use the program to search for possible new perfect numbers.
All the solutions shown so far use bad algorithms, missing the point of Mersenne primes completely. The advantage of Mersenne primes is that, unlike other odd numbers, we can test their primality much more efficiently than by brute force. We only need to check the exponent for primeness and use a Lucas-Lehmer primality test to do the rest:
def lucas_lehmer(p):
    s = 4
    m = 2 ** p - 1
    for _ in range(p - 2):
        s = ((s * s) - 2) % m
    return s == 0

def is_prime(number):
    """
    The efficiency of this doesn't matter much, as we're
    only using it to test the primeness of the exponents,
    not the Mersenne primes themselves.
    """
    if number % 2 == 0:
        return number == 2
    i = 3
    while i * i <= number:
        if number % i == 0:
            return False
        i += 2
    return True

print(3)  # to simplify the code, treat the first Mersenne prime as a special case

for i in range(3, 5000, 2):  # generate up to M20, found in 1961
    if is_prime(i) and lucas_lehmer(i):
        print(2 ** i - 1)
The OP's code bogs down after M7 (524287) and @FrancescoBarban's code bogs down after M8 (2147483647). The above code generates M18 in about 15 seconds! Here's the output up through M14, generated in about 1/4 of a second:
3
7
31
127
8191
131071
524287
2147483647
2305843009213693951
618970019642690137449562111
162259276829213363391578010288127
170141183460469231731687303715884105727
6864797660130609714981900799081393217269435300143305409394463459185543183397656052122559640661454554977296311391480858037121987999716643812574028291115057151
531137992816767098689588206552468627329593117727031923199444138200403559860852242739162502265229285668889329486246501015346579337652707239409519978766587351943831270835393219031728127
This program bogs down above M20, but that's because it's not a particularly efficient implementation, not because it's a bad algorithm.
import math

def is_it_prime(n):
    # n is already a factor of itself
    factors = [n]
    # look for factors
    for i in range(1, int(math.sqrt(n)) + 1):
        # if i is a factor of n, append it to the list
        if n % i == 0: factors.append(i)
        else: pass
    # if the list has more than 2 factors, n is not prime
    if len(factors) > 2: return False
    # otherwise n is prime
    else: return True

n = 1
while True:
    # a prime P is a Mersenne prime if P = 2 ^ n - 1
    test = (2 ** n) - 1
    # if test is prime, it is also a Mersenne prime
    if is_it_prime(test):
        print(test)
    else: pass
    n += 1
It will probably get stuck at 2147483647, but you know, the next Mersenne prime is 2305843009213693951... so don't worry if it takes more time than you expected ;)
If you just want to check if a number is prime, then you do not need to find all its factors. You already know 1 and num are factors. As soon as you find a third factor then the number cannot be prime. You are wasting time looking for the fourth, fifth etc. factors.
A Mersenne number is of the form 2^n - 1, and so is always odd. Hence all its factors are odd. You can halve the run-time of your loop if you only look for odd factors: start at 3 and step 2 to the next possible factor.
Factors come in pairs, one larger than the square root and one smaller. Hence you only need to look for factors up to the square root, as @Francesco's code shows. That can give you a major time saving for the larger Mersenne numbers.
Putting these two points together, your loop should be more like:
#look for factors
for i in range(3, int(math.sqrt(n)) + 1, 2):
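Putting the early exit, the odd-only step, and the square-root bound together, the test could be sketched as follows (math.isqrt is my substitution so the bound stays exact for very large Mersenne numbers; otherwise this is just the code above with an early return):
import math

def is_it_prime(n):
    # n is a Mersenne number 2**p - 1, so it is odd; check only odd candidate factors
    for i in range(3, math.isqrt(n) + 1, 2):
        if n % i == 0:
            return False   # a third factor besides 1 and n: stop immediately
    return n > 1           # 1 is not prime; otherwise no factor was found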
We're doing the classic problem of determining the number of ways that we can make change that amounts to Z given a set of coins.
For example, Amount=5 and Coins={1, 2, 3}. One way we can make 5 is {2, 3}.
The naive recursive solution appears to have factorial time complexity:
f(n) = n * f(n-1), which gives f(n) = n!
My professor argued that it actually has a time complexity of O(2^n), because for each coin we only choose to use it or not. That intuitively makes sense. However, how come my recurrence doesn't work out to O(2^n)?
EDIT:
My recurrence is as follows:
                f(5, {1, 2, 3})
               /               \
      f(4, {2, 3})       f(3, {1, 3})       .....
Notice how the branching factor decreases by 1 at every step.
Formally,
T(n) = n * T(n-1), which unrolls to T(n) = n * (n-1) * ... * 1 = n!
The recurrence doesn't work out to what you expect because it doesn't reflect the number of operations the algorithm actually performs.
If the algorithm decides for each coin whether to output it or not, then you can model its time complexity with the recurrence T(n) = 2*T(n-1) + O(1), with T(1) = O(1); the intuition is that for each coin you have two options (output the coin or not). This solves to T(n) = O(2^n).
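For instance, a sketch (the names are mine, not from the question) of that decide-per-coin recursion over distinct coins, where each coin is used at most once:
def count_subsets(coins, amount, i=0):
    # Each call either skips coins[i] or takes it once: T(n) = 2*T(n-1) + O(1) = O(2**n)
    if amount == 0:
        return 1
    if i == len(coins) or amount < 0:
        return 0
    return (count_subsets(coins, amount, i + 1) +
            count_subsets(coins, amount - coins[i], i + 1))

print(count_subsets([1, 2, 3], 5))   # 1, namely {2, 3}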
I too was trying to analyze the time complexity of the brute force, which performs a depth-first search:
def countCombinations(coins, n, amount, k=0):
    if amount == 0:
        return 1
    res = 0
    for i in range(k, n):
        if coins[i] <= amount:
            remaining_amount = amount - coins[i]  # considering this coin, try for the remaining sum
            # the next round may include this coin again
            res += countCombinations(coins, n, remaining_amount, i)
    return res
But we can see that a coin used in one round can be used again in the next round, so even just for the first coin we have up to n choices at every stage, which is like permutation with repetition: n^r arrangements of n available items over r positions.
ex: [1, 1, 1, 1]; sum = 4
This generates a recursion tree in which, along each path towards sum = 0, every stage branches into up to n subpaths, so the time complexity is O(n^sum).
Note, however, that there is another algorithm which uses the take/not-take approach, where each node of the recursion tree has at most 2 branches. Hence the time complexity of that algorithm is O(2^(n*m)).
ex: say coins = [1, 1] and sum = 2. There are 11 nodes to visit in the recursion tree, giving 6 paths (leaves), so visiting 11 nodes against an upper bound of 2^(2*2) = 2^4 = 16 possibilities is consistent, though the bound is a little loose.
def get_count(coins, n, sum):
    if n == 0:    # no coins left to try for a combination that matches the sum
        return 0
    if sum == 0:  # no more sum left to match, meaning our trial has matched completely
        return 1  # (success)
    # exclude the last coin from the sum calculation: leave it out and try the rest
    excluded = get_count(coins, n - 1, sum)
    included = 0
    if coins[n - 1] <= sum:
        # include the last coin in the sum calculation, so reduce the sum by its value;
        # n stays the same, i.e. the supply is unlimited (we can choose the same coin again and again)
        included = get_count(coins, n, sum - coins[n - 1])
    return included + excluded