I'm trying to find the number of palindromes in a certain range using the Python code below:
def test(n, m):
    return len([i for i in range(n, m + 1) if str(i) == str(i)[::-1]])
Can anyone suggest ways to simplify this code and reduce its time complexity, as well as point out any conditions my function may not have handled?
So here's an idea to build off of: there are O(10^n) numbers with up to n digits. Forgetting the lower bound for now, checking each of them in turn will therefore take at least that long.
However, every palindrome is determined by its first half, so there can only be O(10^(n/2)) palindromes with up to n digits. This is a much smaller number. Consider searching that way instead.
For a number of the form abcd, there are two palindromes based off of it: abcddcba and abcdcba. You can therefore find all palindromes up to length 8 by starting from all numbers up to length 4 and generating their palindromes.
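A sketch of that generate-from-the-half idea (my own code and names, assuming 1 <= n <= m; it would need an extra check if 0 should count as a palindrome):

def count_palindromes(n, m):
    """Count palindromes in [n, m] by generating them from half-length prefixes."""
    count = 0
    prefix = 1
    while True:
        s = str(prefix)
        odd = int(s + s[:-1][::-1])   # "123" -> "12321"
        even = int(s + s[::-1])       # "123" -> "123321"
        if odd > m:                   # odd < even, so nothing larger can fit
            break
        if n <= odd <= m:
            count += 1
        if n <= even <= m:
            count += 1
        prefix += 1
    return count

For example, count_palindromes(1, 100) returns 18, the same as test(1, 100), but only 10 prefixes are ever examined.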
You can eliminate the for loop entirely and use a closed-form count instead. The code below runs in O(log10 n) time; it counts the numbers whose first and last digits are equal:
def getFirstDigit(x):
    while x >= 10:
        x //= 10
    return x

def getCountWithSameStartAndEndFrom1(x):
    if x < 10:
        return x
    tens = x // 10
    res = tens + 9
    firstDigit = getFirstDigit(x)
    lastDigit = x % 10
    if lastDigit < firstDigit:
        res = res - 1
    return res

def getCountWithSameStartAndEnd(start, end):
    return (getCountWithSameStartAndEndFrom1(end) -
            getCountWithSameStartAndEndFrom1(start - 1))
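For example, calling the functions above on the range [1, 100] (the matches are 1 through 9 plus 11, 22, ..., 99):

print(getCountWithSameStartAndEnd(1, 100))  # 18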
I made a program to find primes below a given number.
number = int(input("Enter number: "))
prime_numbers = [2]  # First prime is needed.
for number_to_be_checked in range(3, number + 1):
    square_root = number_to_be_checked ** 0.5
    for checker in prime_numbers:  # 'checker' will take every prime number below
        # 'number_to_be_checked', because we are adding all the prime numbers
        # to the 'prime_numbers' list.
        if checker > square_root:
            prime_numbers.append(number_to_be_checked)
            break
        elif number_to_be_checked % checker == 0:
            break
print(prime_numbers)
This program checks every number below the number given as input. But primes greater than 3 are only of the form 6k ± 1. Therefore, instead of checking all the numbers, I defined a generator that generates all the numbers of the form 6k ± 1 below the given number. (I also added 3 to the prime_numbers list when initializing it, since 2 and 3 cannot be of the form 6k ± 1.)
def potential_primes(number: int) -> int:
    """Generate the numbers that could potentially be prime."""
    # Prime numbers are always of the form 6k ± 1.
    number_for_function = number // 6
    for k in range(1, number_for_function + 1):
        yield 6*k - 1
        yield 6*k + 1
Obviously, the program should have been much faster because I am checking comparatively far fewer numbers. But, counterintuitively, the program is slower than before. What could be the reason behind this?
In every six numbers, three are even and one is a multiple of 3. The other two are 6-coprime, so are potentially prime:
6k+0  even, multiple of 3
6k+1  6-coprime
6k+2  even
6k+3  multiple of 3
6k+4  even
6k+5  6-coprime
For the three evens your primality check uses only one division (by 2), and for the fourth number (the odd multiple of 3), two divisions. In all, five divisions that you seek to avoid.
But each call to a generator has its cost too. If you just replace the call to range with a call to your generator and leave the other code as is, you are not realizing the full savings potential.
Why? Because while you now indeed test only 1/3 of the numbers, you still test each of them for divisibility by 2 and 3, needlessly. And apparently the cost of using the generator is too high to be repaid by that alone.
The point of this technique, known as wheel factorization, is to not test the 6-coprime numbers (in this case) against the primes which, by construction, are already known not to be their divisors.
Thus, you should start your divisibility-testing list with e.g. prime_numbers = [5, 7] and use that in your testing loop, rather than the full list of primes starting with 2 and 3, which you do not need.
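A rough sketch of what that can look like (my own code and names, not the asker's; it generates the 6k ± 1 candidates inline rather than reusing the generator above):

from itertools import chain, count

def primes_up_to(number):
    """Wheel of 6: output starts with 2 and 3, but the trial-division list
    starts at [5, 7], since 6k +/- 1 candidates are never divisible by 2 or 3."""
    if number < 2:
        return []
    result = [p for p in (2, 3, 5, 7) if p <= number]
    wheel_primes = [5, 7]                      # divisors we actually test with
    candidates = chain.from_iterable((6 * k - 1, 6 * k + 1) for k in count(2))
    for candidate in candidates:               # 11, 13, 17, 19, 23, 25, ...
        if candidate > number:
            break
        limit = candidate ** 0.5
        for checker in wheel_primes:
            if checker > limit:
                wheel_primes.append(candidate)
                result.append(candidate)
                break
            if candidate % checker == 0:
                break
    return result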
Using a nested for loop together with the square-root bound is heavy on computation; look instead at a prime sieve (the Sieve of Eratosthenes), which is much faster but does take some memory.
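For reference, a minimal sieve sketch (my own illustration):

def sieve_primes(n):
    """All primes <= n via the Sieve of Eratosthenes: O(n log log n) time, O(n) memory."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]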
One way to use the 6n±1 idea is to alternate step sizes in the main loop by stepping 2 then 4. My Python is not good, so this is pseudocode:
function listPrimes(n)
    // Deal with low numbers.
    if (n < 2) return []
    if (n = 2) return [2]
    if (n = 3) return [2, 3]

    // Main loop
    primeList ← [2, 3]
    limit ← 1 + sqrt(n)   // Calculate square root once.
    index ← 5             // We have checked 2 and 3 already.
    step ← 2              // Starting step value: 5 + 2 = 7.
    while (index <= limit) {
        if (isPrime(index)) {
            primeList.add(index)
        }
        index ← index + step
        step ← 6 - step   // Alternate steps of 2 and 4.
    }
    return primeList
end function
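One possible Python reading of that pseudocode (my translation; it assumes the intent is to list every prime up to n, so the candidates run up to n itself, and that isPrime means trial division by the primes found so far):

from math import isqrt

def list_primes(n):
    if n < 2:
        return []
    if n < 5:
        return [2] if n < 3 else [2, 3]
    primes = [2, 3]
    candidate, step = 5, 2          # steps alternate 2, 4: 5, 7, 11, 13, 17, 19, ...
    while candidate <= n:
        limit = isqrt(candidate)    # trial-divide only up to sqrt(candidate)
        for p in primes:
            if p > limit:
                primes.append(candidate)
                break
            if candidate % p == 0:
                break
        candidate += step
        step = 6 - step             # 2 -> 4 -> 2 -> ...
    return primes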
I have a question about how to improve my simple Python program so that it does not exceed the time limit. My code should run in less than 2 seconds, but it takes much longer, and I would be glad for any advice. The program receives an integer n from the user and then processes n lines of tasks. If the input line is "Add", I have to add the given number and keep the numbers arranged from smallest to largest. If the input line is "Ask", I have to return the number at the asked (1-based) index among the added numbers.
This is an example of the inputs and outputs. I guess the code works well for other examples, but the only problem is time...
n = int(input())

def arrange(x):
    for j in range(len(x)):
        for i in range(len(x) - 1):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]

tasks = []
for i in range(n):
    tasks.append(list(input().split()))

ref = []
for i in range(n):
    if tasks[i][0] == 'Add':
        ref.append(int(tasks[i][1]))
        arrange(ref)
    elif tasks[i][0] == 'Ask':
        print(ref[int(tasks[i][1]) - 1])
For the given example, I get a "Time Limit Exceeded" Error.
First off: reimplementing list.sort will always be slower than just using it directly. If nothing else, getting rid of the arrange function and replacing the call to it with ref.sort() would improve performance (especially because Python's sort is roughly O(n) when the input is already largely sorted, so you would be reducing the work from the O(n**2) of your bubble-sorting arrange to roughly O(n), not just the O(n log n) of an optimized general-purpose sort).
If that's not enough, note that list.sort is still theoretically O(n log n); if the list is getting large enough, that may cost more than it should. If so, take a look at the bisect module, to let you do the insertions with O(log n) lookup time (plus O(n) insertion time, but with very low constant factors) which might improve performance further.
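A minimal sketch of the bisect approach (reusing the tasks list from the question):

import bisect

ref = []
for action, value in tasks:
    if action == 'Add':
        bisect.insort(ref, int(value))  # O(log n) search plus an O(n) shift; no full sort
    elif action == 'Ask':
        print(ref[int(value) - 1])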
Alternatively, if Ask operations are going to be infrequent, you might not sort at all when Adding, and only sort on demand when an Ask occurs (possibly using a flag to indicate whether the list is already sorted, so you don't call sort unnecessarily). That could make a meaningful difference, especially if the inputs typically don't interleave Adds and Asks.
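A sketch of that sort-on-demand idea (with a hypothetical is_sorted flag):

ref = []
is_sorted = True
for action, value in tasks:
    if action == 'Add':
        ref.append(int(value))
        is_sorted = False  # defer sorting until an Ask actually needs the order
    elif action == 'Ask':
        if not is_sorted:
            ref.sort()
            is_sorted = True
        print(ref[int(value) - 1])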
Lastly, in the realm of micro-optimizations, you're needlessly spending time on list copying and indexing you don't need, so stop doing it:
tasks = []
for i in range(n):
    tasks.append(input().split())  # Removed list() call; str.split already returns a list

ref = []
for action, value in tasks:  # Don't iterate by index; iterate the list directly and
                             # unpack to useful names; it's meaningfully faster
    if action == 'Add':
        ref.append(int(value))
        ref.sort()
    elif action == 'Ask':
        print(ref[int(value) - 1])
For me it runs in less than 0.005 seconds. Are you sure that you are measuring the right thing and not counting the time spent typing the input, for example?
python3 timer.py
Input:
7
Add 10
Add 2
Ask 1
Ask 2
Add 5
Ask 2
Ask 3
Output:
2
10
5
10
Elapsed time: 0.0033 seconds
My code:
import time

n = int(input('Input:\n'))

def arrange(x):
    for j in range(len(x)):
        for i in range(len(x) - 1):
            if x[i] > x[i + 1]:
                x[i], x[i + 1] = x[i + 1], x[i]

tasks = []
for i in range(n):
    tasks.append(list(input().split()))

tic = time.perf_counter()
ref = []
print('Output:')
for i in range(n):
    if tasks[i][0] == 'Add':
        ref.append(int(tasks[i][1]))
        arrange(ref)
    elif tasks[i][0] == 'Ask':
        print(ref[int(tasks[i][1]) - 1])
toc = time.perf_counter()
print(f"Elapsed time: {toc - tic:0.4f} seconds")
Hello, I have been working on https://leetcode.com/problems/2-keys-keyboard/ and came upon this dynamic programming question.
You start with one 'A' on a blank page, and given a number n, you should end up with n 'A's on the page. The catch is that you are allowed only two operations: copy (and you can only copy the total number of A's currently on the page) and paste. Find the minimum number of operations needed to get n 'A's on the page.
I solved this problem but then found a better solution in the discussion section of LeetCode, and I can't figure out its time complexity.
def minSteps(self, n):
    factors = 0
    i = 2
    while i <= n:
        while n % i == 0:
            factors += i
            n //= i  # integer division keeps n an int
        i += 1
    return factors
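For example, n = 12 = 2 · 2 · 3 gives 2 + 2 + 3 = 7 operations: copy, paste (2 A's), copy, paste (4 A's), copy, paste, paste (12 A's). Assuming the method sits in LeetCode's usual Solution class:

print(Solution().minSteps(12))  # expected 7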
The way this works is that i is never going to be bigger than the biggest prime factor p of n, so the outer loop is O(p), and the inner while loop is basically O(log n) since we divide n by i at each iteration.
But the way I look at it, we are doing O(log n) divisions in total for the inner loop while the outer loop is O(p), so using aggregate analysis this function is basically O(max(p, log n)). Is this correct?
Any help is welcome.
Your reasoning is correct: O(max(p, log n)) gives the time complexity, assuming that arithmetic operations take constant time. This assumption does not hold for arbitrarily large n that would not fit in the machine's fixed-size number storage, where you would need big-integer operations with non-constant time complexity. But I will ignore that.
It is still odd to express the complexity in terms of p when that is not the input (but derived from it). Your input is only n, so it makes sense to express the complexity in terms of n alone.
Worst Case
Clearly, when n is prime, the algorithm is O(n) -- the inner loop body runs only once, when i finally reaches n.
For a prime n, the algorithm will take more time than for n + 1, as even the smallest factor of n + 1 (i.e. 2) will halve the number of iterations of the outer loop, and yet only add one block of constant work in the inner loop.
So O(n) is the worst case.
Average Case
For the average case, we note that the division of n happens exactly as many times as n has prime factors (counting duplicates). For example, for n = 12 we have 3 divisions, as 12 = 2·2·3.
The average number of prime factors of n (taken over all n up to some bound) approaches log log n + B, where B is some constant. So we could say the average total cost of the inner loop is O(log log n).
We need to add to that the cost of the outer loop, which corresponds to the average greatest prime factor. This average approaches C·n/log n for some constant C, and so we have:
O(n/log n + log log n)
Now n/log n is the dominant term here, so this simplifies to:
O(n/log n)
I have this Python program which computes the "Square Free Numbers" of a given number. I'm facing a problem with its time complexity: I'm getting a "Time Limit Exceeded" error in an online compiler.
number = int(input())
factors = []
perfectSquares = []
count = 0
total_len = 0

# Find all the factors of the given number
for i in range(1, number):
    if number % i == 0:
        factors.append(i)

# Find total number of factors
total_len = len(factors)

for items in factors:
    for i in range(1, total_len):
        # Eliminate perfect square numbers
        if items == i * i:
            if items == 1:
                factors.remove(items)
                count += 1
            else:
                perfectSquares.append(items)
                factors.remove(items)
                count += 1

# Eliminate factors that are divisible by the perfect squares
for i in factors:
    for j in perfectSquares:
        if i % j == 0:
            count += 1

# Print total square-free numbers
total_len -= count
print(total_len)
How can I reduce the time complexity of this program? That is, how can I reduce the for loops so that the program runs with a lower time complexity?
Algorithmic techniques for reducing the time complexity (TC) of Python code.
To reduce the time complexity of a piece of code, it is usually necessary to reduce the use of loops whenever and wherever possible.
I'll divide your code's logic into 5 sections and suggest an optimization for each of them.
Section 1 - Declaration of Variables and taking input
number = int(input())
factors = []
perfectSquares = []
count = 0
total_len = 0
You can easily omit the perfectSquares, count and total_len variables, as they aren't needed (as explained further below). This reduces both the time and the space your code uses.
Also, you can use fast I/O to speed up input and output.
This is done by using sys.stdin.readline and sys.stdout.write.
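A minimal sketch of that fast I/O (reading one integer and writing one line):

from sys import stdin, stdout

number = int(stdin.readline())    # faster than input() for large inputs
stdout.write(str(number) + "\n")  # faster than print() when writing many lines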
Section 2 - Finding All factors
for i in range(1, number):
    if number % i == 0:
        factors.append(i)
Here, you can use a list comprehension to create the factors list, since a list comprehension is faster than an explicit loop.
Also, you can iterate only up to the square root of the number instead of looping up to the number itself, reducing the work from O(n) to O(√n).
The code section above then boils down to the following.
After applying the first change:
factors = [i for i in range(1, number) if number % i == 0]
After applying the second change - use itertools.chain.from_iterable to produce two values per iteration of the comprehension:
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5)+1)
    if number % i == 0
))
Section 3 - Eliminating Perfect Squares
# Find total number of factors
total_len = len(factors)

for items in factors:
    for i in range(1, total_len):
        # Eliminate perfect square numbers
        if items == i * i:
            if items == 1:
                factors.remove(items)
                count += 1
            else:
                perfectSquares.append(items)
                factors.remove(items)
                count += 1
Actually, you can completely omit this part and just add an additional condition to Section 2, namely type(i**0.5) != int, to eliminate the numbers whose square roots are integers, which are the perfect squares themselves.
Implement it as follows:
factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5)+1)
    if number % i == 0 and type(i**0.5) != int
))
Section 4 - I think this section isn't needed, because square-free numbers don't have such a restriction.
Section 5 - Finalizing and printing the count
There's absolutely no need for a counter; you can just take the length of the factors list and use it as the count.
OPTIMISED CODES
Way 1 - Little Faster
number = int(input())

# Find factors of the given number
factors = []
for i in range(2, int(number**0.5)+1):
    if number % i == 0 and type(i**0.5) != int:
        factors.extend([i, int(number/i)])

print([1] + factors)
Way 2 - Optimal Programming - Very Fast
from itertools import chain
from sys import stdin, stdout

number = int(stdin.readline())

factors = list(chain.from_iterable(
    (i, int(number/i)) for i in range(2, int(number**0.5)+1)
    if number % i == 0 and type(i**0.5) != int
))

stdout.write(', '.join(map(str, [1] + factors)))
First of all, you only need to check for i in range(1, number//2 + 1), since nothing greater than number/2 (other than number itself) can be a factor.
Second, you can compute the perfect squares that could divide a factor in sublinear time:

import math

squares = []
for i in range(2, math.floor(math.sqrt(number / 2)) + 1):  # start at 2; every number is divisible by 1
    squares.append(i ** 2)
Third, you can search for factors and when you find one, check that it is not divisible by a square, and only then add it to the list of factors.
This approach will save you all the time of your for items in factors nested loop block, as well as the next block. I'm not sure if it will definitely be faster, but it is less wasteful.
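A sketch of that combined approach (my own code and function name; it keeps the factors of number that no perfect square greater than 1 divides):

import math

def square_free_factors(number):
    limit = math.isqrt(number)
    squares = [i * i for i in range(2, limit + 1)]        # 4, 9, 16, ...
    result = []
    for i in range(1, number // 2 + 1):
        if number % i == 0 and all(i % s for s in squares):
            result.append(i)
    if all(number % s for s in squares):                  # number itself is a factor too
        result.append(number)
    return result

For example, square_free_factors(12) returns [1, 2, 3, 6].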
I used the code provided in the answer above, but it didn't give me the correct answer. The code below actually computes the list of square-free factors of a number.
import math

number = int(input())

factors = [
    i for i in range(2, int(number/2)+1)
    if number % i == 0 and int(int(math.sqrt(i))**2) != i
]

print([1] + factors)
So I am working on a problem which needs me to get the factors of a certain number. As always, I am using the modulo operator % to see whether a number divides evenly, i.e. the remainder is zero. But whenever I try to do this I keep getting a ZeroDivisionError. I tried changing the loop so that Python does not start counting from zero but from one, for potenial in range(number + 1):, but this does not seem to work. Below is the rest of my code; any help will be appreciated.
def Factors(number):
    factors = []
    for potenial in range(number + 1):
        if number % potenial == 0:
            factors.append(potenial)
    return factors
In your for loop you are iterating from 0 (range() uses 0 as the starting value if only one argument is given) up to number. You get a ZeroDivisionError because you try to compute number % 0 at the very start of the loop: to calculate the modulo, Python divides number by 0, which raises the error. Here is the corrected code (with fixed indentation):
def get_factors(number):
    factors = []
    for potential in range(1, number + 1):
        if number % potential == 0:
            factors.append(potential)
    return factors
However, there are better ways of calculating factors. For example, you can iterate only up to sqrt(n), where n is the number, and then derive the "factor pairs": e.g. if 3 is a factor of 15, then 15 / 3 = 5 is also a factor of 15.
I encourage you to try to implement a more efficient algorithm.
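If you get stuck, a minimal sketch of that square-root idea could look like this (my illustration, with a hypothetical function name):

import math

def get_factors_fast(number):
    factors = set()
    for candidate in range(1, math.isqrt(number) + 1):
        if number % candidate == 0:
            factors.add(candidate)            # e.g. 3 for number = 15
            factors.add(number // candidate)  # ... and its pair 15 // 3 == 5
    return sorted(factors)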
Stylistic note: According to PEP 8, function names should be lowercase with words separated by underscores. Uppercase names generally indicate class definitions.