Coding an interval point calculator more efficiently in Python

I've been trying to write a function that takes variables a and b, which are the start and end points of an interval, and calculates how far to go from a towards b given a fraction between 0 and 1 (that fraction is the variable x).
The code I have partially works, but it does not always behave properly with negative numbers. For example, if a = -2, b = -1 and x = 1, the output should be -1, but I get -2.
I have been solving similar problems so far using if statements, but I don't want to continue like this. Is there a more elegant solution?
def interval_point(a, b, x):
    """Given parameters a, b and x. Takes three numbers and interprets a and b
    as the start and end point of an interval, and x as a fraction
    between 0 and 1 that returns how far to go towards b, starting at a"""
    if a == b:
        value = a
    elif a < 0 and b < 0 and x == 0:
        value = a
    elif a < 0 and b < 0:
        a1 = abs(a)
        b1 = abs(b)
        value = -((a1 - b1) + ((a1 - b1) * x))
    else:
        value = (a + (b - a) * x)
    return value

I have played around with the maths somewhat and I have arrived at a much simpler way of solving the problem.
This is what the function now looks like:
def interval_point(a, b, x):
    """Given parameters a, b and x. Takes three numbers and interprets a and b
    as the start and end point of an interval, and x as a fraction
    between 0 and 1 that returns how far to go towards b, starting at a"""
    return (b - a) * x + a
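For reference, a quick check against the case that failed with the original if/else version (nothing here is assumed beyond the function above):

print(interval_point(-2, -1, 1))    # -1
print(interval_point(-2, -1, 0))    # -2
print(interval_point(-2, -1, 0.5))  # -1.5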

Related

Output showing 0 for random() function

So, I have this battle scenario here:
def band_attack():
    global new_ship
    print('Bandits attack!')
    s.sleep(1)
    c = r.random()
    if c < 0.5:
        print('Bandits missed!')
    elif 0.5 < c < 0.7:
        c = r.random()
        new_ship = new_ship - int(c)
        print('Your ship was hit for', c, 'damage!')
        print('Your ship now has', int(new_ship), 'health!')
    else:
        new_ship = new_ship - int(c)
        print('Critical strike! You were hit for', c, 'damage!')
        print('Your ship now has', int(new_ship), 'health!')
    if new_ship <= 0:
        print('You\'ve been destroyed!')
    else:
        Fight.band_fight()
Fight is the class holding all the battle functions, r is the random module, s is the time module, band_attack is a function where you attack.
I obviously want the damage to be a whole number above 0, which is why I convert the random function's output to an integer.
It should be outputting a number greater than 0, or if it is 0, should just be a miss, but I'm clearly missing something. Maybe someone else can figure out what I'm missing?
The call to random.random() will always return a floating-point number in the range [0.0, 1.0) as per the documentation.
When you convert the result to int (by calling int(c)), you are asking for the integer part of that float, which is always zero for floats in that range.
There are two ways to fix this: either multiply the result of random.random() by 10, or use random.randint(a, b), which returns a random integer N such that a <= N <= b. You will need to adjust your conditions accordingly.
You mentioned in the comments that you are worried about seeding the random number generator when using random.randint(a, b), but since the seed function affects the module's random number generator itself, all functions (randint, choice, randrange) will behave as expected.
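For illustration, a minimal sketch of the damage roll using random.randint; the damage range of 1 to 10 and the starting health are assumptions of mine, not values from the original game code:

import random

new_ship = 100  # example starting health; the real value comes from the game state
damage = random.randint(1, 10)  # hypothetical damage range, always a whole number >= 1
new_ship -= damage
print('Your ship was hit for', damage, 'damage!')
print('Your ship now has', new_ship, 'health!')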
The random() function from the random module (which I assume is what you named r) returns a float between 0 and 1. Converting such a float with int() truncates it to 0, so the damage is always zero. The best alternative would be to use either randint(x, y) (where x and y denote the range in which you want your random damage to be), or stick with random() and multiply it by the upper limit of that intended range.

Karatsuba recursive code is not working correctly

I want to implement the Karatsuba multiplication algorithm in Python, but it is not working completely.
The code does not work for values of x or y greater than 999. For inputs below 1000 the program gives the correct result, and it also gives correct results for the base cases.
# Karatsuba method of multiplication.
f = int(input())  # Inputs
e = int(input())

def prod(x, y):
    r = str(x)
    t = str(y)
    lx = len(r)  # Calculation of lengths
    ly = len(t)
    # Base case
    if lx == 1 or ly == 1:
        return x * y
    # Other case
    else:
        o = lx // 2
        p = ly // 2
        a = x // (10*o)    # Calculation of a, b, c and d.
        b = x - (a*10*o)   # The calculation is done by
        c = y // (10*p)    # calculating the length of x and y
        d = y - (c*10*p)   # and then dividing it by half.
        # Then we just remove the half of the digits of the no.
        return (10**o)*(10**p)*prod(a, c) + (10**o)*prod(a, d) + (10**p)*prod(b, c) + prod(b, d)

print(prod(f, e))
I think there are some bugs in the calculation of a, b, c and d. They should be:
a = x//(10**o)
b = x-(a*10**o)
c = y//(10**p)
d = y-(c*10**p)
You meant 10 to the power of o and p (10**o, 10**p), but wrote 10 multiplied by them (10*o, 10*p). (A corrected version is sketched at the end of this answer.)
You should train yourself to find these kinds of bugs on your own. There are multiple ways to do that:
Do the algorithm manually on paper for specific inputs, then step through your code and see if it matches
Reduce the code down to sub-portions and see if their expected value matches the produced value. In your case, check for every call of prod() what the expected output would be and what it produced, to find minimal input values that produce erroneous results.
Step through the code with the debugger. Before every line, think about what the result should be and then see if the line produces that result.
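For reference, a minimal sketch of the corrected prod() with only the exponentiation fix applied (it keeps the structure of the question's code, so it still makes four recursive calls rather than the three of true Karatsuba):

def prod(x, y):
    lx, ly = len(str(x)), len(str(y))
    if lx == 1 or ly == 1:
        return x * y
    o, p = lx // 2, ly // 2
    a, b = x // 10**o, x % 10**o  # high and low halves of x
    c, d = y // 10**p, y % 10**p  # high and low halves of y
    return (10**o)*(10**p)*prod(a, c) + (10**o)*prod(a, d) + (10**p)*prod(b, c) + prod(b, d)

print(prod(1234, 5678))  # 7006652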

make change in python (maximum recursion depth exceeded in comparison)

So I have a recursive solution to the make change problem that works sometimes. It is:
def change(n, c):
    if n == 0:
        return 1
    if n < 0:
        return 0
    if c + 1 <= 0 and n >= 1:
        return 0
    return change(n, c - 1) + change(n - coins[c - 1], c)
where coins is my array of coins, for example [1,5,10,25]; n is the amount to make change for, for example 1000; and c is the length of the coins array - 1. This solution works in some situations. But when I need it to run in under two seconds and I use:
coins: [1,5,10,25]
n: 1000
I get a:
RecursionError: maximum recursion depth exceeded in comparison
So my question is: what would be the best way to optimise this? Using some sort of flow control? I don't want to do something like:
# Set recursion limit
sys.setrecursionlimit(10000000000)
UPDATE:
I now have something like
def coinss(n, c):
    if n == 0:
        return 1
    if n < 0:
        return 0
    nCombos = 0
    for c in range(c, -1, -1):
        nCombos += coinss(n - coins[c - 1], c)
    return nCombos
but it takes forever. It would be ideal to have this run in under a second.
As suggested in the other answers, you could use DP for a more optimal solution (a minimal sketch is shown at the end of this answer).
Also, your conditional check
if (c + 1 <= 0 and n >= 1)
should be
if (c <= 1):
as n will always be >= 1, and c <= 1 will prevent any calculations if the number of coins is less than or equal to 1.
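For illustration, here is a minimal sketch of the bottom-up DP approach mentioned above; the helper name count_change is mine, and the coin set and amount are taken from the question:

coins = [1, 5, 10, 25]

def count_change(amount):
    # ways[i] = number of ways to make amount i with the coins considered so far
    ways = [0] * (amount + 1)
    ways[0] = 1
    for coin in coins:
        for i in range(coin, amount + 1):
            ways[i] += ways[i - coin]
    return ways[amount]

print(count_change(1000))  # 142511, the same count reported by the iterative answer below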
While using recursion you will always run into this. If you set the recursion limit higher, you may be able to use your algorithm on a bigger number, but you will always be limited. The recursion limit is there to keep you from getting a stack overflow.
The best way to solve for bigger change amounts would be to switch to an iterative approach. There are algorithms out there; see Wikipedia:
https://en.wikipedia.org/wiki/Change-making_problem
Note that you have a bug here:
if (c + 1 <= 0 and n >= 1):
is like
if (c <= -1 and n >= 1):
So c can be 0 and still reach the next step, where c - 1 is used as an index. That works because Python doesn't mind negative indexes (coins[-1] yields 25), but it is still wrong, so your solution sometimes counts one combination too many.
I've rewritten your algorithm with recursive and stack approaches:
Recursive (fixed, no need for c at init thanks to an internal recursive method, but still overflows the stack):
coins = [1, 5, 10, 25]

def change(n):
    def change_recurse(n, c):
        if n == 0:
            return 1
        if n < 0:
            return 0
        if c <= 0:
            return 0
        return change_recurse(n, c - 1) + change_recurse(n - coins[c - 1], c)
    return change_recurse(n, len(coins))
iterative/stack approach (not dynamic programming), doesn't recurse, just uses a "stack" to store the computations to perform:
def change2(n):
    def change_iter(stack):
        result = 0
        # continue while the stack isn't empty
        while stack:
            # process one computation
            n, c = stack.pop()
            if n == 0:
                # one solution found, increase counter
                result += 1
            if n > 0 and c > 0:
                # not found, request 2 more computations
                stack.append((n, c - 1))
                stack.append((n - coins[c - 1], c))
        return result
    return change_iter([(n, len(coins))])
Both methods return the same values for low values of n.
for i in range(1, 200):
    a, b = change(i), change2(i)
    if a != b:
        print("error", i, a, b)
The code above runs without printing any errors.
Now print(change2(1000)) takes a few seconds but prints 142511 without blowing the stack.

How to implement Frobenius pseudoprime algorithm?

Someone told me that the Frobenius pseudoprime algorithm takes three times longer to run than the Miller–Rabin primality test but has seven times the resolution. So if one were to run the former ten times and the latter thirty times, both would take the same time to run, but the former would provide about 2.3 times the analysing power. In trying to find out how to perform the test, the following paper was discovered with the algorithm at the end:
A Simple Derivation for the Frobenius Pseudoprime Test
There is an attempt at implementing the algorithm below, but the program never prints out a number. Could someone who is more familiar with the math notation or algorithm verify what is going on please?
Edit 1: The code below has corrections added, but the implementation for compute_wm_wm1 is missing. Could someone explain the recursive definition from an algorithmic standpoint? It is not "clicking" for me.
Edit 2: The erroneous code has been removed, and an implementation of the compute_wm_wm1 function has been added below. It appears to work but may require further optimization to be practical.
from random import SystemRandom
from fractions import gcd

random = SystemRandom().randrange

def find_prime_number(bits, test):
    number = random((1 << bits - 1) + 1, 1 << bits, 2)
    while True:
        for _ in range(test):
            if not frobenius_pseudoprime(number):
                break
        else:
            return number
        number += 2

def frobenius_pseudoprime(integer):
    assert integer & 1 and integer >= 3
    a, b, d = choose_ab(integer)
    w1 = (a ** 2 * extended_gcd(b, integer)[0] - 2) % integer
    m = (integer - jacobi_symbol(d, integer)) >> 1
    wm, wm1 = compute_wm_wm1(w1, m, integer)
    if w1 * wm != 2 * wm1 % integer:
        return False
    b = pow(b, (integer - 1) >> 1, integer)
    return b * wm % integer == 2

def choose_ab(integer):
    a, b = random(1, integer), random(1, integer)
    d = a ** 2 - 4 * b
    while is_square(d) or gcd(2 * d * a * b, integer) != 1:
        a, b = random(1, integer), random(1, integer)
        d = a ** 2 - 4 * b
    return a, b, d

def is_square(integer):
    if integer < 0:
        return False
    if integer < 2:
        return True
    x = integer >> 1
    seen = set([x])
    while x * x != integer:
        x = (x + integer // x) >> 1
        if x in seen:
            return False
        seen.add(x)
    return True

def extended_gcd(n, d):
    x1, x2, y1, y2 = 0, 1, 1, 0
    while d:
        n, (q, d) = d, divmod(n, d)
        x1, x2, y1, y2 = x2 - q * x1, x1, y2 - q * y1, y1
    return x2, y2

def jacobi_symbol(n, d):
    j = 1
    while n:
        while not n & 1:
            n >>= 1
            if d & 7 in {3, 5}:
                j = -j
        n, d = d, n
        if n & 3 == 3 == d & 3:
            j = -j
        n %= d
    return j if d == 1 else 0

def compute_wm_wm1(w1, m, n):
    a, b = 2, w1
    for shift in range(m.bit_length() - 1, -1, -1):
        if m >> shift & 1:
            a, b = (a * b - w1) % n, (b * b - 2) % n
        else:
            a, b = (a * a - 2) % n, (a * b - w1) % n
    return a, b

print('Probably prime:\n', find_prime_number(300, 10))
You seem to have misunderstood the algorithm completely due to not being familiar with the notation.
def frobenius_pseudoprime(integer):
    assert integer & 1 and integer >= 3
    a, b, d = choose_ab(integer)
    w1 = (a ** 2 // b - 2) % integer
That comes from the line
W_0 ≡ 2 (mod n) and W_1 ≡ a^2·b^(-1) − 2 (mod n)
But b^(-1) doesn't mean 1/b here; it means the modular inverse of b modulo n, i.e. an integer c with b·c ≡ 1 (mod n). You can most easily find such a c by continued fraction expansion of b/n or, equivalently but with slightly more computation, by the extended Euclidean algorithm. Since you're probably not familiar with continued fractions, I recommend the latter.
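As an illustration, a minimal sketch of computing the modular inverse with the extended Euclidean algorithm; the name mod_inverse is mine, and on Python 3.8+ the same value is given by pow(b, -1, n):

def mod_inverse(b, n):
    # returns c with b*c ≡ 1 (mod n), assuming gcd(b, n) == 1
    x0, x1 = 1, 0  # coefficients of b for the current pair of remainders
    r0, r1 = b, n
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        x0, x1 = x1, x0 - q * x1
    return x0 % n

print(mod_inverse(3, 7))  # 5, since 3*5 = 15 ≡ 1 (mod 7)

# so the W_1 line could read:
# w1 = (a ** 2 * mod_inverse(b, integer) - 2) % integer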
m = (integer - d // integer) // 2
comes from
n − (∆/n) = 2m
and misunderstands the Jacobi symbol as a fraction/division (admittedly, I have displayed it here even more like a fraction, but since the site doesn't support LaTeX rendering, we'll have to make do).
The Jacobi symbol is a generalisation of the Legendre symbol - denoted identically - which indicates whether a number is a quadratic residue modulo an odd prime (if n is a quadratic residue modulo p, i.e. there is a k with k^2 ≡ n (mod p) and n is not a multiple of p, then (n/p) = 1; if n is a multiple of p, then (n/p) = 0; otherwise (n/p) = -1). The Jacobi symbol lifts the restriction that the 'denominator' be an odd prime and allows arbitrary odd numbers as 'denominators'. Its value is the product of the Legendre symbols with the same 'numerator' for all primes dividing n (according to multiplicity). More on that, and on how to compute Jacobi symbols efficiently, can be found in the linked article.
The line should correctly read
m = (integer - jacobi_symbol(d,integer)) // 2
The following lines I completely fail to understand; logically, what should follow here is the calculation of
W_m and W_{m+1} using the recursion
W_{2j} ≡ W_j^2 − 2 (mod n)
W_{2j+1} ≡ W_j·W_{j+1} − W_1 (mod n)
An efficient method of using that recursion to compute the required values is given around formula (11) of the PDF.
w_m0 = w1 * 2 // m % integer
w_m1 = w1 * 2 // (m + 1) % integer
w_m2 = (w_m0 * w_m1 - w1) % integer
The remainder of the function is almost correct, except of course that it now gets the wrong data due to earlier misunderstandings.
if w1 * w_m0 != 2 * w_m2:
The (in)equality here should be modulo integer, namely if (w1*w_m0 - 2*w_m2) % integer != 0.
return False
b = pow(b, (integer - 1) // 2, integer)
return b * w_m0 % integer == 2
Note, however, that if n is a prime, then
b^((n-1)/2) ≡ (b/n) (mod n)
where (b/n) is the Legendre (or Jacobi) symbol (for prime 'denominators', the Jacobi symbol is the Legendre symbol), hence b^((n-1)/2) ≡ ±1 (mod n). So you could use that as an extra check: if W_m is not 2 or n-2, n can't be prime, nor can it be if b^((n-1)/2) (mod n) is not 1 or n-1.
Probably computing b^((n-1)/2) (mod n) first and checking whether that's 1 or n-1 is a good idea, since if that check fails (that is the Euler pseudoprime test, by the way) you don't need the other, no less expensive, computations anymore, and if it succeeds, you very likely need to compute them anyway.
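For illustration, a minimal sketch of that pre-check; the function name euler_check is mine, and n is assumed to be odd and at least 3:

def euler_check(b, n):
    # Euler pseudoprime test: for prime n, b^((n-1)/2) ≡ ±1 (mod n)
    e = pow(b, (n - 1) >> 1, n)
    return e == 1 or e == n - 1

# in frobenius_pseudoprime one could then bail out early:
# if not euler_check(b, integer):
#     return False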
Regarding the corrections, they seem correct, except for one that made a glitch I previously overlooked possibly worse:
if w1 * wm != 2 * wm1 % integer:
That applies the modulus only to 2 * wm1.
Concerning the recursion for the Wj, I think it is best to explain with a working implementation, first in toto for easy copy and paste:
from math import log  # log is used below to locate m's highest set bit

def compute_wm_wm1(w1, m, n):
    a, b = 2, w1
    bits = int(log(m, 2)) - 2
    if bits < 0:
        bits = 0
    mask = 1 << bits
    while mask <= m:
        mask <<= 1
    mask >>= 1
    while mask > 0:
        if (mask & m) != 0:
            a, b = (a*b - w1) % n, (b*b - 2) % n
        else:
            a, b = (a*a - 2) % n, (a*b - w1) % n
        mask >>= 1
    return a, b
Then with explanations in between:
def compute_wm_wm1(w1,m,n):
We need the value of W_1, the index of the desired number, and the number by which to take the modulus as input. The value W_0 is always 2, so we don't need that as a parameter.
Call it as
wm, wm1 = compute_wm_wm1(w1,m,integer)
in frobenius_pseudoprime (aside: not a good name, most of the numbers returning True are real primes).
a, b = 2, w1
We initialise a and b to W_0 and W_1 respectively. At each point, a holds the value of W_j and b the value of W_{j+1}, where j is the value of the bits of m so far consumed. For example, with m = 13, the values of j, a and b develop as follows:
consumed | remaining | j  | a    | b
         | 1101      | 0  | w_0  | w_1
1        | 101       | 1  | w_1  | w_2
11       | 01        | 3  | w_3  | w_4
110      | 1         | 6  | w_6  | w_7
1101     |           | 13 | w_13 | w_14
The bits are consumed left-to-right, so we have to find the first set bit of m and place our 'pointer' right before it
bits = int(log(m, 2)) - 2
if bits < 0:
    bits = 0
mask = 1 << bits
I subtracted a bit from the computed logarithm just to be entirely sure that we don't get fooled by a floating point error (by the way, using log limits you to numbers of at most 1024 bits, about 308 decimal digits; if you want to treat larger numbers, you have to find the base-2 logarithm of m in a different way, using log was the simplest way, and it's just a proof of concept, so I used that here).
while mask <= m:
    mask <<= 1
Shift the mask until it's greater than m, so the set bit points just before m's first set bit. Then shift one position back, so we point at the bit.
mask >>= 1
while mask > 0:
    if (mask & m) != 0:
        a, b = (a*b - w1) % n, (b*b - 2) % n
If the next bit is set, the value of the initial portion of consumed bits of m goes from j to 2*j+1, so the next values of the W sequence we need are W_{2j+1} for a and W_{2j+2} for b. By the above recursion formula,
W_{2j+1} = W_j * W_{j+1} - W_1 (mod n)
W_{2j+2} = W_{j+1}^2 - 2 (mod n)
Since a was W_j and b was W_{j+1}, a becomes (a*b - w1) % n and b becomes (b*b - 2) % n.
    else:
        a, b = (a*a - 2) % n, (a*b - w1) % n
If the next bit is not set, the value of the initial portion of consumed bits of m goes from j to 2*j, so a becomes W_{2j} = (W_j^2 - 2) (mod n), and b becomes
W_{2j+1} = (W_j * W_{j+1} - W_1) (mod n).
    mask >>= 1
Move the pointer to the next bit. When we have moved past the final bit, mask becomes 0 and the loop ends. The initial portion of consumed bits of m is now all of m's bits, so the value is of course m.
Then we can
return a, b
Some additional remarks:
def find_prime_number(bits, test):
    while True:
        number = random(3, 1 << bits, 2)
        for _ in range(test):
            if not frobenius_pseudoprime(number):
                break
        else:
            return number
Primes are not too frequent among the larger numbers, so just picking random numbers is likely to take a lot of attempts to hit one. You will probably find a prime (or probable prime) faster if you pick one random number and check candidates in order.
Another point is that a test such as the Frobenius test is disproportionately expensive for discovering that, e.g., a multiple of 3 is composite. Before using such a test (or a Miller-Rabin test, or a Lucas test, or an Euler test, ...), you should definitely do a bit of trial division to weed out most of the composites and do the work only where it has a fighting chance of being worth it.
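For illustration, a minimal sketch of such a trial-division prefilter; the helper name and the cutoff of 100 are arbitrary choices of mine:

def survives_trial_division(n, limit=100):
    # quickly reject numbers with a factor below the limit
    if n < 2:
        return False
    for p in range(2, limit):
        if n % p == 0:
            return n == p
    return True

# only the survivors get the expensive test:
# if survives_trial_division(number) and frobenius_pseudoprime(number): ...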
Oh, and the is_square function isn't prepared to deal with arguments less than 2; divide-by-zero errors lurk there, so
def is_square(integer):
    if integer < 0:
        return False
    if integer < 2:
        return True
    x = integer // 2
should help.

Python 3 integer division. How to make math operators consistent with C

I need to port quite a few formulas from C to Python and vice versa. What is the best way to make sure that nothing breaks in the process?
I am primarily worried about automatic int/int = float conversions.
You could use the // operator. It performs an integer division, but it's not quite what you'd expect from C:
A quote from here:
The // operator performs a quirky kind of integer division. When the result is positive, you can think of it as truncating (not rounding) to 0 decimal places, but be careful with that.
When integer-dividing negative numbers, the // operator rounds “up” to the nearest integer. Mathematically speaking, it’s rounding “down” since −6 is less than −5, but it could trip you up if you were expecting it to truncate to −5.
For example, -11 // 2 in Python returns -6, whereas -11 / 2 in C returns -5.
I'd suggest writing and thoroughly unit-testing a custom integer division function that "emulates" C behaviour.
The page I linked above also has a link to PEP 238 which has some interesting background information about division and the changes from Python 2 to 3. There are some suggestions about what to use for integer division, like divmod(x, y)[0] and int(x/y) for positive numbers, perhaps you'll find more useful things there.
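For example, a minimal sketch of such a helper (the name c_div is mine), with a couple of quick checks:

def c_div(a, b):
    # C's integer division truncates toward zero
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

assert c_div(-11, 2) == -5
assert c_div(11, -2) == -5
assert c_div(-11, -2) == 5
assert c_div(11, 2) == 5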
In C:
-11/2 = -5
In Python:
-11/2 = -5.5
And also in Python:
-11//2 = -6
To achieve C-like behaviour, write int(-11/2) in Python. This will evaluate to -5.
Some ways to compute integer division with C semantics are as follows:
def div_c0(a, b):
    if (a >= 0) != (b >= 0) and a % b:
        return a // b + 1
    else:
        return a // b

def div_c1(a, b):
    q, r = a // b, a % b
    if (a >= 0) != (b >= 0) and r:
        return q + 1
    else:
        return q

def div_c2(a, b):
    q, r = divmod(a, b)
    if (a >= 0) != (b >= 0) and r:
        return q + 1
    else:
        return q

def mod_c(a, b):
    return (a % b if b >= 0 else a % -b) if a >= 0 else (-(-a % b) if b >= 0 else a % b)

def div_c3(a, b):
    r = mod_c(a, b)
    return (a - r) // b
With timings:
import itertools

n = 100
l = [x for x in range(-n, n + 1)]
ll = [(a, b) for a, b in itertools.product(l, repeat=2) if b]
funcs = div_c0, div_c1, div_c2, div_c3
for func in funcs:
    correct = all(func(a, b) == funcs[0](a, b) for a, b in ll)
    print(f"{func.__name__} correct:{correct} ", end="")
    %timeit [func(a, b) for a, b in ll]
# div_c0 correct:True 100 loops, best of 5: 10.3 ms per loop
# div_c1 correct:True 100 loops, best of 5: 11.5 ms per loop
# div_c2 correct:True 100 loops, best of 5: 13.2 ms per loop
# div_c3 correct:True 100 loops, best of 5: 15.4 ms per loop
Indicating the first approach to be the fastest.
For implementing C's % using Python, see here.
In the opposite direction:
Since Python 3's divmod (or //) integer division requires the remainder to have the same sign as the divisor when the remainder is non-zero, it is inconsistent with many other languages (quote from 1.4. Integer Arithmetic).
To make your "C-like" result match Python's, you should compare the remainder with the divisor (suggestion: check whether the xor of their sign bits is 1, or whether their product is negative), and if they differ in sign, add the divisor to the remainder and subtract 1 from the quotient.
// Python Divmod requires a remainder with the same sign as the divisor for
// a non-zero remainder
// Assuming isPyCompatible is a flag to distinguish C/Python mode
isPyCompatible *= (int)remainder;
if (isPyCompatible)
{
    int32_t xorRes = remainder ^ divisor;
    int32_t andRes = xorRes & ((int32_t)((uint32_t)1<<31));
    if (andRes)
    {
        remainder += divisor;
        quotient -= 1;
    }
}
(Credit to Gawarkiewicz M. for pointing this out.)
You will need to know what the formula does, and understand both the C implementation and how to implement it in Python. But unless you are doing integer maths it should be quite similar, and if you are doing integer maths, the question is why. :)
Integer maths is used either for some specific purpose, often related to computers, or because it's faster than floats when doing massive computations, like Fractint does for fractals, and in that case Python is usually not the right choice. ;)
