How did they come up with the equation for calculating the modulo of two numbers "a%b = a-b*(a//b)"? - python-3.x

If we try different examples, you can see they satisfy the equation:
print(13 % -4)   # -3
print(-10 % 9)   # 8
The question is: where did they get the equation
a%b = a - b*(a//b) from?

It is the intuitive way to define a "remainder" operation, and it's actually in the Python documentation; you just have to tweak the equation a little bit.
The floor division and modulo operators are connected by the following
identity: x == (x//y)*y + (x%y). Floor division and modulo are also
connected with the built-in function divmod(): divmod(x, y) == (x//y,
x%y).
So subtracting (x//y)*y from x == (x//y)*y + (x%y) gives you x%y == x - (x//y)*y.
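You can also verify the identity, and hence the rearranged formula, directly (a quick check of my own, with arbitrary sign combinations):
for x, y in [(13, -4), (-10, 9), (7, 3), (-7, -3)]:
    # // floors toward negative infinity, so this holds for any signs (y != 0)
    assert x == (x // y) * y + (x % y)
    print(x % y, x - (x // y) * y)  # the two expressions always agree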

Related

Solve for x using python

I came across a problem. One string is taken as input, say
input_string = "12345 + x = x * 5 + (1+x)* x + (1+18/100)"
and I need to get the value of x as output using Python. I am not able to figure out the logic for this.
Here is a complete SymPy example for your input:
from sympy import Symbol, solve, Eq
from sympy.parsing.sympy_parser import parse_expr

input_string = "12345 + x = x * 5 + (1+x)* x + (1+18/100)"
x = Symbol('x', real=True)
# parse the two sides of the equation separately
lhs = parse_expr(input_string.split('=')[0], local_dict={'x': x})
rhs = parse_expr(input_string.split('=')[1], local_dict={'x': x})
print(lhs, "=", rhs)
sol = solve(Eq(lhs, rhs), x)
print(sol)
print([s.evalf() for s in sol])
This outputs:
x + 12345 = x*(x + 1) + 5*x + 59/50
[-5/2 + 9*sqrt(15247)/10, -9*sqrt(15247)/10 - 5/2]
[108.630868798908, -113.630868798908]
Note that solve() gives a list of solutions, and that SymPy normally does not evaluate fractions and square roots, as it prefers exact solutions without loss of precision. evalf() computes a floating-point value for these expressions.
Well, that example shows a quadratic equation, which may have no solutions, one solution, or two solutions. You would have to rearrange the terms symbolically to arrive at
input_string = "x**2 + 5*x - 12345 + (118/100)"
But that means you would need to implement rules for multiplication, addition, subtraction and potentially division. At least for Python there is a library called SymPy which can parse such strings and provide an expression that you can evaluate and even solve.
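For illustration, a sketch of finishing that rearrangement numerically with the quadratic formula (coefficients taken from the rearranged string above):
from math import sqrt

# x**2 + 5*x - 12345 + 118/100 = 0, i.e. a = 1, b = 5, c = -12345 + 1.18
a, b, c = 1, 5, -12345 + 1.18
disc = b * b - 4 * a * c
roots = [(-b + sqrt(disc)) / (2 * a), (-b - sqrt(disc)) / (2 * a)]
print(roots)  # approximately [108.63, -113.63], matching the SymPy result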

Create a Recursive Function as Well as a Closed Function Definition

The goal of this assignment is to take the recurrence relation given at the bottom, create a recursive function under recFunc(n), and also a closed-form function definition under nonRecFunc(n). A closed-form function means our function should depend solely on n, and its output should match the recursive function's exactly. Then find the values for n = 15 and n = 20, and use them as instructed below. You will probably need to use a characteristic equation to solve this problem.
What is the value of nonRecFunc(20) divided by nonRecFunc(15), rounded to the nearest integer?
Problem:
Solve the recurrence relation a_n = 12*a_(n-1) - 32*a_(n-2) with initial conditions a_0 = 1 and a_1 = 4.
I am confused as to how I should attack this problem and how I can use recursion to solve the issue.
def recFunc(n):
    # example recurrence: x_n = x_(n-1) + 6*x_(n-2), x_0 = 1, x_1 = 2
    if n == 0:
        return 1
    elif n == 1:
        return 2
    else:
        return recFunc(n - 1) + 6 * recFunc(n - 2)

def nonRecFunc(n):
    # closed form for the same example (derived below)
    return 4/5 * 3 ** n + 1/5 * (-2) ** n

for i in range(0, 10):
    print(recFunc(i))
    print(nonRecFunc(i))
    print()
As mentioned in my comment above, I leave the recursive solution to you.
For the more mathematical question of the non-recursive solution, consider this:
you have
x_n = a x_(n-1) + b x_(n-2)
This means that the change of x is more or less proportional to x itself, as x_n and x_(n-1) will be of the same order of magnitude. In other words, we are looking for a function type satisfying
df(n)/dn ~ f(n)
This is something exponential. So the above assumption is
x_n = alpha t^n + beta s^n
(later, when solving for s and t, the motivation for this becomes clear). From the start values we get
alpha + beta = 1
and
alpha t + beta s = 2
The recursion provides
alpha t^n + beta s^n = a (alpha t^(n-1) + beta s^(n-1)) + b (alpha t^(n-2) + beta s^(n-2))
or
t^2 alpha t^(n-2) + s^2 beta s^(n-2) = a (t alpha t^(n-2) + s beta s^(n-2)) + b (alpha t^(n-2) + beta s^(n-2))
This equation holds for all n such that you can derive an equation for t and s.
Plugging the results into the above equations gives you the non-recursive solution.
Try to reproduce it and then go for the actual task.
Cheers.
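Applying exactly this recipe to the recurrence from the question (a sketch of the derivation, so you can check your own work): the characteristic equation t^2 = 12t - 32 has roots 4 and 8, and the initial conditions a_0 = 1, a_1 = 4 give alpha = 1 and beta = 0, so a_n = 4^n.
def recFunc(n):
    # the assignment's recurrence: a_n = 12*a_(n-1) - 32*a_(n-2)
    if n == 0:
        return 1
    if n == 1:
        return 4
    return 12 * recFunc(n - 1) - 32 * recFunc(n - 2)

def nonRecFunc(n):
    # closed form: a_n = 1 * 4**n + 0 * 8**n
    return 4 ** n

print(round(nonRecFunc(20) / nonRecFunc(15)))  # 4**5 = 1024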

Exponential sum using recursion in Python

I found this here: exponential sum using recursion.python
It is exactly the same problem, with the same conditions to implement.
A brief description: we have started studying recursion and got some questions to solve using only recursion, without any loops.
So we are asked to write a function calculating the exponential sum, i.e. the sum of x^i/i! for i from 0 to n.
So here are my tries:
def exp_n_x(n, x):
    if n <= 0:
        return 1
    return (x/n)*exp_n_x(n-1, x)
It actually only calculates the n-th term, without summing up the others down to i = 0.
I tried to make the function sum every exponential element, so:
def exp_n_x(n, x):
    if n <= 0:
        return 1
    sum = (x/n)*exp_n_x(n-1, x)
    n = n - 1
    return sum + (x/n)*exp_n_x(n-1, x)
But it doesn't help me...
Thanks.
You are pretty close to a solution in the first function, but you are missing two critical things: you need to raise x to the power of n and divide it by n! (n-factorial). The factorial function is the product of all integers from 1 to n, with a special case that 0! is 1. Also, you are creating a product when you need a sum. Putting these together you have:
def factorial(n):
    if n < 2:
        return 1
    return n * factorial(n - 1)

def exp_n_x(n, x):
    if n < 1:
        return 1
    return x ** n / factorial(n) + exp_n_x(n - 1, x)
I think your problem is that the sum you're computing has terms that can be computed from the previous terms, but not (as far as I can see) from the previous sums. So you may need to have two separate recursive parts to your code. One computes the values of the next term based on the previous term, and one that adds the new term to the previous sum.
def term(n, x):
    if n <= 0:
        return 1
    return x / n * term(n-1, x)

def exp_sum(n, x):
    if n <= 0:
        return 1
    return exp_sum(n-1, x) + term(n, x)
This is hideously inefficient, since the terms for the smaller n values get computed over and over. But that's probably OK for learning about recursion (I expect you'll learn about ways to avoid this issue with memoization or dynamic programming eventually).
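For reference, a minimal sketch of that memoization idea using functools.lru_cache (my addition, not part of the original answer): caching term means each term value gets computed only once, so the overall work becomes linear.
from functools import lru_cache

@lru_cache(maxsize=None)
def term(n, x):
    # identical recursion, but each (n, x) pair is now computed only once
    if n <= 0:
        return 1
    return x / n * term(n-1, x)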
Note that you can combine the two functions into one, as long as you don't mind changing the function signature and returning two values at once (in a tuple) from the recursion. You could add a non-recursive helper function to make the user-facing function work as expected:
def exp_sum_recursive(n, x):  # this function returns (term, sum) tuples
    if n <= 0:
        return 1, 1
    term, sum = exp_sum_recursive(n-1, x)
    term *= x / n  # each new term is based off of the previous term
    return term, sum + term  # the new sum adds the new term to the old sum

def exp_sum(n, x):  # this is a non-recursive helper function
    return exp_sum_recursive(n, x)[1]  # it only returns the sum from the recursive version
Since you've achieved recursion with exp_n_x(), why throw an inefficient recursive factorial() into the mix when Python already provides one:
from math import factorial

def exp_n_x(n, x):
    return 1 if n < 1 else x ** n / factorial(n) + exp_n_x(n - 1, x)

Mutable variables in Haskell?

I'm starting to wrap my head around Haskell and do some exciting experiments. And there's one thing I just seem to be unable to comprehend (my previous "imperativist" experience talking, maybe).
Recently, I was yearning to implement an integer division function as if there were no multiply/divide operations. An immensely interesting brain-teaser which led to great confusion.
divide x y =
    if x < y then 0
    else 1 + divide (x - y) y
I compiled it and it... works(!). That's mind-blowing. However, I was sure that variables are immutable in Haskell. How come the variable x takes a new value with each recursive step? Or is my glorious compiler lying to me? Why does it work at all?
Your x here doesn't change during one function call (i.e., after creation); that's exactly what immutable means. What does change is the value of x across multiple (recursive) calls. In a single stack frame (function call), the value of x is constant.
An example of execution of your code, for a simple case:
call divide 8 3              -- (x = 8, y = 3), stack: divide 8 3
  step 1: x < y ? NO
  step 2: 1 + divide 5 3
  call divide 5 3            -- (x = 5, y = 3), stack: divide 8 3, divide 5 3
    step 1: x < y ? NO
    step 2: 1 + divide 2 3
    call divide 2 3          -- (x = 2, y = 3), stack: divide 8 3, divide 5 3, divide 2 3
      step 1: x < y ? YES
      return 0               -- unwinding bottom call
    return 1 + 0             -- stack: divide 8 3, divide 5 3; unwinding middle call
  return 1 + 1 + 0           -- stack: divide 8 3
I am aware that the above notation is not formalized in any way, but I hope it helps you see what recursion is about, and that x can have different values in different calls, because each call is a separate instance of the whole function, and thus also a separate instance of x.
x is actually not a variable but a parameter, and parameters aren't that different from those in imperative languages.
Maybe it'd look more obvious with explicit return statements?
-- for illustrative purposes only, doesn't actually work
divide x y =
    if x < y
        then return 0
        else return 1 + divide (x - y) y
You're not mutating x, just stacking up several function calls to calculate your desired result with the values they return.
Here's the same function in Python:
def divide(x, y):
    if x < y:
        return 0
    else:
        return 1 + divide(x - y, y)
Looks familiar, right? You can translate this to any language that allows recursion, and none of them would require you to mutate a variable.
Other than that, yes, your compiler is lying to you. Because you're not allowed to mutate values directly, the compiler can make a lot of extra assumptions based on your code, which helps in translating it to efficient machine code, and at that level there's no escaping mutability. The major benefit is that compilers are far less likely to introduce mutability-related bugs than we mortals are.

How to implement Frobenius pseudoprime algorithm?

Someone told me that the Frobenius pseudoprime test takes three times longer to run than the Miller–Rabin primality test, but has seven times the resolution. So if one were to run the former ten times and the latter thirty times, both would take the same time to run, but the former would provide about 233% more analysis power. While trying to find out how to perform the test, I discovered the following paper with the algorithm at the end:
A Simple Derivation for the Frobenius Pseudoprime Test
There is an attempt at implementing the algorithm below, but the program never prints out a number. Could someone who is more familiar with the math notation or the algorithm verify what is going on, please?
Edit 1: The code below has corrections added, but the implementation of compute_wm_wm1 is missing. Could someone explain the recursive definition from an algorithmic standpoint? It is not "clicking" for me.
Edit 2: The erroneous code has been removed, and an implementation of the compute_wm_wm1 function has been added below. It appears to work but may require further optimization to be practical.
from random import SystemRandom
from fractions import gcd

random = SystemRandom().randrange

def find_prime_number(bits, test):
    number = random((1 << bits - 1) + 1, 1 << bits, 2)
    while True:
        for _ in range(test):
            if not frobenius_pseudoprime(number):
                break
        else:
            return number
        number += 2

def frobenius_pseudoprime(integer):
    assert integer & 1 and integer >= 3
    a, b, d = choose_ab(integer)
    w1 = (a ** 2 * extended_gcd(b, integer)[0] - 2) % integer
    m = (integer - jacobi_symbol(d, integer)) >> 1
    wm, wm1 = compute_wm_wm1(w1, m, integer)
    if w1 * wm != 2 * wm1 % integer:
        return False
    b = pow(b, (integer - 1) >> 1, integer)
    return b * wm % integer == 2

def choose_ab(integer):
    a, b = random(1, integer), random(1, integer)
    d = a ** 2 - 4 * b
    while is_square(d) or gcd(2 * d * a * b, integer) != 1:
        a, b = random(1, integer), random(1, integer)
        d = a ** 2 - 4 * b
    return a, b, d

def is_square(integer):
    if integer < 0:
        return False
    if integer < 2:
        return True
    x = integer >> 1
    seen = set([x])
    while x * x != integer:
        x = (x + integer // x) >> 1
        if x in seen:
            return False
        seen.add(x)
    return True

def extended_gcd(n, d):
    x1, x2, y1, y2 = 0, 1, 1, 0
    while d:
        n, (q, d) = d, divmod(n, d)
        x1, x2, y1, y2 = x2 - q * x1, x1, y2 - q * y1, y1
    return x2, y2

def jacobi_symbol(n, d):
    j = 1
    while n:
        while not n & 1:
            n >>= 1
            if d & 7 in {3, 5}:
                j = -j
        n, d = d, n
        if n & 3 == 3 == d & 3:
            j = -j
        n %= d
    return j if d == 1 else 0

def compute_wm_wm1(w1, m, n):
    a, b = 2, w1
    for shift in range(m.bit_length() - 1, -1, -1):
        if m >> shift & 1:
            a, b = (a * b - w1) % n, (b * b - 2) % n
        else:
            a, b = (a * a - 2) % n, (a * b - w1) % n
    return a, b

print('Probably prime:\n', find_prime_number(300, 10))
You seem to have misunderstood the algorithm completely due to not being familiar with the notation.
def frobenius_pseudoprime(integer):
    assert integer & 1 and integer >= 3
    a, b, d = choose_ab(integer)
    w1 = (a ** 2 // b - 2) % integer
That comes from the line
W_0 ≡ 2 (mod n) and W_1 ≡ a^2 * b^(-1) − 2 (mod n)
But b^(-1) doesn't mean 1/b here; it denotes the modular inverse of b modulo n, i.e. an integer c with b·c ≡ 1 (mod n). You can most easily find such a c by continued fraction expansion of b/n or, equivalently but with slightly more computation, by the extended Euclidean algorithm. Since you're probably not familiar with continued fractions, I recommend the latter.
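For concreteness, a minimal extended-Euclid sketch (a hypothetical helper for illustration; the question's extended_gcd serves the same purpose):
def modinv(b, n):
    # maintain the invariant x0*b ≡ r0 (mod n) while running Euclid's algorithm
    x0, x1, r0, r1 = 1, 0, b, n
    while r1:
        q = r0 // r1
        x0, x1 = x1, x0 - q * x1
        r0, r1 = r1, r0 - q * r1
    assert r0 == 1  # the inverse exists only when gcd(b, n) == 1
    return x0 % n

print(modinv(3, 10))  # 7, since 3 * 7 = 21 ≡ 1 (mod 10)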
m = (integer - d // integer) // 2
comes from
n − (∆/n) = 2m
and misunderstands the Jacobi symbol as a fraction/division (admittedly, I have displayed it here even more like a fraction, but since the site doesn't support LaTeX rendering, we'll have to make do).
The Jacobi symbol is a generalisation of the Legendre symbol (denoted identically), which indicates whether a number is a quadratic residue modulo an odd prime: if n is a quadratic residue modulo p, i.e. there is a k with k^2 ≡ n (mod p), and n is not a multiple of p, then (n/p) = 1; if n is a multiple of p, then (n/p) = 0; otherwise (n/p) = -1. The Jacobi symbol lifts the restriction that the 'denominator' be an odd prime and allows arbitrary odd numbers as 'denominators'. Its value is the product of the Legendre symbols with the same 'numerator' for all primes dividing n (according to multiplicity). More on that, and on how to compute Jacobi symbols efficiently, can be found in the linked article.
The line should correctly read
m = (integer - jacobi_symbol(d,integer)) // 2
The following lines I completely fail to understand; logically, here should follow the calculation of W_m and W_{m+1} using the recursion
W_{2j} ≡ W_j^2 − 2 (mod n)
W_{2j+1} ≡ W_j·W_{j+1} − W_1 (mod n)
An efficient method of using that recursion to compute the required values is given around formula (11) of the PDF.
w_m0 = w1 * 2 // m % integer
w_m1 = w1 * 2 // (m + 1) % integer
w_m2 = (w_m0 * w_m1 - w1) % integer
The remainder of the function is almost correct, except of course that it now gets the wrong data due to earlier misunderstandings.
if w1 * w_m0 != 2 * w_m2:
The (in)equality here should be modulo integer, namely if (w1*w_m0 - 2*w_m2) % integer != 0.
return False
b = pow(b, (integer - 1) // 2, integer)
return b * w_m0 % integer == 2
Note, however, that if n is a prime, then
b^((n-1)/2) ≡ (b/n) (mod n)
where (b/n) is the Legendre (or Jacobi) symbol (for prime 'denominators', the Jacobi symbol is the Legendre symbol), hence b^((n-1)/2) ≡ ±1 (mod n). So you could use that as an extra check: if W_m is not 2 or n−2, n can't be prime, nor can it be if b^((n-1)/2) (mod n) is not 1 or n−1.
Probably computing b^((n-1)/2) (mod n) first and checking whether that's 1 or n-1 is a good idea, since if that check fails (that is the Euler pseudoprime test, by the way) you don't need the other, no less expensive, computations anymore, and if it succeeds, it's very likely that you need to compute it anyway.
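In code, that pre-check could look like this (a sketch of my own, not from the paper):
def euler_precheck(b, n):
    # if n is prime, b^((n-1)/2) must be ±1 (mod n); anything else proves n composite
    e = pow(b, (n - 1) >> 1, n)
    return e == 1 or e == n - 1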
Regarding the corrections, they seem correct, except for one that made a glitch I had previously overlooked possibly worse:
if w1 * wm != 2 * wm1 % integer:
That applies the modulus only to 2 * wm1.
Concerning the recursion for the W_j, I think it is best explained with a working implementation, first in toto for easy copy and paste:
from math import log

def compute_wm_wm1(w1, m, n):
    a, b = 2, w1
    bits = int(log(m, 2)) - 2
    if bits < 0:
        bits = 0
    mask = 1 << bits
    while mask <= m:
        mask <<= 1
    mask >>= 1
    while mask > 0:
        if (mask & m) != 0:
            a, b = (a*b - w1) % n, (b*b - 2) % n
        else:
            a, b = (a*a - 2) % n, (a*b - w1) % n
        mask >>= 1
    return a, b
Then with explanations in between:
def compute_wm_wm1(w1,m,n):
We need the value of W_1, the index of the desired number, and the number by which to take the modulus as input. The value W_0 is always 2, so we don't need it as a parameter.
Call it as
wm, wm1 = compute_wm_wm1(w1,m,integer)
in frobenius_pseudoprime (aside: not a good name, since most of the numbers for which it returns True are real primes).
    a, b = 2, w1
We initialise a and b to W_0 and W_1 respectively. At each point, a holds the value of W_j and b the value of W_{j+1}, where j is the value of the bits of m consumed so far. For example, with m = 13, the values of j, a and b develop as follows:
consumed  remaining    j     a      b
              1101     0    W_0    W_1
1              101     1    W_1    W_2
11              01     3    W_3    W_4
110              1     6    W_6    W_7
1101                  13    W_13   W_14
The bits are consumed left-to-right, so we have to find the first set bit of m and place our 'pointer' right before it
    bits = int(log(m, 2)) - 2
    if bits < 0:
        bits = 0
    mask = 1 << bits
I subtracted a bit from the computed logarithm just to be entirely sure that we don't get fooled by a floating point error. (By the way, using log limits you to numbers of at most 1024 bits, about 308 decimal digits; if you want to treat larger numbers, you have to find the base-2 logarithm of m in a different way. Using log was the simplest way, and this is just a proof of concept, so I used it here.)
    while mask <= m:
        mask <<= 1
Shift the mask until it's greater than m, so the set bit points just before m's first set bit. Then shift one position back, so we point at that bit.
    mask >>= 1
    while mask > 0:
        if (mask & m) != 0:
            a, b = (a*b - w1) % n, (b*b - 2) % n
If the next bit is set, the value of the initial portion of consumed bits of m goes from j to 2j + 1, so the next values of the W sequence we need are W_{2j+1} for a and W_{2j+2} for b. By the above recursion formula,
W_{2j+1} = W_j·W_{j+1} − W_1 (mod n)
W_{2j+2} = W_{j+1}^2 − 2 (mod n)
Since a was W_j and b was W_{j+1}, a becomes (a*b - w1) % n and b becomes (b*b - 2) % n.
        else:
            a, b = (a*a - 2) % n, (a*b - w1) % n
If the next bit is not set, the value of the initial portion of consumed bits of m goes from j to 2j, so a becomes W_{2j} = W_j^2 − 2 (mod n), and b becomes W_{2j+1} = W_j·W_{j+1} − W_1 (mod n).
        mask >>= 1
Move the pointer to the next bit. When we have moved past the final bit, mask becomes 0 and the loop ends. The initial portion of consumed bits of m is now all of m's bits, so the value of j is of course m.
Then we can
    return a, b
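One way to sanity-check the implementation (my sketch, not from the original answer): the doubling formulas above are those of a Lucas V-sequence with Q = 1, which also satisfies the plain linear recurrence W_k = W_1·W_(k-1) − W_(k-2) (mod n), so a direct iteration must agree.
def w_direct(j, w1, n):
    # iterate W_0 = 2, W_1 = w1, W_k = w1*W_(k-1) - W_(k-2) (mod n)
    prev, cur = 2 % n, w1 % n
    for _ in range(j):
        prev, cur = cur, (w1 * cur - prev) % n
    return prev

# spot check with arbitrary values: both sides give (92, 79)
assert compute_wm_wm1(5, 13, 101) == (w_direct(13, 5, 101), w_direct(14, 5, 101))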
Some additional remarks:
def find_prime_number(bits, test):
    while True:
        number = random(3, 1 << bits, 2)
        for _ in range(test):
            if not frobenius_pseudoprime(number):
                break
        else:
            return number
Primes are not too frequent among larger numbers, so just picking random numbers is likely to take a lot of attempts to hit one. You will probably find a (probable) prime faster if you pick one random number and then check successive candidates in order.
Another point is that a test such as the Frobenius test is disproportionately expensive for discovering that, e.g., a multiple of 3 is composite. Before using such a test (or a Miller–Rabin test, or a Lucas test, or an Euler test, ...), you should definitely do a bit of trial division to weed out most of the composites, and do the work only where it has a fighting chance of being worth it.
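A minimal trial-division filter might look like this (a sketch of my own; the bound is arbitrary):
def survives_trial_division(n, bound=100):
    # cheaply weed out multiples of small primes before the expensive test
    if n < 2:
        return False
    p = 2
    while p * p <= n and p < bound:
        if n % p == 0:
            return n == p
        p += 1 if p == 2 else 2
    return True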
Oh, and the is_square function isn't prepared to deal with arguments less than 2; divide-by-zero errors lurk there, so
def is_square(integer):
    if integer < 0:
        return False
    if integer < 2:
        return True
    x = integer // 2
should help.
