Erase common constant of equation using Lean

I want to prove the goal below.
n_n: ℕ
n_ih: n_n * (n_n + 1) / 2 = arith_sum n_n
⊢ (n_n + 1) * (n_n + 1 + 1) / 2 = n_n + 1 + n_n * (n_n + 1) / 2
ring, simp, and linarith are not working.
I also tried calc, but it gets too long.
Is there an automatic command to erase a common constant from both sides of an equation?

I would say that you were asking the wrong question. Your hypothesis and goal contain /, but this is not mathematical division; it is a pathological function which computer scientists use, which takes as input two natural numbers and is forced to return a natural number, so it often can't return the right answer. For example, 5 / 2 = 2 with the division you're using. Computer scientists call it "division with remainder" and I call it "broken and should never be used".

When I'm doing this sort of exercise with my class I always coerce everything to the rationals before I do the division, so the division is mathematical division rather than this pathological function, which does not satisfy things like (a / b) * b = a. The fact that this division doesn't obey the rules of normal division is why you can't get the tactics to work. If you coerce everything to the rationals before doing the division then you won't get into this mess, and ring will work fine.
If you do want to persevere down the natural-division road, then one approach would be to start by proving that n(n+1) is always even, so that you can deduce (n(n+1)/2) * 2 = n(n+1). Alternatively you could avoid this by observing that to show A/2 = B/2 it suffices to prove that A = B. But either way you'll have to do a few lines of manual fiddling, because you're not using mathematical functions, you're using computer science approximations, so mathematical tactics don't work with them.
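To spell out the algebra behind that second observation (my worked decomposition, not part of the original answer): the left-hand numerator splits as

(n + 1) * (n + 1 + 1) = n * (n + 1) + 2 * (n + 1)

and because the extra term 2 * (n + 1) is exactly divisible by 2, even natural division distributes over this sum:

(n + 1) * (n + 1 + 1) / 2 = n * (n + 1) / 2 + (n + 1)

which is exactly the goal, up to reordering the right-hand side.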
Here's what my approach looks like:
import algebra.big_operators
open_locale big_operators
open finset
def arith_sum (n : ℕ) := ∑ i in range n, (i : ℚ) -- answer is rational
example (n : ℕ) : arith_sum n = n*(n-1)/2 :=
begin
  unfold arith_sum,
  induction n with d hd,
  { simp },  -- base case: the empty sum is 0, and 0 * (0 - 1) / 2 = 0
  { rw [finset.sum_range_succ, hd, nat.succ_eq_add_one],
    push_cast,  -- normalise the ℕ → ℚ coercions
    ring, -- works now: this is genuine division in ℚ
  }
end

Related

Is there a tactic for solving such trivial goals (lean theorem proving)?

I'm a beginner and I'm stuck with the following:
import tactic.linarith
import tactic.suggest
noncomputable theory
open_locale classical
lemma two_ne_four_mul_any (n:ℕ) : 2 ≠ 2 * 2 * n := begin
  cases n,
  linarith,
  rw mul_assoc,
  ???
end
The state is now:
n : ℕ
⊢ 2 ≠ 2 * (2 * n.succ)
and it seems so trivial that I thought there must be a tactic for solving it. But linarith, ring, simp, and trivial don't work.
So, did I miss some important import?
I also tried to solve this using existing lemmas. In a first step I wanted to reach:
n : ℕ
⊢ 1 ≠ 2 * n.succ
in the hope that some higher-level tactic would now see that it is true. However, I don't know how to apply an operation to both sides of an equation. Shouldn't it be somehow possible to divide both sides by 2?
My plan was to proceed by changing the RHS to 2*(n+1) and 2*n+2 and maybe the goal to
⊢ 0 ≠ 2 * n + 1
in the hope of finding applicable lemmas in the library.
linarith knows linear arithmetic, and this is a linear arithmetic goal, but it is obscured by the use of nat.succ. If you rewrite it away then linarith will work.
example (n : ℕ): 2 ≠ 2 * (2 * n.succ) :=
by rw nat.succ_eq_add_one; linarith
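For completeness, here is how that rewrite fits into the original lemma (a sketch assembled from the question's own steps plus the rewrite above; untested, but every step already appears in this thread):
lemma two_ne_four_mul_any (n : ℕ) : 2 ≠ 2 * 2 * n :=
begin
  cases n,
  { linarith },  -- n = 0: the goal is 2 ≠ 0
  { rw [mul_assoc, nat.succ_eq_add_one],  -- turn n.succ into n + 1
    linarith },
end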

Recursive arithmetic sequence in Haskell

It's been nearly 30 years since I took an algebra class and I am struggling with some of the concepts in Haskell as I work through Learn You a Haskell. The concept that I am working on now is "recursion". I have watched several YouTube videos on the subject and found a site with the arithmetic sequence problem a_n = 8 + 3(a_{n-1}), which I understand to be a_n = a_{n-1} + 3. This is what I have in Haskell.
addThree :: (Integral a) => a -> a
addThree 1 = 8
addThree n = (n-1) + 3
Running the script yields:
addThree 1
8
addThree 2
4
addThree 3
6
I am able to solve this and similar recursions on paper, (after polishing much rust), but do not understand the syntax in Haskell.
My Question How do I define the base and the function in Haskell as per my example?
If this is not the place for such questions, kindly direct me to where I should post. I see there are Stack Exchanges for Super User, Programmers, and Mathematics, but not sure which of the Stack family best fits my question.
First a word on algebra and your problem: I think you are slightly wrong - if we write 3x it usually means 3*x (mathematicians are even lazier than programmers), so your series should indeed look like a_n = 8 + 3*a_{n-1} IMO.
Then a_n is the n-th element in a series of a's: a_0, a_1, a_2, a_3, ...; that's why there is a big difference between (n-1) and addThree (n-1), as the latter would designate a_{n-1} while the former would just be a number not really connected to your series.
OK, let's have a look at your series a_n = 8 + 3*a_{n-1} (this is how I would understand it - because otherwise you would have x = 8 + 3*x and therefore just x = -4):
you can choose a_0 - let's say it's 0 (as you did?)
then a_1 = 8 + 3*0 = 8
a_2 = 8 + 3*8 = 32
a_3 = 8 + 3*32 = 104
...
OK, let's say you want to use recursion; then the problem directly translates into Haskell:
a :: Integer -> Integer
a 0 = 0
a n = 8 + 3 * a (n-1)
series :: [Integer]
series = map a [0..]
giving you (for the first 5 elements):
λ> take 5 series
[0,8,32,104,320]
Please note that this is a very badly performing way to do it - the recursive call in a redoes the same work over and over again, so producing the first n elements takes quadratic time.
A technical way to solve this is to observe that you only need the previous element to get the next one and use Data.List.unfoldr:
import Data.List (unfoldr)

series :: [Integer]
series = unfoldr (\ prev -> Just (prev, 8 + 3 * prev)) 0
now of course you can get a lot fancier with Haskell - for example you can define the series in terms of itself, using Haskell's laziness:
series :: [Integer]
series = 0 : map (\ prev -> 8 + 3 * prev) series
and I am sure there are many more ways out there to do it, but I hope this will help you along a bit.
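One more variant worth knowing (my addition, not from the original answer): the Prelude's iterate captures exactly this "apply the step function over and over" pattern:
series :: [Integer]
series = iterate (\ prev -> 8 + 3 * prev) 0
-- take 5 series == [0,8,32,104,320], same as before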

Does ((a^x) ^ 1/x) == a in Zp? (for Jablon's protocol)

I have to implement Jablon's protocol (paper) but I've been sitting on a bug for two hours.
I'm not very good with math so I don't know if it's my fault in writing it or it just isn't possible. If it isn't possible, I don't see how Jablon's protocol can be implemented since it relies on the fact that ((gP ^ x) ^ yi) ^ (1/x) == gP^yi .
Take the following code. It doesn't work.
BigInteger p = new BigInteger("101");
BigInteger a = new BigInteger("83");
BigInteger x = new BigInteger("13");
BigInteger ax = a.modPow(x, p);
BigInteger xinv = x.modInverse(p);
BigInteger axxinv = ax.modPow(xinv, p);
if (a.equals(axxinv))
System.out.println("Yay!");
else
System.out.println("How is this possible?");
Your problem is that you're not calculating the exponent 1/x correctly. We need (a^x)^(1/x) to be a. Fermat's Little Theorem tells us that a^(p-1) is 1 mod p. Therefore we want to find y such that x * y is 1 mod p-1, not mod p.
So you want BigInteger xinv = x.modInverse(p.subtract(BigInteger.ONE));.
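If you want to check this numerically outside Java, here is a minimal self-contained sketch (Haskell with hand-rolled powMod/invMod helpers; my code, not from the original answer) of the corrected computation for the numbers in the question:
-- verify that inverting the exponent mod (p-1), not mod p, recovers a
powMod :: Integer -> Integer -> Integer -> Integer
powMod b e m = go (b `mod` m) e 1
  where
    go _ 0 acc = acc
    go x k acc
      | odd k     = go ((x * x) `mod` m) (k `div` 2) ((acc * x) `mod` m)
      | otherwise = go ((x * x) `mod` m) (k `div` 2) acc

-- extended Euclid: egcd a b = (g, s, t) with a*s + b*t = g
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd a 0 = (a, 1, 0)
egcd a b = let (g, s, t) = egcd b (a `mod` b)
           in (g, t, s - (a `div` b) * t)

invMod :: Integer -> Integer -> Integer
invMod x m = let (_, s, _) = egcd x m in s `mod` m

main :: IO ()
main = do
  let p = 101; a = 83; x = 13
      ax   = powMod a x p
      xinv = invMod x (p - 1)       -- mod p-1, as Fermat requires
  print (powMod ax xinv p == a)     -- prints True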
This will not work if x shares a common factor with p-1. (Your case avoids that.) For that, you need additional theory.
If p is a prime, then r is a primitive root if none of r, r^2, r^3, ..., r^(p-2) are congruent to 1 mod p. There is no simple algorithm to produce a primitive root, but they are common so you usually only need to check a few. (For p=101, the first number I tried, 2, turned out to be a primitive root. 83 is also.) Testing them would seem to be hard, but it isn't so bad since it turns out that (omitting a bunch of theory here) only divisors of p-1 need to be checked. For instance for 101 you only need to check the powers 1, 2, 4, 5, 10, 20, 25 and 50.
Now if r is a primitive root, then every number mod p is some power of r. What power? That's called the discrete logarithm problem and is not simple. (Its difficulty is the basis of Diffie-Hellman and related well-known cryptography systems.) You can do it by brute force: trying 1, 2, 3, ... you eventually find that, for instance, 83 is 2^89 (mod 101).
But once we know that every number from 1 to 100 is 2 to some power, we are armed with a way to calculate roots: raising a number to the power of x just multiplies its exponent by x, and 2^100 is 1, so exponentiation by x is just multiplication of exponents by x (mod 100).
So suppose that we want y^13 to be 83. Then y is 2^k for some k such that k * 13 is 89 (mod 100). If you play around with the Chinese Remainder Theorem you can realize that k = 53 works. Therefore 2^53 (mod 101) = 93 is the 13th root of 83.
That is harder than what we did before. But suppose that we wanted to take, say, the 5th root of 44 mod 101. We can't use the simple procedure because 5 does not have a multiplicative inverse mod 100. However 44 is 2^15, so 2^3 = 8 is a 5th root. But there are 4 others, namely 2^23, 2^43, 2^63 and 2^83: adding 20 to the exponent adds 100 to the exponent of the 5th power, which changes nothing mod p.
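Checking both examples with the method just described (my arithmetic, following the answer's recipe):

13 * 53 = 689 ≡ 89 (mod 100), so 93^13 = (2^53)^13 = 2^89 = 83 (mod 101)
5 * 3 = 15, so 8^5 = (2^3)^5 = 2^15 = 44 (mod 101)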

Trying to get my head around recursion in Haskell?

I have used many recursive functions now but still have trouble getting my head around how exactly such a function works (I'm familiar with the second line (i.e. | n==0 = 1) but am not so familiar with the final line (i.e. | n>0 = fac (n-1) * n)).
fac :: Int -> Int
fac n
| n==0 = 1
| n>0 = fac (n-1) * n
Recursive algorithms are very closely linked to mathematical induction. Perhaps studying one will help you better understand the other.
You need to keep two key principles in mind when using recursion:
Base Case
Inductive Step
The Inductive Step is often the most difficult piece, because it assumes that everything it relies upon has already been computed correctly. Making this leap of faith can be difficult (at least it took me a while to get the hang of it), but it is only because we've got preconditions on our functions; those preconditions (in this case, that n is a non-negative integer) must be specified so that the inductive step and base case are always true.
The Base Case is also sometimes difficult: say, you know that the factorial N! is N * (N-1)!, but how exactly do you handle the first step on the ladder? (In this case, it is easy, define 0! := 1. This explicit definition provides you with a way to terminate the recursive application of your Inductive Step.)
You can see your type specification and guard patterns in this function are providing the preconditions that guarantee the Inductive Step can be used over and over again until it reaches the Base Case, n == 0. If the preconditions can't be met, recursive application of the Inductive Step would fail to reach the Base Case, and your computation would never terminate. (Well, it would when it runs out of memory. :)
One complicating factor, especially with functional programming languages, is the very strong desire to re-write all 'simple' recursive functions, as you have here, with variants that use Tail Calls or Tail Recursion.
Because this function calls itself, and then performs another operation on the result, you can build a call-chain like this:
fac 3 = 3 * fac 2
    fac 2 = 2 * fac 1
        fac 1 = 1 * fac 0
            fac 0 = 1
        fac 1 = 1
    fac 2 = 2
fac 3 = 6
That deep call stack takes up memory; but a compiler that notices that a function doesn't change any state after making a recursive call can optimize away the recursive calls. These kinds of functions typically pass along an accumulator argument. A fellow stacker has a very nice example: Tail Recursion in Haskell
factorial 1 c = c
factorial k c = factorial (k-1) (c*k)
This very complicated change :) means that the previous call chain is turned into this:
fac 3 1 = fac 2 3
    fac 2 3 = fac 1 6
        fac 1 6 = 6
(The nesting is there just for show; the runtime system wouldn't actually store details of the execution on the stack.)
This runs in constant memory, regardless of the value of n, and thus this optimization can convert 'impossible' algorithms into 'possible' algorithms. You'll see this kind of technique used extensively in functional programming, much as you'd see char * frequently in C programming or yield frequently in Ruby programming.
When you write | condition = expression it introduces a guard. The guards are tried in order from top to bottom until a true condition is found, and the corresponding expression is the result of your function.
This means that if n is zero, the result is 1, otherwise if n > 0 the result is fac (n-1) * n. If n is negative you get an incomplete pattern match error.
Once you've determined which expression to use, it's just a matter of substituting in the recursive calls to see what's going on.
fac 4
(fac 3) * 4
((fac 2) * 3) * 4
(((fac 1) * 2) * 3) * 4
((((fac 0) * 1) * 2) * 3) * 4
(((1 * 1) * 2) * 3) * 4
((1 * 2) * 3) * 4
(2 * 3) * 4
6 * 4
24
Especially for more complicated cases of recursion, the trick to save mental health is not to follow recursive calls, but just to assume that they "do the right thing". E.g. in your fac example, you want to compute fac n. Imagine you already have the result fac (n-1). Then it's trivial to calculate fac n: just multiply it by n. But the magic of induction is that this reasoning actually works (as long as you provide a proper base case in order to terminate recursion). So e.g. for Fibonacci numbers, just look at what the base case is, and assume that you are able to calculate the function for all numbers smaller than n:
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
See? You want to calculate fib n. It would be easy if you knew fib (n-1) and fib (n-2). But you can simply assume you are able to calculate them, and that the "deeper levels" of recursion do the "right thing". So just use them; it will work.
Note that there are much better ways to write this function, as currently many values are recalculated very often.
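One of those better ways, for reference (the standard lazy-list definition; my addition, not the answerer's): each value is computed once and shared, instead of being recomputed exponentially often:
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n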
BTW: The "best" way to write fac would be fac n = product [1..n].
What's throwing you? Maybe the guards (the |) are confusing things.
You can think of the guards loosely as a chain of ifs, or a switch statement (the difference being that only one branch can run, and it directly evaluates to a result: it does NOT perform a series of tasks, certainly has no side effects, and just evaluates to a value).
To put it in imperative-like pseudo-code:
fac n:
    if n == 0: return 1
    if n > 0: return n * (result of calling fac with n decreased by one)
The call tree by the other poster looks like it could be helpful. Do yourself a favor and really walk through it.

Computing recurrence relations in Haskell

Greetings, StackOverflow.
Let's say I have two recurrence relations for computing S(i,j). I would like to compute the values S(0,0), S(0,1), S(1,0), S(2,0), etc. in an asymptotically optimal way. A few minutes with pencil and paper reveal that it unfolds into a treelike structure which can be traversed in several ways. Now, it's unlikely the tree will be useful later on, so for now I'm looking to produce a nested list like [[S(0,0)], [S(1,0), S(0,1)], [S(2,0), S(2,1), S(1,2), S(0,2)], ...]. I have created a function to produce a flat list of S(i,0) (or S(0,j), depending on the first argument):
osrr xpa p predexp = os
  where
    -- the list refers to itself: each new element is built from the two before it
    os = os00 : os00 * (xpa + rp) : zipWith3 osrr' [1..] (tail os) os
    osrr' n a b = xpa * a + rp * n * b
    os00 = sqrt (pi / p) * predexp
    rp = recip (2 * p)
I am, however, at a loss as to how to proceed further.
I would suggest writing it in a direct recursive style and using memoization to create your traversal:
import qualified Data.MemoCombinators as Memo
osrr p = memoed
where
memoed = Memo.memo2 Memo.integral Memo.integral osrr'
osrr' a b = ... -- recursive calls to memoed (not osrr or osrr')
The library will create an infinite table to store values you have already computed. Because the memo constructors are under the p parameter, the table exists for the scope of p; i.e. osrr 1 2 3 will create a table for the purpose of computing S(2,3), and then clean it up. You can reuse the table for a specific p by partially applying:
osrr1 = osrr p
Now osrr1 will share the table between all its calls (which, depending on your situation, may or may not be what you want).
First, there must be some boundary conditions that you've not told us about.
Once you have those, try stating the solution as a recursively defined array. This works as long as you know an upper bound on i and j. Otherwise, use memo combinators.
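As an illustration of the "recursively defined array" suggestion (a sketch only: the question's actual recurrences for S were not reproduced here, so this uses a stand-in Pascal-style recurrence s(i,j) = s(i-1,j) + s(i,j-1) with s(0,_) = s(_,0) = 1 and a known bound n):
import Data.Array

sTable :: Int -> Array (Int, Int) Integer
sTable n = table
  where
    -- the array's elements refer back to the array itself;
    -- laziness fills each cell at most once, so lookups are memoized
    table = array ((0, 0), (n, n))
                  [ ((i, j), go i j) | i <- [0 .. n], j <- [0 .. n] ]
    go 0 _ = 1
    go _ 0 = 1
    go i j = table ! (i - 1, j) + table ! (i, j - 1)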
