How do I easily rewrite nat.succ (nat.succ 0) as 2? - lean

Say my proof goal includes nat.succ (nat.succ 0), and I want to quickly rewrite it to say 2; I can define a whole new theorem:
theorem succ_succ_zero_eq_two : nat.succ (nat.succ 0) = 2 := rfl
then use that theorem with rw, but this seems very clunky. Is there any way to do this in a single line in my proof?

A simpler solution would use the change tactic:
change 2
See also ac_change, which can rearrange sums and products as well.

Use rw (show nat.succ (nat.succ 0) = 2, by refl),
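Both suggestions can be used inline; here is a sketch on a toy goal (the examples stand in for your actual proof):

```lean
-- with `change` (the whole goal is restated in its new form):
example : nat.succ (nat.succ 0) + 1 = 3 :=
begin
  change 2 + 1 = 3, -- succeeds because the two statements are definitionally equal
  refl,
end

-- with an inline `show …, by refl` proof handed to `rw`:
example : nat.succ (nat.succ 0) + 1 = 3 :=
begin
  rw (show nat.succ (nat.succ 0) = 2, by refl), -- `rw` tries `refl` afterwards and closes the goal
end
```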

erase common constant of equation using leanprover

I want to prove the goal below.
n_n: ℕ
n_ih: n_n * (n_n + 1) / 2 = arith_sum n_n
⊢ (n_n + 1) * (n_n + 1 + 1) / 2 = n_n + 1 + n_n * (n_n + 1) / 2
ring, simp, and linarith are not working.
I also tried calc, but it gets too long.
Is there an automatic tactic to erase a common constant from an equation?
I would say that you are asking the wrong question. Your hypothesis and goal contain /, but this is not mathematical division: it is a pathological function which computer scientists use, which takes two natural numbers as input and is forced to return a natural number, so it often can't return the right answer. For example, 5 / 2 = 2 with the division you're using. Computer scientists call it "division with remainder" and I call it "broken and should never be used".

When I'm doing this sort of exercise with my class, I always coerce everything to the rationals before doing the division, so that the division is mathematical division rather than this pathological function, which does not satisfy things like (a / b) * b = a. The fact that this division doesn't obey the rules of normal division is why you can't get the tactics to work. If you coerce everything to the rationals before doing the division, then you won't get into this mess and ring will work fine.
If you do want to persevere down the natural-division road, then one approach would be to start by proving that n(n+1) is always even, so that you can deduce (n(n+1)/2)*2 = n(n+1). Alternatively, you could avoid this by observing that to show A/2 = B/2 it suffices to prove that A = B. But either way you'll have to do a few lines of manual fiddling, because you're not using mathematical functions, you're using computer-science approximations, so mathematical tactics don't work with them.
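The second observation (to show A / 2 = B / 2 it suffices to prove A = B) is a one-line term-mode proof; here is a sketch using congr_arg:

```lean
example (A B : ℕ) (h : A = B) : A / 2 = B / 2 :=
congr_arg (λ x, x / 2) h
```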
Here's what my approach looks like:
import algebra.big_operators
open_locale big_operators
open finset
def arith_sum (n : ℕ) := ∑ i in range n, (i : ℚ) -- answer is rational
example (n : ℕ) : arith_sum n = n*(n-1)/2 :=
begin
  unfold arith_sum,
  induction n with d hd,
  { simp },
  { rw [finset.sum_range_succ, hd, nat.succ_eq_add_one],
    push_cast,
    ring, -- works now
  }
end

Is there a tactic for solving such trivial goals (lean theorem proving)?

I'm a beginner and I'm stuck with the following:
import tactic.linarith
import tactic.suggest
noncomputable theory
open_locale classical
lemma two_ne_four_mul_any (n : ℕ) : 2 ≠ 2 * 2 * n :=
begin
  cases n,
  linarith,
  rw mul_assoc,
  ???
end
The state is now:
n : ℕ
⊢ 2 ≠ 2 * (2 * n.succ)
and it seems so trivial that I thought there must be a tactic for solving it. But linarith, ring, simp, and trivial don't work.
So, did I miss some important import?
I also tried to solve this using existing lemmas. In a first step I wanted to reach:
n : ℕ
⊢ 1 ≠ 2 * n.succ
in the hope that some higher level tactic would now see that it is true. However, I don't know how to do some operation on both sides of an equation. Shouldn't it be somehow possible to divide both sides by 2?
My plan was to proceed by changing the rhs to 2*(n+1) and 2n+2 and maybe the goal to
⊢ 0 ≠ 2 * n + 1
in the hope of finding applicable lemmas in the library.
linarith knows linear arithmetic, and this is a linear arithmetic goal, but it is obscured by the use of nat.succ. If you rewrite it away then linarith will work.
example (n : ℕ): 2 ≠ 2 * (2 * n.succ) :=
by rw nat.succ_eq_add_one; linarith
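For the original lemma, the same rewrite slots in after cases (a sketch; the mul_assoc step mirrors the question, though linarith may not strictly need it):

```lean
import tactic.linarith

lemma two_ne_four_mul_any (n : ℕ) : 2 ≠ 2 * 2 * n :=
begin
  cases n,
  { linarith },                         -- 2 ≠ 2 * 2 * 0
  { rw [mul_assoc, nat.succ_eq_add_one], -- 2 ≠ 2 * (2 * (n + 1))
    linarith },
end
```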

Sympy - Limit with parameter constraint

I am trying to calculate the limit of a function with a constraint on one of its parameters. Unfortunately, I got stuck on the parameter constraint.
I used the following code, where 0 < alpha < 1 should be assumed:
import sympy
sympy.init_printing()
K,L,alpha = sympy.symbols("K L alpha")
Y = (K**alpha)*(L**(1-alpha))
sympy.limit(sympy.assumptions.refine(Y.subs(L,1),sympy.Q.positive(1-alpha) & sympy.Q.positive(alpha)),K,0,"-")
Yet, this doesn't work. Is there any possibility to handle assumptions as in Mathematica?
Best and thank you,
Fabian
To my knowledge, the assumptions made by the Assumptions module are not yet understood by the rest of SymPy. But limit can understand an assumption that is imposed at the time a symbol is created:
K, L = sympy.symbols("K L")
alpha = sympy.Symbol("alpha", positive=True)
Y = (K**alpha)*(L**(1-alpha))
sympy.limit(Y.subs(L, 1), K, 0, "-")
The limit now evaluates to 0.
There isn't a way to declare a symbol to be a number between 0 and 1, but one may be able to work around this by declaring a positive symbol, say t, and letting alpha = t/(1+t).
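That workaround can be sketched as follows (a sketch, not part of the original answer; the limit call mirrors the one above):

```python
import sympy

K = sympy.Symbol("K")
t = sympy.Symbol("t", positive=True)  # the only declared assumption: t > 0
alpha = t / (1 + t)                   # then 0 < alpha < 1 holds automatically
L = sympy.Symbol("L")

Y = K**alpha * L**(1 - alpha)

# Both exponents are now known to be positive:
# alpha = t/(1+t) and 1 - alpha = 1/(1+t).
lim = sympy.limit(Y.subs(L, 1), K, 0, "-")
print(lim)
```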

How to simplify polynomials in sqrt() to its absolute value of factor in maxima?

sqrt(a^2+2*a+1) can easily be rewritten as |a+1|. I would like to do this in Maxima, but I cannot make it work. Although sqrt(a^2) is automatically simplified to |a|, sqrt(a^2+2*a+1) is not, and radcan(sqrt(a^2+2*a+1)) gives a+1, which is incorrect. Is there any way to get the right simplification in Maxima?
Yep. Basically, you just have to tell Maxima to try a bit harder to factorise the inside of the square root. For example:
(%i1) x: sqrt(a^2 + 2*a + 1);
(%o1) sqrt(a^2 + 2*a + 1)
(%i2) factor(a^2 + 2*a + 1);
(%o2) (a + 1)^2
(%i3) map(factor, x);
(%o3) abs(a + 1)
The map here means that the function factor should be applied to each of the arguments of sqrt. What happens is that you get sqrt((a+1)^2) along the way, and this is automatically simplified to abs(a+1).
Note that the answer from radcan is correct for some values of a. As I understand it, this is all that radcan guarantees: it's useful for answering "Yikes! Is there a simpler way to think about this crazy expression?", but not particularly helpful for "Hmm, I'm not sure what the variables in this are. Is there a simpler form?"

Memoizing a Haskell Array

Continuing with my looking into CRCs via Haskell, I've written the following code to generate a table for CRC32 calculation:
import Data.Array (listArray)
import Data.Bits (xor, shift, (.&.))

crc32Table = listArray (0, 255) $ map (tbl 0xEDB88320) [0..255]

tbl polynomial byte = iterate f byte !! 8
  where f r = xor (shift r (-1)) ((r .&. 1) * polynomial)
This correctly generates the table. I want to make frequent accesses to this table but 1) don't want to hardcode the results into code and 2) don't want to recalculate this table every time I reference it.
How would I memoize this array in Haskell? The Haskell memoization pages haven't given me any clues.
The discussion at this question should help explain what's going on: When is memoization automatic in GHC Haskell?
As folks have said in the comments, crc32Table, if it is monomorphically typed, should only be computed once and retained.
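A sketch of a monomorphic version (the Word32 signature is an assumption; any fixed type works). As a top-level constant applicative form (CAF), the array is then built at most once and shared across all lookups:

```haskell
import Data.Array (Array, listArray, (!))
import Data.Bits (xor, shift, (.&.))
import Data.Word (Word32)

-- A monomorphic signature pins the type, so GHC keeps a single shared table (a CAF).
crc32Table :: Array Word32 Word32
crc32Table = listArray (0, 255) $ map (tbl 0xEDB88320) [0 .. 255]

-- Eight rounds of the reflected CRC-32 step for one input byte.
tbl :: Word32 -> Word32 -> Word32
tbl polynomial byte = iterate f byte !! 8
  where
    f r = xor (shift r (-1)) ((r .&. 1) * polynomial)

main :: IO ()
main = do
  print (crc32Table ! 1)   -- repeated lookups reuse the already-built array
  print (crc32Table ! 255)
```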
