Use obtain in tactic mode in Lean theorem prover

How do I use a hypothesis of the shape
H : exists x, P x
in tactic mode? In term mode I would use
obtain x Hx, from H,

It's exactly the same syntax:
example (A : Type) (p : A × A) : A :=
begin
obtain x y, from p, x
end
You can of course re-enter tactic mode by using begin...end after from.
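In more recent Lean 3 with mathlib there is also a tactic-mode obtain that takes an anonymous-constructor pattern and := instead of from. A minimal sketch, assuming mathlib is imported:

example (P : ℕ → Prop) (H : ∃ x, P x) : true :=
begin
  obtain ⟨x, Hx⟩ := H,  -- x : ℕ and Hx : P x are now in the context
  trivial
end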

Related

Difference between the curly and round brackets for propositions in a theorem

I'm very confused by the use of braces for propositions in a theorem. See the following four snippets:
theorem contrapositive_1 : ∀ (P Q : Prop),
(P -> Q) -> (¬ Q -> ¬ P) := sorry
theorem contrapositive_2 (P Q : Prop) :
(P -> Q) -> (¬ Q -> ¬ P) := sorry
theorem contrapositive_3 : ∀ {P Q : Prop},
(P -> Q) -> (¬ Q -> ¬ P) := sorry
theorem contrapositive_4 {P Q : Prop} :
(P -> Q) -> (¬ Q -> ¬ P) := sorry
I thought they were all the same, but clearly they are not, because when I want to use contrapositive_1 and contrapositive_2 in the proof of another theorem, Lean shows an error for type mismatch:
term h1 has type P → Q : Prop but is expected to have type Prop : Type
On the other hand, contrapositive_3 and contrapositive_4 work fine.
What is the difference between the curly and round brackets?
The difference is whether the argument is explicit () or implicit {}. In general, you want arguments to be explicit, unless they can be figured out from arguments that come later. For example
lemma foobar {X : Type} (x : X) : x = x := sorry
In this case X is implicit, because once you tell Lean about x, it can figure out X itself (it is the type of x). In other words, if you want to apply the lemma to y : Y you can just write foobar y.
If instead you would make
lemma quux (X : Type) (x : X) : x = x := sorry
you would have to call it as quux Y y.
If you place an @ before a name you turn all the implicit arguments into explicit arguments. So you can call @foobar Y y. “Conversely”, if you want Lean to figure out Y by itself you can write an underscore: quux _ y.
The relevant section in TPIL: https://leanprover.github.io/theorem_proving_in_lean/dependent_type_theory.html#implicit-arguments
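Putting the above together, a small self-contained sketch (Lean 3 syntax, using the names from this answer; rfl replaces sorry so it compiles):

lemma foobar {X : Type} (x : X) : x = x := rfl
lemma quux (X : Type) (x : X) : x = x := rfl

example (Y : Type) (y : Y) : y = y := foobar y       -- X is inferred from y
example (Y : Type) (y : Y) : y = y := quux Y y       -- X must be given explicitly
example (Y : Type) (y : Y) : y = y := @foobar Y y    -- @ makes the implicit X explicit
example (Y : Type) (y : Y) : y = y := quux _ y       -- _ asks Lean to infer it anyway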
Arguments in () brackets are what you need to pass to the function yourself.
These are so-called explicit arguments.
Arguments in {} and ⦃⦄ brackets are figured out by Lean's unification system.
We use {} when Lean can deduce the { argument } to the function from ( another argument we provided ). {} and ⦃⦄ differ only in when the unifier is allowed to fill them in: strict-implicit ⦃⦄ arguments are only inserted once a later explicit argument is supplied.
Arguments in [] brackets are figured out by Lean's type class inference system.
We use [] when Lean can deduce the [ argument ] from the instances registered with the type class system (for example, those provided by mathlib).
A good article on brackets in Lean by Kevin Buzzard: https://www.ma.imperial.ac.uk/~buzzard/xena/formalising-mathematics-2022/Part_B/brackets.html.
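As a small illustration of the bracket kinds, here is a sketch in Lean 3; head_or_zero is a made-up name, and [has_zero α] is just one convenient type class from core Lean:

-- {α} is unified from the explicit list argument,
-- [has_zero α] is found by type class inference,
-- the list itself must be passed explicitly by the caller.
def head_or_zero {α : Type} [has_zero α] : list α → α
| []       := 0
| (x :: _) := x

#eval head_or_zero ([7, 8, 9] : list ℕ)   -- 7
#eval head_or_zero ([] : list ℕ)          -- 0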

What does one gain when redefining data constructors, and in place of what expressions might such a definition be substituted?

Reading this answer, I'm getting puzzled by the very first code fragment:
data Pair a = P a a
instance Functor Pair where
  fmap f (P x y) = P (f x) (f y)
instance Monad Pair where
  return x = P x x
  P a b >>= f = P x y
    where P x _ = f a
          P _ y = f b
What I see is the author redefining a data constructor two times and applying it to undefined variables.
First off, how does the second of the two definitions of P (those two that are found in the where clause of the instance Monad definition) matter if, as I believe, the first one (whichever we put first) always matches?
Second, according to what syntax rules could the expression P x y get evaluated when there are no expressions for x and y in scope, but rather some kind of a redefinition of a data constructor that happens to mention these variables' names?
Interesting to note that, if I instead write like:
P a b >>= f = P x y
  where P u _ = f a
        P _ v = f b
— substituting u & v for x & y
— I will observe an error: Variable not in scope for each of x & y, even though by all sane intuition renaming a bound variable makes no difference.
The equation
P x y = z
does not define P, only x and y. This is the case in general: only the data and newtype keywords can introduce new constructors. All other equations define only term variables. To disambiguate between the two, constructors always begin with upper case (for the special case of operators, : counts as "upper case punctuation") and variables always begin with lower case.
So, the meaning of
P a b >>= f = P x y
  where P x _ = f a
        P _ y = f b
reads like this:
Define a new function named (>>=). It is defined when the first argument matches the pattern P a b (binding a and b to the actual values P is applied to in the first argument), and when the second argument matches the pattern f (that is, always, and binding f to the value of that second argument).
Apply f to a. Make sure the result of this matches the pattern P x _ (binding x to the first value P is applied to and throwing away the second).
Apply f to b. Make sure the result of this matches the pattern P _ y (binding y to the second value P is applied to and throwing away the first).
Return the value P x y.
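For reference, here is a self-contained version you can load into GHCi. The Applicative instance and deriving Show are additions needed by modern GHC, not part of the original snippet; the return x = P x x from the question now lives in pure:

data Pair a = P a a deriving Show

instance Functor Pair where
  fmap f (P x y) = P (f x) (f y)

instance Applicative Pair where
  pure x          = P x x
  P f g <*> P x y = P (f x) (g y)

instance Monad Pair where
  P a b >>= f = P x y
    where P x _ = f a
          P _ y = f b

-- ghci> P 1 2 >>= \n -> P (n * 10) (n + 1)
-- P 10 3   (the first component comes from f 1, the second from f 2)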

How to compare two functions for equivalence, as in (λx.2*x) == (λx.x+x)?

Is there a way to compare two functions for equality? For example, (λx.2*x) == (λx.x+x) should return true, because those are obviously equivalent.
It's pretty well-known that function equality is undecidable in general, so you'll have to pick a subset of the problem that you're interested in. You might consider some of these partial solutions:
Presburger arithmetic is a decidable fragment of first-order logic + arithmetic.
The universe package offers function equality tests for total functions with finite domain.
You can check that your functions are equal on a whole bunch of inputs and treat that as evidence for equality on the untested inputs; check out QuickCheck (a sketch follows after this list).
SMT solvers make a best effort, sometimes responding "don't know" instead of "equal" or "not equal". There are several bindings to SMT solvers on Hackage; I don't have enough experience to suggest a best one, but Thomas M. DuBuisson suggests sbv.
There's a fun line of research on deciding function equality and other things on compact functions; the basics of this research is described in the blog post Seemingly impossible functional programs. (Note that compactness is a very strong and very subtle condition! It's not one that most Haskell functions satisfy.)
If you know your functions are linear, you can find a basis for the source space; then every function has a unique matrix representation.
You could attempt to define your own expression language, prove that equivalence is decidable for this language, and then embed that language in Haskell. This is the most flexible but also the most difficult way to make progress.
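For instance, the QuickCheck route mentioned above might look like the following sketch; it only gathers evidence on random inputs, it proves nothing:

import Test.QuickCheck

-- Test (\x -> 2 * x) against (\x -> x + x) on random Integers.
prop_double :: Integer -> Bool
prop_double x = 2 * x == x + x

main :: IO ()
main = quickCheck prop_double   -- +++ OK, passed 100 tests.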
This is undecidable in general, but for a suitable subset, you can indeed do it today effectively using SMT solvers:
$ ghci
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Prelude> :m Data.SBV
Prelude Data.SBV> (\x -> 2 * x) === (\x -> x + x :: SInteger)
Q.E.D.
Prelude Data.SBV> (\x -> 2 * x) === (\x -> 1 + x + x :: SInteger)
Falsifiable. Counter-example:
s0 = 0 :: Integer
For details, see: https://hackage.haskell.org/package/sbv
In addition to practical examples given in the other answer, let us pick the subset of functions expressible in typed lambda calculus; we can also allow product and sum types. Although checking whether two functions are equal can be as simple as applying them to a variable and comparing results, we cannot build the equality function within the programming language itself.
ETA: λProlog is a logic programming language for manipulating (typed lambda calculus) functions.
2 years have passed, but I want to add a little remark to this question. Originally, I asked if there is any way to tell if (λx.2*x) is equal to (λx.x+x). Addition and multiplication on the λ-calculus can be defined as:
add = (a b c d -> (a c (b c d)))
mul = (a b c -> (a (b c)))
Now, if you normalize the following terms:
add_x_x = (λx . (add x x))
mul_x_2 = (mul (λf x . (f (f x))))
You get:
result = (a b c -> (a b (a b c)))
for both programs. Since their normal forms are equal, both programs are obviously equal. While this doesn't work in general, it does work for many terms in practice: (λx.(mul 2 (mul 3 x))) and (λx.(mul 6 x)) have the same normal form, for example.
In a language with symbolic computation like Mathematica:
Or C# with a computer algebra library:
MathObject f(MathObject x) => x + x;
MathObject g(MathObject x) => 2 * x;
{
var x = new Symbol("x");
Console.WriteLine(f(x) == g(x));
}
The above displays 'True' at the console.
Proving two functions equal is undecidable in general but one can still prove functional equality in special cases as in your question.
Here's a sample proof in Lean
def foo : (λ x, 2 * x) = (λ x, x + x) :=
begin
apply funext, intro x,
cases x,
{ refl },
{ simp,
dsimp [has_mul.mul, nat.mul],
have zz : ∀ a : nat, 0 + a = a := by simp,
rw zz }
end
One can do the same in other dependently typed languages such as Coq, Agda, Idris.
The above is a tactic style proof. The actual definition of foo (the proof) that gets generated is quite a mouthful to be written by hand:
def foo : (λ (x : ℕ), 2 * x) = λ (x : ℕ), x + x :=
funext
(λ (x : ℕ),
nat.cases_on x (eq.refl (2 * 0))
(λ (a : ℕ),
eq.mpr
(id_locked
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3), congr (congr_arg eq e_1) e_2)
(2 * nat.succ a)
(nat.succ a * 2)
(mul_comm 2 (nat.succ a))
(nat.succ a + nat.succ a)
(nat.succ a + nat.succ a)
(eq.refl (nat.succ a + nat.succ a))))
(id_locked
(eq.mpr
(id_locked
(eq.rec (eq.refl (0 + nat.succ a + nat.succ a = nat.succ a + nat.succ a))
(eq.mpr
(id_locked
(eq.trans
(forall_congr_eq
(λ (a : ℕ),
eq.trans
((λ (a a_1 : ℕ) (e_1 : a = a_1) (a_2 a_3 : ℕ) (e_2 : a_2 = a_3),
congr (congr_arg eq e_1) e_2)
(0 + a)
a
(zero_add a)
a
a
(eq.refl a))
(propext (eq_self_iff_true a))))
(propext (implies_true_iff ℕ))))
trivial
(nat.succ a))))
(eq.refl (nat.succ a + nat.succ a))))))
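For comparison, with mathlib the whole proof shrinks to a one-liner, since two_mul already states 2 * a = a + a. A sketch, assuming mathlib is imported (e.g. via import tactic):

def foo' : (λ x : ℕ, 2 * x) = (λ x, x + x) :=
funext (λ x, two_mul x)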

Why is this symmetry assertion wrong?

I am really confused over why there is always a counter example to my following assertion.
//assertions must NEVER be wrong
assert Symmetric{
all r: univ -> univ | some ~r iff (some x, y: univ | x not in y and y not in x and
(x->y in r) and (y->x in r))
}
check Symmetric
The counter-example always shows 1 element in the univ set. However, this should not be the case, since I specified that there will be some ~r iff x not in y and y not in x. The only element should not satisfy this statement.
Yet why does the model keep showing a counterexample to my assertion?
---INSTANCE---
integers={}
univ={Univ$0}
Int={}
seq/Int={}
String={}
none={}
this/Univ={Univ$0}
skolem $Symmetric_r={Univ$0->Univ$0}
Would really appreciate some guidance!
In Alloy, assertions are used to check the correctness of logic sentences (properties of your model), not to specify properties that should always hold in your model. So you didn't specify that
there will be some ~r iff x not in y and y not in x
you instead asked Alloy whether it is true that for all binary relations r, some ~r iff x not in y and y not in x [...], and Alloy answered that it is not true, and gave you a concrete example (counterexample) in which that property doesn't hold.
A couple of other points:
some ~r doesn't mean "r is symmetric"; it simply means that the transpose of r is non-empty, which is not the same. A binary relation is symmetric if it is equal to its transpose, so you can write r = ~r to express that;
instead of some x, y: univ | x not in y and y not in x and [...] you can equivalently write some disj x, y: univ | [...];
however, that some expression doesn't really express the symmetry property, because all it says is that "there are some x, y such that both x->y and y->x are in r"; instead, you want to say something like "for all x, y, if x->y is in r, then y->x is in r too".

Why inductive datatypes forbid types like `data Bad a = C (Bad a -> a)` where the type recursion occurs in front of ->?

Agda manual on Inductive Data Types and Pattern Matching states:
To ensure normalisation, inductive occurrences must appear in strictly positive positions. For instance, the following datatype is not allowed:
data Bad : Set where
  bad : (Bad → Bad) → Bad
since there is a negative occurrence of Bad in the argument to the constructor.
Why is this requirement necessary for inductive data types?
The data type you gave is special in that it is an embedding of the untyped lambda calculus.
data Bad : Set where
  bad : (Bad → Bad) → Bad

unbad : Bad → (Bad → Bad)
unbad (bad f) = f
Let's see how. Recall, the untyped lambda calculus has these terms:
e := x | \x. e | e e'
We can define a translation [[e]] from untyped lambda calculus terms to Agda terms of type Bad (though not in Agda):
[[x]] = x
[[\x. e]] = bad (\x -> [[e]])
[[e e']] = unbad [[e]] [[e']]
Now you can use your favorite non-terminating untyped lambda term to produce a non-terminating term of type Bad. For example, we could translate (\x. x x) (\x. x x) to the non-terminating expression of type Bad below:
unbad (bad (\x -> unbad x x)) (bad (\x -> unbad x x))
Although the type happened to be a particularly convenient form for this argument, it can be generalized with a bit of work to any data type with negative occurrences of recursion.
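The same translation can also be written directly in Haskell, which performs no positivity check. This is a standalone sketch; the names MkBad, unbad and omega are chosen here for illustration:

-- The negative occurrence lets us type self-application, giving a closed,
-- non-recursive definition that loops forever when evaluated.
newtype Bad = MkBad (Bad -> Bad)

unbad :: Bad -> (Bad -> Bad)
unbad (MkBad f) = f

-- [[ (\x. x x) (\x. x x) ]]
omega :: Bad
omega = unbad w w
  where w = MkBad (\x -> unbad x x)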
An example of how such a data type allows us to inhabit any type is given in Turner, D.A. (2004-07-28), Total Functional Programming, sect. 3.1, page 758, under "Rule 2: Type recursion must be covariant".
Let's make a more elaborate example using Haskell. We'll start with a "bad" recursive data type
data Bad a = C (Bad a -> a)
and construct the Y combinator from it without any other form of recursion. This means that having such a data type allows us to construct any kind of recursion, or inhabit any type by an infinite recursion.
The Y combinator in the untyped lambda calculus is defined as
Y = λf.(λx.f (x x)) (λx.f (x x))
The key to it is that we apply x to itself in x x. In typed languages this is not directly possible, because there is no valid type x could possibly have. But our Bad data type allows this modulo adding/removing the constructor:
selfApp :: Bad a -> a
selfApp (x@(C x')) = x' x
Taking x :: Bad a, we can unwrap its constructor and apply the function inside to x itself. Once we know how to do this, it's easy to construct the Y combinator:
yc :: (a -> a) -> a
yc f = let fxx = C (\x -> f (selfApp x)) -- this is the λx.f (x x) part of Y
       in selfApp fxx
Note that neither selfApp nor yc is recursive; there is no recursive call of a function to itself. Recursion appears only in our recursive data type.
We can check that the constructed combinator indeed does what it should. We can make an infinite loop:
loop :: a
loop = yc id
or compute let's say GCD:
gcd' :: Int -> Int -> Int
gcd' = yc gcd0
  where
    gcd0 :: (Int -> Int -> Int) -> (Int -> Int -> Int)
    gcd0 rec a b | c == 0    = b
                 | otherwise = rec b c
      where c = a `mod` b
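To see it run, a small check using the definitions above:

main :: IO ()
main = do
  print (gcd' 12 18)   -- 6
  print (gcd' 35 21)   -- 7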

Resources