How to prove example in Lean? - lean

Below is my wrong attempt at it. Any suggestions on how to proceed?
def le(f g : ℕ) : Prop := ∃ a : ℕ , a + f = g
local notation a ≤ b := le a b
example : ∀ f g : ℕ , f ≤ g → g ≤ f → f = g :=
begin
assume f g fg gf,
dsimp[le.nat] at fg,
dsimp[le.nat] at gf,
cases fg with a b,
cases gf with c d,
rewrite← b at d,
rewrite← d at b,
induction f with f' ih,
rewrite← d,
ring,
ring at d,
rewrite← b,
ring,
ring at b,
rewrite d,
ring,
cases a,
refl,
rewrite← d,
cases c,
ring,
contradiction,
cases c,
ring at d,
rewrite← d,
cases a,
ring,
ring at d,
ring at b,
exact b,
end

If you import tactic, then after rintro destructures both hypotheses the remaining goal is linear arithmetic over ℕ, which omega closes:
example : ∀ f g : ℕ , f ≤ g → g ≤ f → f = g :=
begin
rintro f g ⟨a, rfl⟩ ⟨b, hb⟩,
omega
end

If for pedagogical reasons you want to avoid omega:
example : ∀ f g : ℕ , f ≤ g → g ≤ f → f = g :=
begin
rintro f g ⟨a, rfl⟩ ⟨b, hb⟩,
rw ←add_assoc at hb,
conv_rhs at hb {rw ←zero_add f},
obtain ⟨-, rfl⟩ := nat.eq_zero_of_add_eq_zero (nat.add_right_cancel hb),
rw zero_add,
end
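A third possibility, as a sketch only: convert the custom le into the built-in ≤ and finish by antisymmetry. This assumes the core Lean 3 lemmas nat.le.intro, nat.le_antisymm and nat.add_comm.
example : ∀ f g : ℕ , f ≤ g → g ≤ f → f = g :=
begin
rintro f g ⟨a, ha⟩ ⟨b, hb⟩,
-- ha : a + f = g and hb : b + g = f; flip each sum so it fits nat.le.intro
exact nat.le_antisymm
  (nat.le.intro ((nat.add_comm f a).trans ha))
  (nat.le.intro ((nat.add_comm g b).trans hb)),
end
Note that the script never writes ≤ explicitly, so the local notation overriding ≤ does not get in the way.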

Related

Comparing equal types in a definition

I'm fairly new to Lean, so apologies if this is obvious. I'm trying to learn Lean and category theory, by doing some category theory exercises in Lean. I have these definitions for arrows and categories:
variable {α : Type u}
inductive Arrow : α → α → Type u
| Id (x : α) : Arrow x x
| Comp (g : Arrow b c) (f : Arrow a b) : Arrow a c
notation a " -→ " b => Arrow a b
notation a " ∘ " b => Arrow.Comp a b
structure Category :=
(assoc {a b c d : α} : ∀ (f : a -→ b) (g : b -→ c) (k : c -→ d),
(k ∘ (g ∘ f)) = ((k ∘ g) ∘ f))
(unitl {a b : α} : ∀ (f : a -→ b), ((Arrow.Id b) ∘ f) = f)
(unitr {a b : α} : ∀ (f : a -→ b), (f ∘ (Arrow.Id a)) = f)
This all compiles fine, so I try to define a discrete category as follows:
def IsDiscrete :=
∀ (x y : α) (f : Arrow x y), x = y ∧ f = Arrow.Id x
The intent is to express "all arrows are identity arrows", but the compiler complains that f has type "Arrow x y", not "Arrow x x". Of course, the whole point is that if arrow f exists, then x = y, so the comparison between f and Id x is sensible. How do I express this in Lean?
Alternatively, is there a better way to express arrows and/or categories in Lean? If so, why is that way better?
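As for expressing "all arrows are identity arrows": one way to make such a statement typecheck (just a sketch, using Lean 4's built-in heterogeneous equality HEq rather than anything specific to your development) is to state x = y first and then compare f with the identity arrow heterogeneously, so the two sides are allowed to live in different types:
-- hypothetical variant of IsDiscrete; HEq f (Arrow.Id x) reads as
-- "f equals Arrow.Id x, possibly across different types"
def IsDiscrete' :=
  ∀ (x y : α) (f : Arrow x y), x = y ∧ HEq f (Arrow.Id x)
A close relative is ∃ h : x = y, h ▸ f = Arrow.Id y, which transports f along the equality proof before comparing; which formulation is easier to work with depends on how you intend to use the predicate.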

Understanding Haskell Type Signature

I have to convert a Haskell type signature into a term.
The type signature is :
f :: (a -> b -> c) -> (d -> b) -> (d -> a) -> d -> c
The correct resulting term is :
f g h j x = g (j x) (h x)
and here lies my problem: as I understand it, g is a function which returns a function which returns c, and c is a function which returns a function d which returns b, and b is a function which returns itself, which then returns itself again, which then returns c.
Correct me if I am wrong.
What I don't get is why g takes (j x) as its first argument and (h x) as its second argument. Shouldn't it be the other way around? Haskell is right associative, and h is the second parameter given to the function f, not j.
g :: a -> b -> c, h :: d -> b, j :: d -> a, and x :: d are all independent arguments to f; their order implies nothing about how we might end up using them in the definition of f.
To start, we know that f uses its arguments to return a value of type c. But none of the arguments is a value of type c; the only way to get a value of type c is to use g. But in order to use g, you need arguments of type a and type b, and none of f's arguments have those types. But we could use h and j to get them, if we had an argument of type d to apply them to, and lo and behold, we do have a value of type d: the argument x!
f g h j x = let aValue = j x
bValue = h x
cValue = g aValue bValue
in cValue
which can be flattened to the original answer of
f g h j x = g (j x) (h x)
If you want to think of the return value of f as being d -> c, rather than just c, you can eliminate x from the definition with some point-free trickery.
f g h j = g <$> j <*> h -- liftA2 g j h
You can even go a little further to remove h and j as arguments, but the result, though simple, is even more incomprehensible:
f = flip . liftA2
Moral of the story: sometimes point-free style abstracts away distracting details, other times it completely obscures the meaning of the function.

Nested pattern matching in Lean for destructing hypothesis

Let us look at the example of some lemma (whose statement and whether it is true or not is irrelevant for this discussion):
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
intros n m p H, cases H with H H,
{cases H with H1 H2, cases H2 with u H2, cases H2 with H2 H3, sorry},
{cases H with H1 H2, sorry}
end
The point I wish to highlight here is that when destructing my hypothesis with the cases tactic,
I did not know any way other than to use the tactic several times (once for each 'layer', so to speak).
If I look at the same lemma in Coq:
Lemma L1 : forall (n m:nat) (p:nat -> Prop),
(p n /\ exists (u:nat), p u /\ p m) \/ (~p n /\ p m) -> n = m.
Proof.
intros n m p [[H1 [u [H2 H3]]]|[H1 H2]].
- admit.
-
Show.
I am able to destruct my assumption with a single nested pattern match.
I am guessing I can do the same sort of thing in Lean but I do not know how. I would be grateful to be told as I find the nested pattern match very convenient in practice.
You'll need mathlib for this; import tactic.rcases and use the rcases tactic.
import tactic.rcases
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
intros n m p H,
rcases H with ⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩,
-- the same two goals as in the original proof, reached in a single step
{ sorry },
{ sorry }
end
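rintro (also part of mathlib's rcases machinery) lets you do the same destructuring at introduction time, which mirrors the Coq intro pattern exactly; a sketch:
lemma L1' : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
rintro n m p (⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩),
{ sorry },
{ sorry }
end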

Membership proofs for AVL trees

I'm struggling a little to come up with a notion of membership proof for Data.AVL trees. I would like to be able to pass around a value of type n ∈ m, to mean that n appears as a key in the AVL tree m, so that given such a proof, get n m can always successfully yield a value associated with n. You can assume my AVL trees always contain values drawn from a join semilattice (A, ∨, ⊥) over a setoid (A, ≈), although below the idempotence is left implicit.
module Temp where
open import Algebra.FunctionProperties
open import Algebra.Structures renaming (IsCommutativeMonoid to IsCM)
open import Data.AVL
open import Data.List
open import Data.Nat hiding (_⊔_)
import Data.Nat.Properties as ℕ-Prop
open import Data.Product hiding (_-×-_)
open import Function
open import Level
open import Relation.Binary renaming (IsEquivalence to IsEq)
open import Relation.Binary.PropositionalEquality
module ℕ-AVL {v} (V : Set v)
= Data.AVL (const V) (StrictTotalOrder.isStrictTotalOrder ℕ-Prop.strictTotalOrder)
data ≈-List {a ℓ : Level} {A : Set a} (_≈_ : Rel A ℓ) : Rel (List A) (a ⊔ ℓ) where
[] : ≈-List _≈_ [] []
_∷_ : {x y : A} {xs ys : List A} → (x≈y : x ≈ y) → (xs≈ys : ≈-List _≈_ xs ys) → ≈-List _≈_ (x ∷ xs) (y ∷ ys)
_-×-_ : {a b c d ℓ₁ ℓ₂ : Level} {A : Set a} {B : Set b} {C : Set c} {D : Set d} →
REL A C ℓ₁ → REL B D ℓ₂ → A × B → C × D → Set (ℓ₁ ⊔ ℓ₂)
(R -×- S) (a , b) (c , d) = R a c × S b d
-- Two AVL trees are equal iff they have equal toList images.
≈-AVL : {a ℓ : Level} {A : Set a} → Rel A ℓ → Rel (ℕ-AVL.Tree A) (a ⊔ ℓ)
≈-AVL _≈_ = ≈-List ( _≡_ -×- _≈_ ) on (ℕ-AVL.toList _)
_∈_ : {a ℓ : Level} {A : Set a} {_≈_ : Rel A ℓ} {_∨_ : Op₂ A} {⊥ : A}
{{_ : IsCM _≈_ _∨_ ⊥}} → ℕ → ℕ-AVL.Tree A → Set (a ⊔ ℓ)
n ∈ m = {!!}
get : {a ℓ : Level} {A : Set a} {_≈_ : Rel A ℓ} {_∨_ : Op₂ A} {⊥ : A} →
{{_ : IsCM _≈_ _∨_ ⊥}} → (n : ℕ) → (m : ℕ-AVL.Tree A) → n ∈ m → A
get n m n∈m = {!!}
I feel like this should be easy, but I'm finding it hard. One option would be to use my notion of equivalence for AVL-trees, which says that two trees are equal iff they have the same toList image, and exploit the commutative monoid over A, defining
n ∈ m = ≈-AVL ≈ m (ℕ-AVL.unionWith _ ∨ m (ℕ-AVL.singleton _ n ⊥))
This essentially says that m contains n iff the singleton map (n, ⊥) is "below" m in the partial order induced by the commutative monoid (technically we need the idempotence for this interpretation to make sense). However given such a definition I'm not at all sure how to implement get.
I have also experimented with defining my own inductive ∈ relation but found that hard as I seemed to end up having to know about the complicated internal indices used by Data.AVL.
Finally I also tried using a value of type n ∈? m ≡ true, where ∈? is defined in Data.AVL, but didn't have much success there either. I would appreciate any suggestions.
I think your best bet is to define _∈_ as an inductive relation. Yes, this requires you to know the internals of Data.AVL, but I'm fairly sure this will be the most pleasant representation to work with.
The internal structure of Data.AVL is actually quite simple. We have a type Indexed.Tree, which is indexed by three values: the lower bound, the upper bound and the height. Given a tree t : Indexed.Tree lb ub h, all values inside t are within the range (lb, ub).
There's a slight twist to it, though: since we need to have a tree that can contain arbitrary values, we need to artificially extend the _<_ relation given by isStrictTotalOrder with two new values, which you can think of as a negative and a positive infinity. In the Data.AVL module, these are called ⊥⁺ and ⊤⁺. Trees that can contain arbitrary values are then of type Tree ⊥⁺ ⊤⁺ h.
The last piece is the balancing: each node requires the heights of its subtrees to be at most one level apart. We don't actually need to touch the balancing, but the function signatures might mention it.
Anyways, I'm working directly with this raw (indexed) variant. The opaque, unindexed version is just something like:
data Tree : Set ? where
tree : ∀ {h} → Indexed.Tree ⊥⁺ ⊤⁺ h → Tree
Some boilerplate first:
open import Data.Empty
open import Data.Product
open import Level
open import Relation.Binary
open import Relation.Binary.PropositionalEquality as P using (_≡_)
open import Relation.Nullary
import Data.AVL
module Membership
{k v ℓ}
{Key : Set k} (Value : Key → Set v)
{_<_ : Rel Key ℓ}
(isStrictTotalOrder : IsStrictTotalOrder _≡_ _<_) where
open Data.AVL Value isStrictTotalOrder public
open Extended-key public
open Height-invariants public
open IsStrictTotalOrder isStrictTotalOrder
Here's the _∈_ as an inductive relation:
data _∈_ {lb ub} (K : Key) :
∀ {n} → Indexed.Tree lb ub n → Set (k ⊔ v ⊔ ℓ) where
here : ∀ {hˡ hʳ} V
{l : Indexed.Tree lb [ K ] hˡ}
{r : Indexed.Tree [ K ] ub hʳ}
{b : hˡ ∼ hʳ} →
K ∈ Indexed.node (K , V) l r b
left : ∀ {hˡ hʳ K′} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
K < K′ →
K ∈ l →
K ∈ Indexed.node (K′ , V) l r b
right : ∀ {hˡ hʳ K′} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
K′ < K →
K ∈ r →
K ∈ Indexed.node (K′ , V) l r b
This is the sort of inductive definition you'd expect: either the key is in this inner node or it's down in one of the subtrees. We could also do without the less-than and greater-than proofs, but this is more convenient: when you want to show that a tree does not contain a particular element, you only have to track the path a lookup would take, instead of searching the whole tree.
How do we interpret those l and r implicit arguments? Notice that it makes perfect sense: we have a key K and naturally we require that the values contained in l fall between lb and K (it's actually [ K ], since we are using the extended _<_) and the values in r fall between K and ub. The balancing (b : hˡ ∼ hʳ) is there just so we can construct an actual tree node.
Your get function is then very simple:
get : ∀ {h lb ub n} {m : Indexed.Tree lb ub h} → n ∈ m → Value n
get (here V) = V
get (left _ p) = get p
get (right _ p) = get p
Well, I told you that this representation is fairly convenient to work with, and I'm going to prove it. One of the properties we'd like _∈_ to have is decidability: that is, we can construct a program that tells us whether an element is in a tree or not:
find : ∀ {h lb ub} n (m : Indexed.Tree lb ub h) → Dec (n ∈ m)
find will return either yes p, where p is a proof that n is inside m (n ∈ m), or no ¬p, where ¬p is a refutation of n ∈ m, i.e. a function n ∈ m → ⊥. We'll need one lemma:
lem : ∀ {lb ub hˡ hʳ K′ n} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
n ∈ Indexed.node (K′ , V) l r b →
(n ≯ K′ → n ≢ K′ → n ∈ l) × (n ≮ K′ → n ≢ K′ → n ∈ r)
lem (here V) =
(λ _ eq → ⊥-elim (eq P.refl)) , (λ _ eq → ⊥-elim (eq P.refl))
lem (left x p) = (λ _ _ → p) , (λ ≮ _ → ⊥-elim (≮ x))
lem (right x p) = (λ ≯ _ → ⊥-elim (≯ x)) , (λ _ _ → p)
This tells us that if we know n is contained in t and we know n is less than the key of the root of t, then n must be in the left subtree (and similarly for the right subtree).
Here's the implementation of the find function:
find : ∀ {h lb ub} n (m : Indexed.Tree lb ub h) → Dec (n ∈ m)
find n (Indexed.leaf _) = no λ ()
find n (Indexed.node (k , v) l r _) with compare n k
find n (Indexed.node (k , v) l r _) | tri< a ¬b ¬c with find n l
... | yes p = yes (left a p)
... | no ¬p = no λ ¬∈l → ¬p (proj₁ (lem ¬∈l) ¬c ¬b)
find n (Indexed.node (k , v) l r _) | tri≈ ¬a b ¬c
rewrite (P.sym b) = yes (here v)
find n (Indexed.node (k , v) l r _) | tri> ¬a ¬b c with find n r
... | yes p = yes (right c p)
... | no ¬p = no λ ¬∈r → ¬p (proj₂ (lem ¬∈r) ¬a ¬b)
The implementation is fairly straightforward, but I would suggest loading it up in Emacs, replacing some of the right-hand sides with holes and seeing what the types are. And finally, here are some tests:
open import Data.Nat
open import Data.Nat.Properties
open Membership
(λ _ → ℕ)
(StrictTotalOrder.isStrictTotalOrder strictTotalOrder)
un-tree : Tree → ∃ λ h → Indexed.Tree ⊥⁺ ⊤⁺ h
un-tree (tree t) = , t
test : Indexed.Tree _ _ _
test = proj₂ (un-tree
(insert 5 55 (insert 7 77 (insert 4 44 empty))))
Extract : ∀ {p} {P : Set p} → Dec P → Set _
Extract {P = P} (yes _) = P
Extract {P = P} (no _) = ¬ P
extract : ∀ {p} {P : Set p} (d : Dec P) → Extract d
extract (yes p) = p
extract (no ¬p) = ¬p
∈-test₁ : ¬ (2 ∈ test)
∈-test₁ = extract (find 2 test)
∈-test₂ : 4 ∈ test
∈-test₂ = extract (find 4 test)

F# version of haskell pattern match

How do I do this haskell in F# cleanly?
add 1 2 x = 3 + x
add 1 x y = 1 + x + y
add z x y = z + x + y
You can't overload the function itself, but you can use pattern matching directly:
let add z x y = // curried multiple parameters
match z, x, y with // convert to three-tuple to match on
| 1, 2, x -> 3 + x
| 1, x, y -> 1 + x + y
| z, x, y -> z + x + y
Usage is as expected: add 1 2 3
If you're willing to use tuples as arguments (i.e. forgo currying and partial application), you can write it even more succinctly:
let add = // expect three-tuple as first (and only) parameter
function // use that one value directly to match on
| 1, 2, x -> 3 + x
| 1, x, y -> 1 + x + y
| z, x, y -> z + x + y
Usage now is: add (1, 2, 3)
Recall that in Haskell the general form of a function written as a list of declarations with patterns:
f pat1 ... = e1
f pat2 ... = e2
f pat3 ... = e3
is just sugar for the case analysis:
f x1 .. xn = case (x1, .. xn) of
(pat1, ..., patn) -> e1
(pat2, ..., patn) -> e2
(pat3, ..., patn) -> e3
so the same translation can be made to other languages with pattern matching but without declaration-level patterns.
This is purely syntactic. Languages like Haskell, Standard ML and Mathematica allow you to write out different match cases as if they were different functions:
factorial 0 = 1
factorial 1 = 1
factorial n = n * factorial(n-1)
whereas languages like OCaml and F# require you to have a single function definition and use match or equivalent in its body:
let rec factorial = function
| 0 -> 1
| 1 -> 1
| n -> n * factorial(n-1)
Note that with this syntax you don't have to copy the function name over and over again, and you can factor match cases more easily:
let rec factorial = function
| 0 | 1 -> 1
| n -> n * factorial(n-1)
As yamen wrote, to keep currying in F#, use let f a b = match a, b with ...
In the classic red-black tree implementation, I find the duplication of the function names and right-hand sides in Standard ML and Haskell quite ugly:
balance :: RB a -> a -> RB a -> RB a
balance (T R a x b) y (T R c z d) = T R (T B a x b) y (T B c z d)
balance (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance a x b = T B a x b
compared to the equivalent OCaml or F#:
let balance = function
| B, z, (T(R, y, T(R, x, a, b), c) | T(R, x, a, T(R, y, b, c))), d
| B, x, a, (T(R, z, T(R, y, b, c), d) | T(R, y, b, T(R, z, c, d))) ->
T(R, y, T(B, x, a, b), T(B, z, c, d))
| a, b, c, d -> T(a, b, c, d)
