Consider the following lemma (whose statement, and whether it is true, is irrelevant for this discussion):
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
  intros n m p H, cases H with H H,
  { cases H with H1 H2, cases H2 with u H2, cases H2 with H2 H3, sorry },
  { cases H with H1 H2, sorry }
end
The point I wish to highlight is that, when destructing my hypothesis with the cases tactic,
I did not know any other way than to apply the tactic several times (once for each 'layer', so to speak).
If I look at the same lemma in Coq:
Lemma L1 : forall (n m:nat) (p:nat -> Prop),
(p n /\ exists (u:nat), p u /\ p m) \/ (~p n /\ p m) -> n = m.
Proof.
intros n m p [[H1 [u [H2 H3]]]|[H1 H2]].
- admit.
-
Show.
I am able to destruct my assumption with a single nested pattern match.
I am guessing I can do the same sort of thing in Lean, but I do not know how. I would be grateful to be told, as I find the nested pattern match very convenient in practice.
You can use the rcases tactic. You'll need mathlib for this, and to import tactic.rcases.
import tactic.rcases
lemma L1 : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
  intros n m p H,
  rcases H with ⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩,
  { sorry },
  { sorry }
end
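As an aside, tactic.rcases also provides rintro, which combines intros with an rcases pattern, giving something very close to Coq's nested intro-patterns. A sketch (the sorry placeholders stand for the two remaining goals):
import tactic.rcases
lemma L1' : forall (n m: ℕ) (p : ℕ → Prop), (p n ∧ ∃ (u:ℕ), p u ∧ p m) ∨ (¬p n ∧ p m) → n = m :=
begin
  rintro n m p (⟨H1, u, H2, H3⟩ | ⟨H1, H2⟩),
  { sorry },
  { sorry }
end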
I am dealing with the following grammar, which I have implemented in the form of a Haskell data type.
bool ::= tt | ff | bool ∧ bool | var
var ::= letter{letter|digit}*
I would like to write a function simplify :: bool → bool which simplifies boolean expressions in the usual way (while doing nothing to variables). For example, I would like
simplify(tt ∧ ff) = ff
simplify(tt ∧ x) = x
simplify(x ∧ (y ∧ z)) = x ∧ y ∧ z
where the letters x, y and z are denoting variables (var).
I feel that the natural definition is the following (pseudocode with pattern matching)
simplify(tt) = tt
simplify(ff) = ff
simplify(x) = x
simplify(tt ∧ b) = simplify(b)
simplify(b ∧ tt) = simplify(b)
simplify(b₁ ∧ b₂) = simplify(b₁) ∧ simplify(b₂) (†)
where b, b₁ and b₂ denote bools, and x denotes a var.
This definition works fine for all of the examples given above. The problem is with expressions such as (tt ∧ tt) ∧ (tt ∧ tt). Indeed, applying the definition, we have
simplify((tt ∧ tt) ∧ (tt ∧ tt)) = simplify(tt ∧ tt) ∧ simplify(tt ∧ tt)
= simplify(tt) ∧ simplify(tt)
= tt ∧ tt
which we should be able to further simplify as simply tt.
Thus maybe changing the definition line (†) to
simplify(b₁ ∧ b₂) = simplify(simplify(b₁) ∧ simplify(b₂))
will solve the problem, since it simplifies the results of conjunctions, which does actually work! But then it breaks when we have variables (it goes into an infinite loop in fact):
simplify(x ∧ y) = simplify(simplify(x) ∧ simplify(y))
= simplify(x ∧ y)
= ...
Thus my idea was to retain the old definition, but then actually simplify by finding fixed points. Indeed, the function simplify' :: bool → bool, written in Haskell below, behaves as desired:
simplify' :: BoolExpr -> BoolExpr
simplify' f
| (simplify f) == f = f
| otherwise = simplify' (simplify f)
It just feels like an inelegant solution to the problem, since it repeatedly runs a function which, if defined correctly, should only need to run once. I appreciate any feedback.
simplify(b₁ ∧ b₂) = simplify(simplify(b₁) ∧ simplify(b₂))
will solve the problem, since it simplifies the results of conjunctions, which does actually work! But then it breaks when we have variables (it goes into an infinite loop in fact):
Do you really want to recurse over simplify(b₁) ∧ simplify(b₂)? Maybe you want to compute simplify(b₁) and simplify(b₂) and then combine the results. As in,
data B = T | F | V | B :&: B deriving Show
s :: B -> B
s T = T
s F = F
s V = V
s (b1 :&: b2) = opAND (s b1) (s b2)
-- combine two already-simplified expressions without reintroducing T or F
opAND :: B -> B -> B
opAND F _ = F
opAND _ F = F
opAND T b = b
opAND b T = b
opAND a b = a :&: b
The simplify function s essentially folds your syntax tree, at each step guaranteeing the invariant that the simplified expression is either atomic or contains no occurrences of either F or T.
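For example, s ((T :&: T) :&: (T :&: T)) reduces to T, and s (V :&: (T :&: V)) reduces to V :&: V.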
The fundamental issue is that you're doing your simplify(tt ∧ b) test on unsimplified expressions.
The logic you're looking for would be more like:
simplify(a ∧ b) | simplify(a) == tt = simplify b
which can be efficiently implemented by simplifying both operands before the pattern match:
simplify(b₁ ∧ b₂) =
case (simplify(b₁), simplify(b₂)) of
(tt, x) -> x
(x, tt) -> x
...
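Concretely, here is a runnable sketch of that idea against a guessed definition of the question's type (the constructor names TT, FF, Var and And are my assumptions, since the original data declaration isn't shown):
data BoolExpr = TT | FF | Var String | And BoolExpr BoolExpr
  deriving (Show, Eq)

simplify :: BoolExpr -> BoolExpr
simplify (And a b) =
  -- simplify both operands first, then match on the results
  case (simplify a, simplify b) of
    (FF, _) -> FF
    (_, FF) -> FF
    (TT, y) -> y
    (x, TT) -> x
    (x, y)  -> And x y
simplify e = e  -- TT, FF and variables are already simplified
Because the children are fully simplified before the match, a single pass suffices and no fixed-point iteration is needed.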
I have been trying to write code which takes all the integers in a tree and returns their sum. I'm trying to do this with a type a, which comes from a data type:
data Tree a = Nil | Value a (Tree a) (Tree a)
deriving Show
and we want to use:
tree = Value 2 (Value 2 (Value 2 Nil Nil) Nil) (Value 2 Nil Nil)
and my code is as follows:
countTree :: (a -> a -> a) -> a -> Tree a -> a
countTree p k (Nil) = h
countTree p k (Value x y z) = x (+) (countTree p k y) (+) (countTree p k z)
and I want to run my code as countTree (+) 0 tree, and the result should be 8.
The problem is that when I run my code it tells me that x is applied to four arguments, but its type a has none, which I honestly don't understand. I've been modifying sections of my code, but with no success whatsoever; I could really use some assistance.
x (+) (countTree p k y) (+) (countTree p k z)
is attempting to treat x as a function, and pass to it as arguments all of
(+) (countTree p k y) (+) (countTree p k z)
If you want to have "x + recur left + recur right", you'd want something like:
x + (countTree p k y) + (countTree p k z)
I'm pretty sure, however, that you actually want to use p, not a hard-coded +. Using prefix notation, you'd have to rearrange it a bit, to something like:
(p (p x (countTree p k y)) (countTree p k z))
Or, you could use backticks to inline the calls to p as #bipll suggested:
x `p` (countTree p k y) `p` (countTree p k z)
A side note, but I'm also pretty sure you want h to be k.
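Putting those fixes together, a corrected version of the question's function might look like this (with k in place of the stray h, and p in place of the hard-coded +):
-- fold the tree with a binary operation p and a base value k
countTree :: (a -> a -> a) -> a -> Tree a -> a
countTree p k Nil           = k
countTree p k (Value x y z) = x `p` countTree p k y `p` countTree p k z
With this definition, countTree (+) 0 tree evaluates to 8 as desired.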
I have a recursive function* that is similar to an "optional map", with the following signature:
omap (f : option Z -> list nat) (l : list Z) : option (list nat)
I defined an equivalent (modulo list reversal) tail-recursive function (omap_tr below), and I would like to prove that both are equivalent, at least in the case Some.
I am currently failing to do so, either because my inductive invariant is not strong enough, or because I am not correctly applying the double induction. I wonder if there is a standard technique for this kind of transformation.
*The function has been simplified; for instance None seems useless here, but it is necessary in the original function.
Code
Here is the code of the (simplified) non-tail-recursive function, along with an example of a function f:
Fixpoint omap (f : option Z -> list nat) (l : list Z) : option (list nat) :=
match l with
| nil => Some nil
| z :: zr =>
let nr1 := f (Some z) in
let nr2 := match omap f zr with
| None => nil
| Some nr' => nr'
end in
Some (nr1 ++ nr2)
end.
Let f (oz : option Z) : list nat :=
match oz with
| None => nil
| Some z => Z.to_nat z :: nil
end.
For instance, omap f simply converts Z integers to nat integers:
Compute omap f (1 :: 2 :: 3 :: 4 :: nil)%Z.
= Some (1%nat :: 2%nat :: 3%nat :: 4%nat :: nil) : option (list nat)
I performed what I believe is a standard accumulator-based transformation, adding an acc parameter to both f and omap:
Fixpoint omap_tr (f_tr : option Z -> list nat -> list nat) (l : list Z)
(acc : list nat) : option (list nat) :=
match l with
| nil => Some acc
| z :: zr => let nr1 := f_tr (Some z) acc in
omap_tr f_tr zr nr1
end.
Let f_tr rz acc :=
match rz with
| None => acc
| Some z => Z.to_nat z :: acc
end.
It seems to work, despite returning a reversed list. Here's an example of its usage, with a non-empty accumulator:
Compute match omap_tr f_tr (3 :: 4 :: nil)%Z (rev (1 :: 2 :: nil))%nat with
| Some r => Some (rev r)
| None => None
end.
= Some (1%nat :: 2%nat :: 3%nat :: 4%nat :: nil) : option (list nat)
My first attempt included a nil accumulator:
Lemma omap_tr_failed:
forall l res,
omap_tr f_tr l nil = Some res ->
omap f l = Some (rev res).
But I failed to do the induction. I assume it must be because the invariant is not strong enough to handle the general case.
Still, it seems to me that any of the following lemmas should be provable, but I'm afraid they are also not strong enough to enable the proof:
Lemma omap_tr':
forall l acc res,
omap_tr f_tr l acc = Some (res ++ acc) ->
omap f l = Some (rev res).
Lemma omap_tr'':
forall l acc res,
omap_tr f_tr l acc = Some res ->
exists res',
omap f l = Some res' /\
res = (rev res') ++ acc.
Should a standard double induction allow these lemmas to be proven directly, or do I need stronger invariants?
Yes, your omap_tr'' invariant works perfectly for your lemma. Maybe you forgot to generalize over acc and res before doing induction, or forgot to apply some rewriting facts about app and rev?
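The proofs below also assume the usual list machinery is in scope; something like this prelude should suffice (an assumption on my part, since the imports aren't shown — ListNotations provides the [] syntax, and List the app and rev lemmas):
Require Import ZArith List.
Import ListNotations.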
Lemma omap_tr'':
forall l acc res,
omap_tr f_tr l acc = Some res ->
exists res',
omap f l = Some res' /\
res = (rev res') ++ acc.
Proof.
induction l as [|x l IH]; intros acc res; simpl.
- intros H. inversion H; subst acc; clear H.
exists []; eauto.
- intros H. apply IH in H.
destruct H as (res' & H & ?). subst res.
rewrite H.
eexists; split; eauto.
simpl. now rewrite <- app_assoc.
Qed.
Lemma omap_tr_correct :
forall l res,
omap_tr f_tr l [] = Some res ->
omap f l = Some (rev res).
Proof.
intros l res H. apply omap_tr'' in H.
destruct H as (res' & ? & E).
subst res.
now rewrite app_nil_r, rev_involutive.
Qed.
I'm struggling a little to come up with a notion of membership proof for Data.AVL trees. I would like to be able to pass around a value of type n ∈ m, to mean that n appears as a key in the AVL tree m, so that, given such a proof, get n m can always successfully yield a value associated with n. You can assume my AVL trees always contain values drawn from a join semilattice (A, ∨, ⊥) over a setoid (A, ≈), although below the idempotence is left implicit.
module Temp where
open import Algebra.FunctionProperties
open import Algebra.Structures renaming (IsCommutativeMonoid to IsCM)
open import Data.AVL
open import Data.List
open import Data.Nat hiding (_⊔_)
import Data.Nat.Properties as ℕ-Prop
open import Data.Product hiding (_-×-_)
open import Function
open import Level
open import Relation.Binary renaming (IsEquivalence to IsEq)
open import Relation.Binary.PropositionalEquality
module ℕ-AVL {v} (V : Set v)
= Data.AVL (const V) (StrictTotalOrder.isStrictTotalOrder ℕ-Prop.strictTotalOrder)
data ≈-List {a ℓ : Level} {A : Set a} (_≈_ : Rel A ℓ) : Rel (List A) (a ⊔ ℓ) where
[] : ≈-List _≈_ [] []
_∷_ : {x y : A} {xs ys : List A} → (x≈y : x ≈ y) → (xs≈ys : ≈-List _≈_ xs ys) → ≈-List _≈_ (x ∷ xs) (y ∷ ys)
_-×-_ : {a b c d ℓ₁ ℓ₂ : Level} {A : Set a} {B : Set b} {C : Set c} {D : Set d} →
REL A C ℓ₁ → REL B D ℓ₂ → A × B → C × D → Set (ℓ₁ ⊔ ℓ₂)
(R -×- S) (a , b) (c , d) = R a c × S b d
-- Two AVL trees are equal iff they have equal toList images.
≈-AVL : {a ℓ : Level} {A : Set a} → Rel A ℓ → Rel (ℕ-AVL.Tree A) (a ⊔ ℓ)
≈-AVL _≈_ = ≈-List ( _≡_ -×- _≈_ ) on (ℕ-AVL.toList _)
_∈_ : {a ℓ : Level} {A : Set a} {_≈_ : Rel A ℓ} {_∨_ : Op₂ A} {⊥ : A}
{{_ : IsCM _≈_ _∨_ ⊥}} → ℕ → ℕ-AVL.Tree A → Set (a ⊔ ℓ)
n ∈ m = {!!}
get : {a ℓ : Level} {A : Set a} {_≈_ : Rel A ℓ} {_∨_ : Op₂ A} {⊥ : A} →
{{_ : IsCM _≈_ _∨_ ⊥}} → (n : ℕ) → (m : ℕ-AVL.Tree A) → n ∈ m → A
get n m n∈m = {!!}
I feel like this should be easy, but I'm finding it hard. One option would be to use my notion of equivalence for AVL-trees, which says that two trees are equal iff they have the same toList image, and exploit the commutative monoid over A, defining
n ∈ m = ≈-AVL _≈_ m (ℕ-AVL.unionWith _ _∨_ m (ℕ-AVL.singleton _ n ⊥))
This essentially says that m contains n iff the singleton map (n, ⊥) is "below" m in the partial order induced by the commutative monoid (technically we need the idempotence for this interpretation to make sense). However given such a definition I'm not at all sure how to implement get.
I have also experimented with defining my own inductive ∈ relation but found that hard as I seemed to end up having to know about the complicated internal indices used by Data.AVL.
Finally I also tried using a value of type n ∈? m ≡ true, where ∈? is defined in Data.AVL, but didn't have much success there either. I would appreciate any suggestions.
I think your best bet is to define _∈_ as an inductive relation. Yes, this requires you to know the internals of Data.AVL, but I'm fairly sure this will be the most pleasant representation to work with.
The internal structure of Data.AVL is actually quite simple. We have a type Indexed.Tree, which is indexed by three values: the lower bound, the upper bound and the height. Given a tree t : Indexed.Tree lb ub h, all values inside t are within the range (lb, ub).
There's a slight twist to it, though: since we need to have a tree that can contain arbitrary values, we need to artificially extend the _<_ relation given by isStrictTotalOrder with two new values - you can think of those as negative and positive infinity. In the Data.AVL module, these are called ⊥⁺ and ⊤⁺. Trees that can contain arbitrary values are then of type Tree ⊥⁺ ⊤⁺ h.
The last piece is the balancing: each node requires the heights of its subtrees to be at most one level apart. We don't actually need to touch the balancing, but the function signatures might mention it.
Anyway, I'm working directly with this raw (indexed) variant. The opaque, unindexed version is just something like:
data Tree : Set (k ⊔ v ⊔ ℓ) where
  tree : ∀ {h} → Indexed.Tree ⊥⁺ ⊤⁺ h → Tree
Some boilerplate first:
open import Data.Empty
open import Data.Product
open import Level
open import Relation.Binary
open import Relation.Binary.PropositionalEquality as P using (_≡_)
open import Relation.Nullary
import Data.AVL
module Membership
{k v ℓ}
{Key : Set k} (Value : Key → Set v)
{_<_ : Rel Key ℓ}
(isStrictTotalOrder : IsStrictTotalOrder _≡_ _<_) where
open Data.AVL Value isStrictTotalOrder public
open Extended-key public
open Height-invariants public
open IsStrictTotalOrder isStrictTotalOrder
Here's the _∈_ as an inductive relation:
data _∈_ {lb ub} (K : Key) :
∀ {n} → Indexed.Tree lb ub n → Set (k ⊔ v ⊔ ℓ) where
here : ∀ {hˡ hʳ} V
{l : Indexed.Tree lb [ K ] hˡ}
{r : Indexed.Tree [ K ] ub hʳ}
{b : hˡ ∼ hʳ} →
K ∈ Indexed.node (K , V) l r b
left : ∀ {hˡ hʳ K′} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
K < K′ →
K ∈ l →
K ∈ Indexed.node (K′ , V) l r b
right : ∀ {hˡ hʳ K′} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
K′ < K →
K ∈ r →
K ∈ Indexed.node (K′ , V) l r b
This is the sort of inductive definition you'd expect: either the key is in this inner node or it's down in one of the subtrees. We could also do without the less-than, greater-than proofs, but this is more convenient - when you want to show that a tree does not contain a particular element, you only have to track the path a lookup would take, instead of searching the whole tree.
How should we interpret those l and r implicit arguments? Notice that it makes perfect sense: we have a key K, and naturally we require that the values contained in l fall between lb and K (it's actually [ K ], since we are using the extended _<_) and that the values in r fall between K and ub. The balancing argument (b : hˡ ∼ hʳ) is there just so we can construct an actual tree node.
Your get function is then very simple:
get : ∀ {h lb ub n} {m : Indexed.Tree lb ub h} → n ∈ m → Value n
get (here V) = V
get (left _ p) = get p
get (right _ p) = get p
Well, I told you that this representation is fairly convenient to work with, and I'm going to prove it. One of the properties we'd like _∈_ to have is decidability: that is, we can construct a program that tells us whether an element is in a tree or not:
find : ∀ {h lb ub} n (m : Indexed.Tree lb ub h) → Dec (n ∈ m)
find will return either yes p, where p is a proof that n is inside m (n ∈ m), or no ¬p, where ¬p is a refutation of n ∈ m (i.e. n ∈ m → ⊥). We'll need one lemma:
lem : ∀ {lb ub hˡ hʳ K′ n} {V : Value K′}
{l : Indexed.Tree lb [ K′ ] hˡ}
{r : Indexed.Tree [ K′ ] ub hʳ}
{b : hˡ ∼ hʳ} →
n ∈ Indexed.node (K′ , V) l r b →
(n ≯ K′ → n ≢ K′ → n ∈ l) × (n ≮ K′ → n ≢ K′ → n ∈ r)
lem (here V) =
(λ _ eq → ⊥-elim (eq P.refl)) , (λ _ eq → ⊥-elim (eq P.refl))
lem (left x p) = (λ _ _ → p) , (λ ≮ _ → ⊥-elim (≮ x))
lem (right x p) = (λ ≯ _ → ⊥-elim (≯ x)) , (λ _ _ → p)
This tells us that if we know n is contained in t, and we know n is neither greater than nor equal to the key at the root of t, then n must be in the left subtree (and similarly for the right subtree).
Here's the implementation of find function:
find : ∀ {h lb ub} n (m : Indexed.Tree lb ub h) → Dec (n ∈ m)
find n (Indexed.leaf _) = no λ ()
find n (Indexed.node (k , v) l r _) with compare n k
find n (Indexed.node (k , v) l r _) | tri< a ¬b ¬c with find n l
... | yes p = yes (left a p)
... | no ¬p = no λ ¬∈l → ¬p (proj₁ (lem ¬∈l) ¬c ¬b)
find n (Indexed.node (k , v) l r _) | tri≈ ¬a b ¬c
rewrite (P.sym b) = yes (here v)
find n (Indexed.node (k , v) l r _) | tri> ¬a ¬b c with find n r
... | yes p = yes (right c p)
... | no ¬p = no λ ¬∈r → ¬p (proj₂ (lem ¬∈r) ¬a ¬b)
The implementation is fairly straightforward, but I would suggest loading it up in Emacs, replacing some of the right-hand sides with holes, and seeing what the types are. And finally, here are some tests:
open import Data.Nat
open import Data.Nat.Properties
open Membership
(λ _ → ℕ)
(StrictTotalOrder.isStrictTotalOrder strictTotalOrder)
un-tree : Tree → ∃ λ h → Indexed.Tree ⊥⁺ ⊤⁺ h
un-tree (tree t) = , t
test : Indexed.Tree _ _ _
test = proj₂ (un-tree
(insert 5 55 (insert 7 77 (insert 4 44 empty))))
Extract : ∀ {p} {P : Set p} → Dec P → Set _
Extract {P = P} (yes _) = P
Extract {P = P} (no _) = ¬ P
extract : ∀ {p} {P : Set p} (d : Dec P) → Extract d
extract (yes p) = p
extract (no ¬p) = ¬p
∈-test₁ : ¬ (2 ∈ test)
∈-test₁ = extract (find 2 test)
∈-test₂ : 4 ∈ test
∈-test₂ = extract (find 4 test)
I'm trying to prove the following in Coq:
Goal (forall x:X, P(x) /\ Q(x)) -> ((forall x:X, P (x)) /\ (forall x:X, Q (x))).
Can someone please help? I'm not sure whether to split, make an assumption etc.
My apologies for being a complete noob
Goal forall (X : Type) (P Q : X->Prop),
(forall x : X, P x /\ Q x) -> (forall x : X, P x) /\ (forall x : X, Q x).
Proof.
intros X P Q H; split; intro x; apply (H x).
Qed.
Just some hints:
I recommend you use intros to name your hypotheses, split to separate the goals,
and exact to provide the proof terms (which may involve proj1 or proj2).
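Spelling those hints out, one possible proof script is the following (equivalent to the one-liner above, just more explicit):
Goal forall (X : Type) (P Q : X -> Prop),
  (forall x : X, P x /\ Q x) -> (forall x : X, P x) /\ (forall x : X, Q x).
Proof.
  intros X P Q H.
  split.
  - intro x. exact (proj1 (H x)).
  - intro x. exact (proj2 (H x)).
Qed.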