Section 4.4 of Theorem Proving in Lean shows the following:
example : (∀ x, p x) ↔ ¬ (∃ x, ¬ p x) := sorry
Here I'll focus on the right-to-left case:
¬ (∃ x, ¬ p x) → (∀ x, p x)
Approach 1
We know that we'll have a parameter of type ¬ (∃ x, ¬ p x) so let's start with that:
example : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
(assume hnExnpx : ¬ (∃ x, ¬ p x),
_)
We know that we'll have to return an expression of type (∀ x, p x) so let's begin constructing that:
example : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
(assume hnExnpx : ¬ (∃ x, ¬ p x),
(λ x, _))
At this point we need to return a value of type p x, and it isn't clear that we can build one. Perhaps we need to try to come up with a false value instead.
The function hnExnpx returns false given a (∃ x, ¬ p x). So let's try to build one of those, apply hnExnpx to it and then use false.elim:
example : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
(assume hnExnpx : ¬ (∃ x, ¬ p x),
(λ x,
false.elim (hnExnpx (exists.intro x (λ hpx, _)))
))
And now we're back to needing another false value.
Approach 2
Chapter 3 mentioned that sometimes classical logic is needed.
A naive approach which uses (em (∀ x, p x)) leads us here:
example : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
(assume hnExnpx : ¬ (∃ x, ¬ p x),
or.elim (em (∀ x, p x))
(λ hAxpx, hAxpx)
(λ hnAxpx : ¬ (∀ x, p x), (λ x, _))
)
Again, we'll need a p x value or a false. The only new thing we have is hnAxpx : ¬∀ (x : α), p x. It's still not clear how to get a p x. hnAxpx does return false but we'd need a (∀ x, p x) which is the original thing we're looking for. :-)
Perhaps this one involves a more elaborate use of classical logic?
Any suggestions are welcome!
UPDATE
Here's an approach based on the comment by simon1505475 below which appears to work:
example : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
(assume hnExnpx : ¬ (∃ x, ¬ p x),
(λ x,
or.elim (em (p x))
(λ hpx : p x, hpx)
(λ hnpx : ¬ p x,
false.elim (hnExnpx (exists.intro x (λ hpx : p x, hnpx hpx))))))
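For comparison, here is a slightly shorter variant of the same idea, offered only as a sketch and assuming the same preamble as the exercise (the variables declarations and open classical): instead of case-splitting with em (p x), apply by_contradiction directly at the point where a p x is needed.
example : ¬ (∃ x, ¬ p x) → (∀ x, p x) :=
(assume hnExnpx : ¬ (∃ x, ¬ p x),
  (λ x,
    by_contradiction
      (λ hnpx : ¬ p x, hnExnpx (exists.intro x hnpx))))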
Related
I'm fairly new to Lean, so apologies if this is obvious. I'm trying to learn Lean and category theory, by doing some category theory exercises in Lean. I have these definitions for arrows and categories:
variable {α : Type u}
inductive Arrow : α → α → Type u
| Id (x : α) : Arrow x x
| Comp (g : Arrow b c) (f : Arrow a b) : Arrow a c
notation a " -→ " b => Arrow a b
notation a " ∘ " b => Arrow.Comp a b
structure Category :=
(assoc {a b c d : α} : ∀ (f : a -→ b) (g : b -→ c) (k : c -→ d),
(k ∘ (g ∘ f)) = ((k ∘ g) ∘ f))
(unitl {a b : α} : ∀ (f : a -→ b), ((Arrow.Id b) ∘ f) = f)
(unitr {a b : α} : ∀ (f : a -→ b), (f ∘ (Arrow.Id a)) = f)
This all compiles fine, so I try to define a discrete category as follows:
def IsDiscrete :=
∀ (x y : α) (f : Arrow x y), x = y ∧ f = Arrow.Id x
The intent is to express "all arrows are identity arrows", but the compiler complains that f has type "Arrow x y", not "Arrow x x". Of course, the whole point is that if arrow f exists, then x = y, so the comparison between f and Id x is sensible. How do I express this in Lean?
Alternatively, is there a better way to express arrows and/or categories in Lean? If so, why is that way better?
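One way to state the intended property, offered here only as a sketch (and leaving aside whether this is the best way to set up categories), is to use heterogeneous equality: HEq lets you compare f : Arrow x y with Arrow.Id x : Arrow x x even though their types only coincide once x = y is known.
def IsDiscrete :=
  ∀ (x y : α) (f : Arrow x y), x = y ∧ HEq f (Arrow.Id x)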
I am studying Haskell and learning about abstraction, substitution (beta equivalence), application, and free and bound variables (alpha equivalence), but I have some doubts about these exercises; I don't know whether my solutions are correct.
Make the following substitutions
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
3. ((λ x → y x) (λ y → y x) y) [x:= f y]
Sol. approximation, I don't know how to do it: ((\x -> y x)(\y -> y x) y) =>β
(\x -> y x)y x)[x:= f y] =>β y x [x:= f y] = y f y
4. ((λ x → λ y → y x x) y) [y:= f z]
Sol. approximation: ((\x -> (\y -> (y x x))) y) =>β ((\y -> (y x x)) y) =>α ((\y -> (y x x)) f z)
Another question I have is whether I can run these expressions on this website. It is a Lambda Calculus Calculator, but I do not know how to run these tests.
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
No, you can't rename y, it's free in (λ x → y x x). Only bound variables can be (consistently) α-renamed. But only free variables can be substituted, and there's no free x in that lambda term.
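So, if I'm reading the exercise right, the substitution in 1. simply leaves the term unchanged:
(λ x → y x x) [x:= f z] = (λ x → y x x)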
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
Yes, substituting x for y would allow it to be captured by the λ x, so you indeed must α-rename the x in (λ x → y x x) first to some new unique name as you did, but you've dropped the application to the free x for some reason. You can't just omit parts of a term, so it's ((\w -> y w w) x)[y:= x]. Now perform the substitution. Note you're not asked to perform the β-reduction of the resulting term, just the substitution.
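Concretely, carrying out just that substitution on the renamed term should give:
((λ w → y w w) x)[y:= x] = ((λ w → x w w) x)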
I'll leave the other two out. Just follow the rules carefully. It's easy if you rename all bound names to unique names first, even if the renaming is not strictly required, for instance
((λ x → y x) (λ y → y x) y) [x:= f y] -->
((λ w → y w) (λ z → z x) y) [x:= f y]
The "unique" part includes also the free variables used in the substitution terms, that might get captured after being substituted otherwise (i.e. without the renaming being performed first, in the terms in which they are being substituted). That's why we had to rename the bound y in the above term, -- because y appears free in the substitution term. We didn't have to rename the bound x but it made it easier that way.
Section 4.4 of Theorem Proving in Lean shows the following:
example : (∃ x, p x) ↔ ¬ (∀ x, ¬ p x) := sorry
Here I'll focus on the right-to-left case:
¬ (∀ x, ¬ p x) → (∃ x, p x)
We know we'll have a parameter of type ¬ (∀ x, ¬ p x) so let's start with that:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
_
)
At this point, if we try to use exists.intro with x:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
exists.intro x _)
we'll get an error because x is an unknown identifier.
However, as part of this set of exercises, we are given:
variables a : α
so let's try to use exists.intro with that:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
exists.intro a _)
We don't have a p a, so let's try to use classical logic via em (p a).
The first clause of the resulting or.elim is handled in a straightforward manner:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
or.elim (em (p a))
(λ hpa : p a, exists.intro a hpa)
(λ hnpa : ¬ p a, _))
Since we have a function which goes from (∀ x, ¬ p x) to false (i.e. hnAxnpx), let's try to build a value of type (∀ x, ¬ p x). Then we can call hnAxnpx on it and use false.elim.
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
or.elim (em (p a))
(λ hpa : p a, exists.intro a hpa)
(λ hnpa : ¬ p a, false.elim (hnAxnpx (λ a, _))))
At this point it seems we're so close! We're told we need a ¬p a where the underscore is. And it seems that hnpa is just such a value. So let's try that:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
or.elim (em (p a))
(λ hpa : p a, exists.intro a hpa)
(λ hnpa : ¬ p a, false.elim (hnAxnpx (λ a, hnpa))))
However, with that we get the following error:
type mismatch at application
hnAxnpx (λ (a : α), hnpa)
term
λ (a : α), hnpa
has type
α → ¬p a
but is expected to have type
∀ (x : α), ¬p x
Should I continue with the approach illustrated above? Or should an entirely different approach be considered?
UPDATE
Here's another approach:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
false.elim (hnAxnpx
(λ x,
(λ hpx : p x, _)))) -- Here we'll need a false.
However, it isn't clear to me how to get a false value at the point of the underscore.
UPDATE
Mario Carneiro on the leanprover Zulip group provided the following very helpful hints regarding this last approach:
Don't use false.elim where you did; that's throwing away the
information that we are trying to prove (∃ x, p x) and leads to an
unsolvable state
Use by_contradiction there instead, which gives you additionally the
assumption that ¬ (∃ x, p x)
Going with this advice we get:
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
(assume hnAxnpx : ¬ (∀ x, ¬ p x),
by_contradiction
(λ hnExpx : ¬ (∃ x, p x),
(hnAxnpx
(λ x, (λ hpx,
hnExpx (exists.intro x hpx))))))
which appears to work.
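For what it's worth, here is a rough tactic-mode rendering of the same argument, again only a sketch assuming the usual preamble for these exercises (the variables declarations and open classical):
example : ¬ (∀ x, ¬ p x) → (∃ x, p x) :=
begin
  intro hnAxnpx,
  apply by_contradiction,
  intro hnExpx,
  apply hnAxnpx,
  intros x hpx,
  exact hnExpx (exists.intro x hpx)
end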
I am working with two binary relations: g_o and pw_o, and I've defined pw_o below:
constants {A : Type} (g_o : A → A → Prop)
def pw_o (x y : A) : Prop := ∀ w : A, (g_o w x → g_o w y) ∧ (g_o y w → g_o x w)
I need to prove the following:
theorem prelim: ∀ x y z : A, g_o x y ∧ pw_o y z → g_o x z :=
I start with these tactics:
begin
intros,
cases a with h1 h2,
end
And I have this:
x y z : A,
h1 : g_o x y,
h2 : pw_o y z
⊢ g_o x z
Since pw_o is defined with a universal quantifier, I'd like to substitute w with x; then I would have (g_o x y → g_o x z) ∧ (g_o z x → g_o y x). After isolating the first conjunct with the "cases" tactic, I can use modus ponens on that first conjunct and h1.
How can I tell Lean to replace w in the definition of pw_o with x and replace x and y in the definition of pw_o with y and z, respectively?
Elimination of the universal quantifier is basically just application: to substitute x for w, just apply the hypothesis h2 : pw_o y z to x.
theorem prelim': ∀ x y z : A, g_o x y ∧ pw_o y z → g_o x z :=
begin
intros,
cases a with h1 h2,
cases h2 x with h3 _,
sorry
end
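Filling in the remaining step the way the question suggests (modus ponens on the first conjunct and h1), the finished proof would presumably look like this; the name prelim'' and the comment are mine:
theorem prelim'' : ∀ x y z : A, g_o x y ∧ pw_o y z → g_o x z :=
begin
  intros x y z a,
  cases a with h1 h2,
  cases h2 x with h3 _,   -- h3 : g_o x y → g_o x z
  exact h3 h1
end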
An n-tuple in the lambda calculus is usually defined as:
1-tuple: λ a t . t a
1-tuple-fst: λ t . t (λ a . a)
2-tuple: λ a b t . t a b
2-tuple-fst: λ t . t (λ a b . a)
2-tuple-snd: λ t . t (λ a b . b)
3-tuple: λ a b c t . t a b c
3-tuple-fst: λ t . t (λ a b c . a)
3-tuple-snd: λ t . t (λ a b c . b)
3-tuple-trd: λ t . t (λ a b c . c)
... and so on.
My question is: is it possible to implement a function that receives a Church numeral N and returns the corresponding N-tuple, for any N? Also, is it possible to extend this function so it also returns the corresponding accessors? The algorithm can't use any form of recursion, including fixed-point combinators.
~
Edit: as requested, elaborating on what I've tried.
I want that function not to depend on recursion / fixed-point combinators, so the obvious way to do it would be to use Church numerals for repetition. That said, I have tried randomly testing many expressions in order to learn how they grow. For example:
church_4 (λ a b c . a (b c))
Reduces to:
(λ a b c d e f . a ((((e d) c) b) a))
I've compared the reduction of many similar combinations church_4 (λ a b c . (a (b c))) to my desired results and noticed that I could implement the accessors as:
firstOf = (λ max n . (firstOf (sub max n) (firstOf n)))
access = (λ max idx t . (t (firstOf (sub max idx) (firstOf idx))))
Where sub is the subtraction operator and access church_5 church_2 means accessing the 3rd (because 2 is the 3rd natural) element of a 6-tuple.
Now, on the tuples. Notice that the problem is finding a term my_term such that, for example:
church_3 my_term
has the following normal form:
(λ a b c d t . ((((t a) b) c) d))
As you can see, I've almost found it, since:
church_3 (λ a b c . a (b c)) (λ a . a)
Reduces to:
(λ a b c d . (((a b) c) d))
Which is almost the result I need, except that t is missing.
That is what I've tried so far.
Let's try to implement the n-ary tuple constructor. I shall also aim for a simple implementation, meaning that I try sticking to the elimination of natural numbers and tuples, and try to avoid using other (Church encoded) data structures.
My strategy is as follows:
Write a well-typed version of the function in a dependent language.
Translate it to untyped lambda calculus.
The reason for this is that I quickly get lost in untyped lambda calculus and I'm bound to make quite a few mistakes along the way, while the dependently typed environment puts me on rails. Also, proof assistants are just great help for writing any kind of code.
Step one
I use Agda. I cheat a bit with type-in-type. It makes Agda inconsistent, but for this problem proper type universes would be a huge pain, and it's very unlikely that we actually run into an inconsistency here anyway.
{-# OPTIONS --type-in-type #-}
open import Data.Nat
open import Data.Vec
We need a notion of n-ary polymorphic functions. We store the argument types in a vector of length n:
NFun : ∀ {n} → Vec Set n → Set → Set
NFun [] r = r
NFun (x ∷ ts) r = x → NFun ts r
-- for example, NFun (Nat ∷ Nat ∷ []) = λ r → Nat → Nat → r
We have the usual Church encoding for tuples. The constructors for n-ary tuples are n-ary functions returning a tuple.
NTup : ∀ {n} → Vec Set n → Set
NTup ts = ∀ {r} → NFun ts r → r
NTupCons : ℕ → Set
NTupCons n = ∀ ts → NFun {n} ts (NTup ts)
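-- for example (as I read it), a value of type NTupCons 2, applied to ts = A ∷ B ∷ [], is a function A → B → NTup (A ∷ B ∷ [])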
We'd like to have a function with type ∀ {n} → NTupCons n. We recurse on the Vec Set n parameter for the tuple constructor. The empty case is simple enough, but the cons case is a bit trickier:
nTupCons : ∀ {n} → NTupCons n
nTupCons [] x = x
nTupCons (t ∷ ts) x = ?
We need a NFun ts (NTup (t ∷ ts)) in place of the question mark. We know that nTupCons ts has type NFun ts (NTup ts), so we need to somehow get the former from the latter. We notice that what we need is just n-ary function composition, or in other words a functorial map over the return type of NFun:
compN : ∀ {n A B} (ts : Vec Set n) → (A → B) → NFun ts A → NFun ts B
compN [] f = f
compN (t ∷ ts) f g x = compN ts f (g x)
Now, we only need to get an NTup (t ∷ ts) from an NTup ts, and since we already have x with type t in scope, that's pretty easy:
nTupCons : ∀ {n} → NTupCons n
nTupCons [] x = x
nTupCons (t ∷ ts) x = compN ts consTup (nTupCons ts)
where
consTup : NTup ts → NTup (t ∷ ts)
consTup tup con = tup (con x)
Step two
We shall get rid of the Vec Set n-s and rewrite the functions so they iterate on the n parameters. However, simple iteration is not good for nTupCons, since that only provides us the recursive result (nTupCons ts), but we also need the current n index for compN (since we implement compN by iterating on n). So we write a helper that's a bit like a paramorphism. We also need Church encoded pairs here to pass up the Nat-s through the iteration:
zero = λ z s. z
suc = λ n z s. s (n z s)
fst = λ p. p (λ a b. a)
snd = λ p. p (λ a b. b)
-- Simple iteration has type
-- ∀ {A} → A → (A → A) → Nat → A
-- In contrast, we may imagine rec-with-n having the following type
-- ∀ {A} → A → (A → Nat → A) → Nat → A
-- We also pass the Nat index of the hypothesis to the "cons" case
rec-with-n = λ z f n .
fst (
n
(λ p. p z zero)
(λ hyp p. p (f (fst hyp) (snd hyp)) (suc (snd hyp))))
-- Note: I use "hyp" for "hypothesis".
The rest is straightforward to translate:
compN = λ n. n (λ f. f) (λ hyp f g x. hyp f (g x))
nTupCon =
rec-with-n
(λ x. x)
(λ hyp n. λ x. compN n (λ f g. f (g x)) hyp)
Let's test it for simple cases:
nTupCon zero =
(λ t. t)
nTupCon (suc zero) =
(λ hyp n. λ x. compN n (λ f g. f (g x)) hyp) (nTupCon zero) zero =
λ x. compN zero (λ f g. f (g x)) (λ t. t) =
λ x. (λ f g. f (g x)) (λ t. t) =
λ x. λ g. (λ t. t) (g x) =
λ x . λ g. g x =
λ x g . g x
nTupCon (suc (suc zero)) =
(λ hyp n. λ x. compN n (λ f g. f (g x)) hyp) (nTupCon (suc zero)) (suc zero) =
λ x. compN (suc zero) (λ f g. f (g x)) (λ a t. t a) =
λ x a. (λ f g. f (g x)) ((λ y t. t y) a) =
λ x a. (λ f g. f (g x)) (λ t. t a) =
λ x a g. (λ t. t a) (g x) =
λ x a g. g x a
It seems to work.
Let
foldargs = λ t n f z . (IsZero n) (t z) (λ a . foldargs t (pred n) f (f a z))
Then function
listofargs = λ n . foldargs id n pair null
returns a reversed list of its arguments:
listofargs 5 a b c d e --> (e . (d . (c . (b . (a . null))))) or [e d c b a]
Function
apply = λ f l . (isnil l) f (apply (f (head l)) (tail l))
applies its first argument (an n-ary function) to the arguments taken from its second argument (a list of length n):
apply f [a b c d e] --> f a b c d e
The rest is easy:
n-tuple = λ n . foldargs n-tuple' (Succ n) pair null
where
n-tuple' = λ l . apply (head l) (reverse (tail l))
Implementations of the other functions can be taken from Wikipedia.
Recursion can be eliminated by the Y combinator.
reverse is simple.
UPD: Nonrecursive versions of the functions:
foldargs = Y (λ c t n f z . (IsZero n) (t z) (λ a . c t (pred n) f (f a z)))
apply = Y (λ c f l . (isnil l) f (c (f (head l)) (tail l)))
Y = λ f . (λ x . f (x x)) (λ x . f (x x))


I found it! There you go:
nTup = (λ n . (n (λ f t k . (f (λ e . (t (e k))))) (λ x . x) (λ x . x)))
Testing:
nTup n1 → (λ (λ (0 1)))
nTup n2 → (λ (λ (λ ((0 1) 2))))
nTup n3 → (λ (λ (λ (λ (((0 1) 2) 3)))))
nTup n4 → (λ (λ (λ (λ (λ ((((0 1) 2) 3) 4))))))
And so on. It stores the elements backwards but I don't think I'm gonna fix that - it looks more natural like so. The challenge was getting that 0 on the leftmost innermost paren. As I said, I could easily get both (0 (1 (2 (3 4)))) and ((((4 3) 2) 1) 0), but those don't work as tuples because that 0 is what holds the elements there.
Thank you all!
Edit: I've actually settled on this one:
nTup = (λ a . (a (λ b c d . (b (λ b . (c b d)))) (λ x . x) (λ x . x)))
Which preserves the correct order.
nTup n4 → (λ (λ (λ (λ (λ ((((0 4) 3) 2) 1))))))
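As a quick hand check (my own reduction, so treat it as a sketch), write n1 = (λ s z . (s z)), S = (λ b c d . (b (λ e . (c e d)))) (the step function above, with its inner b renamed to e) and I = (λ x . x):
nTup n1
= n1 S I I
= (S I) I
= ((λ c d . (λ e . (c e d))) I)   -- after reducing S I
= (λ d . (λ e . (I e d)))
= (λ d . (λ e . (e d)))           -- i.e. λ a t . t a, the 1-tuple constructor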
If you can construct the n-tuples, you can easily access the ith index.
First, we need a type for the otherwise infinite untyped lambda functions. The extra X constructor allows us to inspect these functions by executing them.
import Prelude hiding (succ, pred)
data Value x = X x | F (Value x -> Value x)
instance (Show x) => Show (Value x) where
show (X x) = "X " ++ show x
show _ = "F"
It's convenient to be able to apply functions to each other.
ap :: Value x -> Value x -> Value x
ap (F f) = f
ap _ = error "Attempt to apply Value"
infixl 1 `ap`
If you are going to encode numbers with Church numerals, you need some Church numerals. We will also need subtraction to figure out how many additional arguments to skip when indexing into an n-tuple.
idF = F $ \x -> x
zero = F $ \f -> idF
succ = F $ \n -> F $ \f -> F $ \x -> f `ap` (n `ap` f `ap` x)
one = succ `ap` zero
two = succ `ap` one
three = succ `ap` two
four = succ `ap` three
pred = F $ \n -> F $ \f -> F $ \x -> n `ap` (F $ \g -> F $ \h -> h `ap` (g `ap` f)) `ap` (F $ \u -> x) `ap` idF
subtractF = F $ \n -> (n `ap` pred)
The constant function drops its first argument. If we iterate the constant function i times (for a Church numeral i), it drops its first i arguments.
--drops the first argument
constF = F $ \f -> F $ \x -> f
-- drops i first arguments
constN = F $ \i -> i `ap` constF
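-- e.g. constN `ap` two `ap` f should behave like F $ \_ -> F $ \_ -> f, i.e. skip two arguments and then return f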
We can make another constant function that drops its second argument. If we iterate it n times (for a Church numeral n), it drops n of those second arguments.
-- drops the second argument
constF' = F $ \f -> F $ \a -> F $ \b -> f `ap` a
-- drops n second arguments
constN' = F $ \n -> n `ap` constF'
To index into an n tuple's ith index (starting at zero for the first index), we need to drop n-i-1 arguments off the end and drop i arguments off the start.
-- drops (n-i-1) last arguments and i first arguments
access = F $ \n -> F $ \i -> constN `ap` i `ap` (constN' `ap` (subtractF `ap` (succ `ap` i) `ap` n) `ap` idF)
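-- e.g. access `ap` three `ap` one should reduce to a selector behaving like \a b c -> b, picking the middle slot of a 3-tuple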
We'll define a few example tuples of fixed size:
tuple1 = F $ \a -> F $ \t -> t `ap` a
tuple2 = F $ \a -> F $ \b -> F $ \t -> t `ap` a `ap` b
tuple3 = F $ \a -> F $ \b -> F $ \c -> F $ \t -> t `ap` a `ap` b `ap` c
which we can use to demonstrate that it is possible to generate the corresponding accessors.
main = do
print $ tuple1 `ap` (X "Example") `ap` (access `ap` one `ap` zero)
print $ tuple2 `ap` (X "Hello") `ap` (X "World") `ap` (access `ap` two `ap` zero)
print $ tuple2 `ap` (X "Hello") `ap` (X "World") `ap` (access `ap` two `ap` one)
print $ tuple3 `ap` (X "Goodbye") `ap` (X "Cruel") `ap` (X "World") `ap` (access `ap` three `ap` zero)
print $ tuple3 `ap` (X "Goodbye") `ap` (X "Cruel") `ap` (X "World") `ap` (access `ap` three `ap` one)
print $ tuple3 `ap` (X "Goodbye") `ap` (X "Cruel") `ap` (X "World") `ap` (access `ap` three `ap` two)
Running this outputs
X "Example"
X "Hello"
X "World"
X "Goodbye"
X "Cruel"
X "World"
To construct tuples you will need to iterate some function that adds arguments to a function instead of dropping them.