data Nat = Zero | Succ Nat
type Predicate = (Nat -> Bool)
-- forAllNat p = (p n) for every finite defined n :: Nat
implies :: Bool -> Bool -> Bool
implies p q = (not p) || q
basecase :: Predicate -> Bool
basecase p = p Zero
jump :: Predicate -> Predicate
jump p n = implies (p n) (p (Succ n))
indstep :: Predicate -> Bool
indstep p = forAllNat (jump p)
Question:
Prove that if basecase p and indstep p, then forAllNat p
What I do not understand is this: if basecase p and indstep p hold, then forAllNat p should of course be True.
I think basecase p says that P(0) is true, and
indstep p says that P(Succ n), which is P(n+1), is true.
And we need to prove that P(n) is true.
Am I right?
Any suggestion about how to do this?
As Benjamin Hodgson indicates, you can't quite prove that in Haskell. However, you can prove a statement with slightly stronger preconditions. I'll also ignore the unnecessary complexity of Bool.
{-# LANGUAGE GADTs, KindSignatures, DataKinds, RankNTypes, ScopedTypeVariables #-}
data Nat = Z | S Nat
data Natty :: Nat -> * where
  Zy :: Natty 'Z
  Sy :: Natty n -> Natty ('S n)
type Base (p :: Nat -> *) = p 'Z
type Step (p :: Nat -> *) = forall (n :: Nat) . p n -> p ('S n)
induction :: forall (p :: Nat -> *) (n :: Nat) .
             Base p -> Step p -> Natty n -> p n
induction b _ Zy = b
induction b s (Sy n) = s (induction b s n)
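As a quick sanity check (a sketch of my own; Count and toInt are not part of the answer), you can instantiate p with a constant family and use induction to count the constructors of a Natty:
newtype Count (n :: Nat) = Count Int
toInt :: forall (n :: Nat). Natty n -> Int
toInt nat = case (induction (Count 0) step nat :: Count n) of
    Count k -> k
  where
    -- the step ignores the phantom index and just bumps the counter
    step :: Count m -> Count ('S m)
    step (Count i) = Count (i + 1)
-- toInt (Sy (Sy Zy)) == 2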
You can't prove this within Haskell. (Turns out you can.) The language is not dependently typed enough. It's a programming language, not a proof assistant. I think the assignment probably expects you to prove it on pencil and paper.
You can do it in Agda though.
data Nat : Set where
  zero : Nat
  suc : Nat -> Nat
Pred : Set -> Set1
Pred A = A -> Set
Universal : {A : Set} -> Pred A -> Set
Universal {A} P = (x : A) -> P x
Base : Pred Nat -> Set
Base P = P zero
Step : Pred Nat -> Set
Step P = (n : Nat) -> P n -> P (suc n)
induction-principle : (P : Pred Nat) -> Base P -> Step P -> Universal P
induction-principle P b s zero = b
induction-principle P b s (suc n) = s n (induction-principle P b s n)
(You may recognise induction-principle as being Nat's foldr.)
You may be able to get something a bit like this when TypeInType lands in GHC 8. It won't be pretty though.
Related
I'm having a bit of trouble understanding the difference between strong and weak specification in Coq. For instance, if I wanted to write the replicate function (given a number n and a value x, it creates a list of length n with all elements equal to x) using the strong-specification approach, how would I do that? Apparently I have to write an Inductive "version" of the function, but how?
Definition in Haskell:
myReplicate :: Int -> a -> [a]
myReplicate 0 _ = []
myReplicate n x | n > 0 = x:myReplicate (n-1) x
                | otherwise = []
Definition of weak specification:
To define these functions with a weak specification and then add companion lemmas.
For instance, we define a function f : A -> B and we prove a statement of the form ∀ x:A, R x (f x), where R is a relation coding the intended input/output behaviour of the function.
Definition of strong specification:
To give a strong specification of the function: the type of this function directly states that the input is a value x of type A and that the output is the combination of a value v of type B and a proof that v satisfies R x v.
This kind of specification usually relies on dependent types.
EDIT: I heard back from my teacher and apparently I have to do something similar to this, but for the replicate case:
"For example, if we want to extract a function that computes the length of a list from its specification, we can define a relation RelLength which establishes a relation between the expected input and output and then prove it. Like this:
Inductive RelLength (A:Type) : nat -> list A -> Prop :=
| len_nil : RelLength 0 nil
| len_cons : forall l x n, RelLength n l -> RelLength (S n) (x::l) .
Theorem len_corr : forall (A:Type) (l:list A), {n | RelLength n l}.
Proof.
…
Qed.
Recursive Extraction len_corr.
The function used to prove must use the list “recursor” directly (that’s why fixpoint won’t show up - it’s hidden in list_rect).
So you don’t need to write the function itself, only the relation, because the function will be defined by the proof."
Knowing this, how can I apply it to the replicate function case?
Just for fun, here's what it would look like in Haskell, where everything dependent-ish is much more annoying. This code uses some very new GHC features, mostly to make the types more explicit, but it could be modified quite easily to work with older GHC versions.
{-# language GADTs, TypeFamilies, PolyKinds, DataKinds, ScopedTypeVariables,
TypeOperators, TypeApplications, StandaloneKindSignatures #-}
{-# OPTIONS_GHC -Wincomplete-patterns #-}
module RelRepl where
import Data.Kind (Type)
import Data.Type.Equality ((:~:)(..))
-- | Singletons (borrowed from the `singletons` package).
type Sing :: forall (k :: Type). k -> Type
type family Sing
type instance Sing @Nat = SNat
type instance Sing @[a] = SList @a
-- The version of Sing in the singletons package has many more instances;
-- in any case, more can be added anywhere as needed.
-- Natural numbers, used at the type level
data Nat = Z | S Nat
-- Singleton representations of natural numbers, used
-- at the term level.
data SNat :: Nat -> Type where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)
-- Singleton lists
data SList :: forall (a :: Type). [a] -> Type where
  SNil :: SList '[]
  SCons :: Sing a -> SList as -> SList (a ': as)
-- The relation representing the `replicate` function.
data RelRepl :: forall (a :: Type). Nat -> a -> [a] -> Type where
  Repl_Z :: forall x. RelRepl 'Z x '[]
  Repl_S :: forall n x l. RelRepl n x l -> RelRepl ('S n) x (x ': l)
-- Dependent pairs, because those aren't natively supported.
data DPair :: forall (a :: Type). (a -> Type) -> Type where
  MkDPair :: forall {a :: Type} (x :: a) (p :: a -> Type).
    Sing x -> p x -> DPair @a p
-- Proof that every natural number and value produce a list
-- satisfying the relation.
repl_corr :: forall {a :: Type} (n :: Nat) (x :: a).
  SNat n -> Sing x -> DPair @[a] (RelRepl n x)
repl_corr SZ _x = MkDPair SNil Repl_Z
repl_corr (SS n) x
  | MkDPair l pf <- repl_corr n x
  = MkDPair (SCons x l) (Repl_S pf)
-- Here's a proof that the relation indeed specifies
-- a *unique* function.
replUnique :: forall {a :: Type} (n :: Nat) (x :: a) (xs :: [a]) (ys :: [a]).
  RelRepl n x xs -> RelRepl n x ys -> xs :~: ys
replUnique Repl_Z Repl_Z = Refl
replUnique (Repl_S pf1) (Repl_S pf2)
  | Refl <- replUnique pf1 pf2
  = Refl
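As a small usage sketch (the name example2 is mine, not from the answer above): replicating the natural number 'S 'Z twice produces a dependent pair whose witness is a two-element singleton list.
example2 :: DPair @[Nat] (RelRepl ('S ('S 'Z)) ('S 'Z))
example2 = repl_corr (SS (SS SZ)) (SS SZ)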
A possible specification would look like this:
Inductive RelReplicate (A : Type) (a : A) : nat -> (list A) -> Prop :=
| rep0 : RelReplicate A a 0 nil
| repS : …
I did the zero case, leaving you the successor case. Its conclusion should be something like RelReplicate A a (S n) (a :: l).
As in your example, you can then try and prove something like
Theorem replicate_corr : forall (A:Type) (a : A) (n : nat), {l | RelReplicate A a n l}.
which should be easy by induction on n.
If you want to check that your function replicate_corr corresponds to what you had in mind, you can try it on a few examples, with
Eval compute in (proj1_sig (replicate_corr nat 0 3)).
which evaluates the first component of the result of replicate_corr (the one corresponding to the "real function" and not the proof). To be able to do that, you should end your Theorem with Defined rather than Qed so that Coq can evaluate it.
In Haskell, if one enables the RankNTypes extension
{-# Language RankNTypes #-}
then one can define the natural numbers as they are encoded in System-F:
type Nat = forall a. a -> ((a -> a) -> a)
zero :: Nat
zero = \z s -> z
succ :: Nat -> Nat
succ n = \z s -> s (n z s)
fold :: a -> (a -> a) -> Nat -> a
fold z s n = n z s
Yay! The next step is to define the case operation: the idea is that
caseN :: Nat -> a -> (Nat -> a) -> a
caseN n z f = "case n of
                 zero   -> z
                 succ m -> f m"
Of course that's not directly possible. One thing that is possible is to define the natural numbers in the usual way ({data Nats = Zero | Succ Nats}), define "conversions" between Nat and Nats, and then use Haskell's built-in syntactic case construct.
In the untyped lambda calculus, caseN can be written as
caseN n b f = snd (fold (zero, b) (\(n0, _) -> (succ n0, f n0)) n)
following a trick apparently discovered by Kleene for defining the predecessor function. This version of caseN does look like it should typecheck with the type given above. (zero, b) :: (Nat, b) and \(n0, _) -> (succ n0, f n0) :: (Nat, b) -> (Nat, b), so fold (zero, b) (\(n0, _) -> (succ n0, f n0)) n :: (Nat, b).
However this doesn't typecheck in Haskell. Trying to isolate the inner function \(n0, _) -> (succ n0, f n0) with
succf :: (Nat -> b) -> (Nat, b) -> (Nat, b)
succf f (n, _y) = (succ n, f n)
reveals that the ImpredicativeTypes extension may be needed, as succf seems to require that extension. For the more typical {data Nats = Zero | Succ Nats}, the caseN construct does work (after switching to the corresponding fold, Zero, and Succ).
Is it possible to get caseN to work on Nat directly? Is a different trick needed?
I think the typical trick is to use a data type (or newtype, as pointed out by a commenter) wrapper. To start, instead of defining Nat as a type synonym, you can define it as:
newtype Nat = Nat { unNat :: forall a. a -> ((a -> a) -> a) }
This is isomorphic to your definition, except that you must explicitly wrap and unwrap the contents.
We can continue by writing the same definitions you had:
zero :: Nat
zero = Nat $ \z s -> z
succ :: Nat -> Nat
succ (Nat n) = Nat $ \z s -> s (n z s)
fold :: a -> (a -> a) -> Nat -> a
fold z s (Nat n) = n z s
This is basically what you already had, but now with explicit wrapping and unwrapping using Nat (as both constructor and pattern).
At this point, your final definitions just work:
caseN :: Nat -> b -> (Nat -> b) -> b
caseN n b f = snd (fold (zero,b) (\(n0,_) -> (succ n0,f n0)) n)
succf :: (Nat -> b) -> (Nat, b) -> (Nat, b)
succf f (n,_y) = (succ n, f n)
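As a quick sanity check (toInt and pred' are my own names, and I'm assuming Prelude's succ is hidden, e.g. via import Prelude hiding (succ)):
toInt :: Nat -> Int
toInt = fold 0 (+ 1)
-- Kleene-style predecessor recovered from caseN: pred' zero is zero, pred' (succ n) is n.
pred' :: Nat -> Nat
pred' n = caseN n zero (\m -> m)
-- toInt (pred' (succ (succ zero))) == 1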
In Coq I can define a Church encoding for lists of length n:
Definition listn (A : Type) : nat -> Type :=
fun m => forall (X : nat -> Type), X 0 -> (forall m, A -> X m -> X (S m)) -> X m.
Definition niln (A : Type) : listn A 0 :=
fun X n c => n.
Definition consn (A : Type) (m : nat) (a : A) (l : listn A m) : listn A (S m) :=
fun X n c => c m a (l X n c).
Is the type system of Haskell (including its extensions) strong enough to accommodate such definitions? If yes, how?
Sure it is:
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
import Data.Kind -- Needed for `Type`
data Nat = Z | S Nat -- Roll your own...
type List (a :: Type) (n :: Nat) =
  forall (x :: Nat -> Type). x Z -> (forall (m :: Nat). a -> x m -> x (S m)) -> x n
niln :: List a Z
niln = \z _ -> z
consn :: a -> List a n -> List a (S n)
consn a l = \n c -> c a (l n c)
Further proof (for skeptics) of the isomorphism with the usual GADT formulation:
data List' (a :: Type) (n :: Nat) where
  Nil :: List' a Z
  Cons :: a -> List' a m -> List' a (S m)
to :: List' a n -> List a n
to Nil = niln
to (Cons a l) = consn a (to l)
from :: List a n -> List' a n
from l = l Nil Cons
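A small round-trip sketch (the names xsChurch and xsGadt are mine): encode a two-element list and convert it back.
xsChurch :: List Bool (S (S Z))
xsChurch = to (Cons True (Cons False Nil))
xsGadt :: List' Bool (S (S Z))
xsGadt = from xsChurch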
I am experimenting with Haskell's type system and want to write a type safe addition function. This function should accept two singleton witnesses representing numbers and returns a singleton witness of a number whose type carries the proof that it is indeed a sum of the arguments. Here is the code:
{-# language TypeFamilies, KindSignatures, DataKinds, PolyKinds, UndecidableInstances, GADTs #-}
data Nat = Zero | Succ Nat deriving Show
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add Zero n = n
  Add (Succ m) n = Add m (Succ n)
data SNat :: Nat -> * where
  Zy :: SNat Zero
  Suc :: SNat m -> SNat (Succ m)
data Bounded' m = B m
sum' :: Bounded' (SNat m) -> Bounded' (SNat n) -> Bounded' (SNat (Add m n))
sum' (B m) (B n) = B $ case (m, n) of
  (Zy, x) -> x
  (Suc x, y) -> let B z = sum' (B x) (B y) in Suc z
Here is the error:
• Could not deduce: Add m1 ('Succ n) ~ 'Succ (Add m1 n)
from the context: m ~ 'Succ m1
bound by a pattern with constructor:
Suc :: forall (m :: Nat). SNat m -> SNat ('Succ m),
in a case alternative
at main.hs:17:22-26
Expected type: SNat (Add m n)
Actual type: SNat ('Succ (Add m1 n))
• In the expression: Suc z
In the expression: let B z = sum' (B x) (B y) in Suc z
In a case alternative:
(Suc x, y) -> let B z = sum' (B x) (B y) in Suc z
I understand the error message. How do I provide GHC with the necessary proof that Add m n = Succ (Add k n) in the expression Suc z once it learns that m ~ Succ k (in the second case alternative)? And are there alternative approaches to doing so? Thank you.
Your definition of addition is not the conventional one.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add Zero n = n
  Add (Succ m) n = Add m (Succ n)
This is a "tail recursive" addition. It sure seems like there should be a way to prove your properties using this form of addition, but I can't figure it out. Until then, tail recursion at the type/property level tends to be a lot more difficult to work with than the standard kind:
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add Zero n = n
  Add (Succ m) n = Succ (Add m n)
This latter definition of addition makes your sum' pass without any convincing at all.
EDIT: actually it was easy once I saw it right. Here's what I got (importing Data.Type.Equality and enabling LANGUAGE TypeOperators):
propSucc2 :: SNat m -> SNat n -> Add m (Succ n) :~: Succ (Add m n)
propSucc2 Zy _ = Refl
propSucc2 (Suc m) n = propSucc2 m (Suc n)
Tail-recursive definition, tail-recursive proof. Then to use it, you use gcastWith:
sum' (B m) (B n) = ...
  (Suc x, y) -> gcastWith (propSucc2 x y)
                  (let B z = sum' (B x) (B y) in Suc z)
gcastWith just takes a :~: equality and makes it available to the type checker within the scope of its second argument.
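For concreteness, here's the whole definition with the lemma spliced in (a sketch assembled from the asker's sum' plus the cast above, not verbatim from the answer):
sum' :: Bounded' (SNat m) -> Bounded' (SNat n) -> Bounded' (SNat (Add m n))
sum' (B m) (B n) = B $ case (m, n) of
  (Zy, x)    -> x
  (Suc x, y) -> gcastWith (propSucc2 x y)
                  (let B z = sum' (B x) (B y) in Suc z)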
By the way, if you define sum' in a parallel structure to your Add type family, then you don't need any lemmas. Getting things to follow parallel structures is a good technique to keep things easy (this is part of the art of dependent programming, since it's not always obvious how):
sum' :: Bounded' (SNat m) -> Bounded' (SNat n) -> Bounded' (SNat (Add m n))
sum' (B Zy) (B n) = B n
sum' (B (Suc m)) (B n) = sum' (B m) (B (Suc n))
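Either way, here is a quick concrete check (the name two is mine); it typechecks because Add (Succ Zero) (Succ Zero) reduces to Succ (Succ Zero) under either definition of Add:
two :: Bounded' (SNat (Succ (Succ Zero)))
two = sum' (B (Suc Zy)) (B (Suc Zy))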
Expressing normal data types such as lists and nats is straightforward and there are many examples around. What is the generic procedure to translate GADTs, though? Some examples translating typical types such as Vector and dependent products from Idris to Morte would be very illustrative.
You can't get eliminators that depend on elements of data types, but you can define eliminators that depend on indices of elements of data types. Hence, Vectors are representable (the code is in Agda):
Nat = (P : Set) -> (P -> P) -> P -> P
zero : Nat
zero = λ P f z -> z
suc : Nat -> Nat
suc = λ n P f z -> f (n P f z)
plus : Nat -> Nat -> Nat
plus = λ n m P f z -> n P f (m P f z)
Vec = λ (A : Set) (n : Nat) ->
  (P : Nat -> Set) -> (∀ n -> A -> P n -> P (suc n)) -> P zero -> P n
nil : ∀ A -> Vec A zero
nil = λ A P f z -> z
cons : ∀ A n -> A -> Vec A n -> Vec A (suc n)
cons = λ A n x xs P f z -> f n x (xs P f z)
concat : ∀ A n m -> Vec A n -> Vec A m -> Vec A (plus n m)
concat = λ A n m xs ys P f z -> xs (λ n -> P (plus n m)) (λ n -> f (plus n m)) (ys P f z)
These are very similar to Church-encoded lists: you just make the type you eliminate into depend on the indices of the data type being defined, and change the induction hypotheses to reflect the structure of the data type's constructors. I.e. you have
cons : ∀ A n -> A -> Vec A n -> Vec A (suc n)
so the corresponding induction hypothesis is
∀ n -> A -> P n -> P (suc n)
In order to define dependent pairs without inductive types, you need very/insanely dependent types (sigmas are here), which allow the result of a function to depend on the very function being defined. Morte doesn't have this, of course.
Everything that is representable is documented in the Morte tutorial. GADTs and (more generally) indexed types aren't there, and indeed they aren't possible.
(EDIT: GADTs can in fact be represented; see the other answer by user3237465.)
The Vector type itself can be encoded, but its values aren't usable for much. A Vector n A is an n-nested pair of A-s:
Unit = \(A : *) -> A -> A
Pair = \(A B : *) -> (P : *) -> (A -> B -> P) -> P
Nat = (N : *) -> (N -> N) -> N -> N
Vector = \(n : Nat)(A : *) -> n * (\(t : *) -> Pair A t) Unit
But writing any useful function for Vector n A would require induction on its length n, and Morte has no inductive types.
To be clear, by induction I mean that for a certain type a function corresponding to the principle of structural induction is derivable. These are generalizations of folds where the output type may depend on the input value. For some natural number type Nat : * with suc : Nat -> Nat and zero : Nat induction has the following type:
natInd :
    (N : Nat -> *)                     -- a predicate,
    -> ((n : Nat) -> N n -> N (suc n)) -- if it's preserved by suc
    -> N zero                          -- and holds for zero,
    -> (n : Nat) -> N n                -- holds for every Nat
While folding over a Vector, the type changes along with the length (since the former depends on the latter). However, with Church Nat we only have non-dependent fold (aka "recursion") instead of possibly type-changing fold (aka "induction").
Yes. As an example, this answer shows how to write the Refl type.
Let's say we want to build a simple DSL. Here's how to do it:
Expr t = forall (E :: * -> *). forall
  (IntLit :: Integer -> E Integer),
  (IntVar :: Char -> E Integer),
  (Add :: E Integer -> E Integer -> E Integer),
  (Mult :: E Integer -> E Integer -> E Integer),
  (Neg :: E Integer -> E Integer),
  (IntEq :: E Integer -> E Integer -> E Bool),
  (Lt :: E Integer -> E Integer -> E Bool),
  (And :: E Bool -> E Bool -> E Bool),
  (Or :: E Bool -> E Bool -> E Bool),
  (Not :: E Bool -> E Bool),
  (If :: (forall x :: *. E Bool -> E x -> E x -> E x)).
  E t
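For comparison, here's a sketch (my own transcription, not from the answer) of how this encoding can be written down in actual Haskell with RankNTypes, plus one sample constructor:
{-# LANGUAGE RankNTypes, KindSignatures #-}
import Data.Kind (Type)
-- An Expr t is anything that can produce an `e t` when handed one
-- interpreter function per constructor of the DSL.
type Expr t =
  forall (e :: Type -> Type).
       (Integer -> e Integer)                    -- IntLit
    -> (Char -> e Integer)                       -- IntVar
    -> (e Integer -> e Integer -> e Integer)     -- Add
    -> (e Integer -> e Integer -> e Integer)     -- Mult
    -> (e Integer -> e Integer)                  -- Neg
    -> (e Integer -> e Integer -> e Bool)        -- IntEq
    -> (e Integer -> e Integer -> e Bool)        -- Lt
    -> (e Bool -> e Bool -> e Bool)              -- And
    -> (e Bool -> e Bool -> e Bool)              -- Or
    -> (e Bool -> e Bool)                        -- Not
    -> (forall x. e Bool -> e x -> e x -> e x)   -- If
    -> e t
-- One sample "constructor": the literal expression for a given Integer.
intLit :: Integer -> Expr Integer
intLit n = \lit _var _add _mult _neg _eq _lt _and _or _not _if -> lit n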