Is there a way to provide default implementations of Coq typeclass methods, as you can in Haskell? I saw no mention of this in the Coq typeclass documentation. If no such feature exists, is there a common pattern for emulating this behavior?
A default implementation can be viewed as a function which constructs the default methods given other provided methods. So you can just define a function.
Class C a :=
{ m1 : a
; m2 : a
}.
(* Construct an instance of C from an implementation of only m1. *)
Definition mkC {a} (m1_ : a) : C a := {| m1 := m1_ ; m2 := m1_ |}.
#[global]
Instance C_nat : C nat := mkC 0.
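For example, a quick check (written with fully explicit application, just for illustration) shows that the instance built with mkC gives the defaulted method m2 the value supplied for m1:
Compute (@m2 nat C_nat). (* = 0 : nat *)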
Another idea is to break the class into single-method classes. Then you can first define instances for the explicitly implemented methods, and then again use functions to obtain default implementations for other methods. By breaking down the class this way, you don't need to explicitly apply the default function to the provided methods.
Class N1 a :=
n1 : a.
Class N2 a :=
n2 : a.
(* Default implementation of N2 using N1 *)
Definition defaultN2 {a} {_ : N1 a} : N2 a := n1 (a := a).
#[global]
Instance N1_nat : N1 nat := 0.
#[global]
Instance N2_nat : N2 nat := defaultN2. (* N1 nat is passed implicitly here *)
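An analogous check (again fully explicit, for illustration) shows that n2 at nat resolves through N2_nat, i.e. through the default derived from N1_nat:
Compute (@n2 nat N2_nat). (* = 0 : nat *)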
I wish that you could do this, but it is not supported.
I am new to Coq, and was hoping that someone with more experience could help me with a problem I am facing.
I have defined a relation to represent the evaluation of a program in an imaginary programming language. The goal of the language is to unify function calls and a constrained subset of macro invocations under a single semantics. Here is the definition of the relation, with its first constructor (I am omitting the rest to save space and avoid unnecessary details).
Inductive EvalExpr:
store -> (* Store, mapping L-values to R-values *)
environment -> (* Local environment, mapping function-local variable names to L-values *)
environment -> (* Global environment, mapping global variable names to L-values *)
function_table -> (* Mapping function names to function definitions *)
macro_table -> (* Mapping macro names to macro definitions *)
expr -> (* The expression to evaluate *)
Z -> (* The value the expression evaluates to *)
store -> (* The final state of the program store after evaluation *)
Prop :=
(* Numerals evaluate to their integer representation and do not
change the store *)
| E_Num : forall S E G F M z,
EvalExpr S E G F M (Num z) z S
...
The mappings are defined as follows:
Module Import NatMap := FMapList.Make(OrderedTypeEx.Nat_as_OT).
Module Import StringMap := FMapList.Make(OrderedTypeEx.String_as_OT).
Definition store : Type := NatMap.t Z.
Definition environment : Type := StringMap.t nat.
Definition function_table : Type := StringMap.t function_definition.
Definition macro_table : Type := StringMap.t macro_definition.
I do not think the definitions of the other types are relevant to this question, but I can add them if needed.
Now when trying to prove the following lemma, which seems intuitively obvious, I get stuck:
Lemma S_Equal_EvalExpr_EvalExpr : forall S1 S2,
NatMap.Equal S1 S2 ->
forall E G F M e v S',
EvalExpr S1 E G F M e v S' <-> EvalExpr S2 E G F M e v S'.
Proof.
intros. split.
(* -> *)
- intros. induction H0.
+ (* Num *)
Fail constructor.
Abort.
If I were able to rewrite S2 for S1 in the goal, the proof would be trivial; however, if I try to do this, I get the following error:
H : NatMap.Equal S S2
(* Other premises *)
---------------------
EvalExpr S2 E G F M (Num z) z S
rewrite <- H.
Found no subterm matching "NatMap.find (elt:=Z) ?M2433 S2" in the current goal.
I think this has to do with finite mappings being abstract types, and thus not being rewritable like concrete types are. However, I noticed that I can rewrite mappings within other equations/relations found in Coq.FSets.FMapFacts. How would I tell Coq to let me rewrite mapping types inside my EvalExpr relation?
Update: Here is a gist containing a minimal working example of my problem. The definitions of some of the mapping types have been altered for brevity, but the problem is the same.
The issue here is that the relation NatMap.Equal, which says that two maps have the same bindings, is not the same as the notion of equality in Coq's logic, =. While it is always possible to rewrite with =, rewriting with some other relation R is only possible if you can prove that the property you are trying to show is compatible with it. This is already done for the relations in FMap, which is why rewriting there works.
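For example, here is a minimal, self-contained sketch of what such a compatibility proof looks like (has_zero is an illustrative predicate, and NatMap is redeclared locally so that the snippet stands alone): once a Proper instance relating NatMap.Equal to the predicate is registered, rewrite accepts Equal hypotheses under it.
From Coq Require Import FSets.FMapList Structures.OrderedTypeEx ZArith
     Setoids.Setoid Classes.Morphisms.

Module NatMap := FMapList.Make(OrderedTypeEx.Nat_as_OT).

(* A predicate that observes the map only through find. *)
Definition has_zero (S : NatMap.t Z) : Prop :=
  NatMap.find 0%nat S = Some 0%Z.

(* The compatibility proof: Equal maps agree on find, hence on has_zero. *)
#[global]
Instance has_zero_m : Proper (NatMap.Equal ==> iff) has_zero.
Proof. intros S1 S2 H. unfold has_zero. rewrite (H 0%nat). tauto. Qed.

(* Rewriting with an Equal hypothesis now works under has_zero. *)
Goal forall S1 S2, NatMap.Equal S1 S2 -> has_zero S2 -> has_zero S1.
Proof. intros S1 S2 H HZ. now rewrite H. Qed.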
You have two options:
Replace FMap with an implementation for which the intended map equality coincides with =, a property usually known as extensionality. There are many libraries that provide such data structures, including my own extructures, but also finmap and std++. Then, you never need to worry about a custom equality relation; all the important properties of maps work with =.
Keep FMap, but use the generalized rewriting mechanism to allow rewriting with FMap.Equal. To do this, you probably need to modify the definition of your execution relation so that it is compatible with FMap.Equal. Unfortunately, I believe the only way to do this is by explicitly adding equality hypotheses everywhere, e.g.
Definition EvalExpr' S E G F M e v S' :=
exists S0 S0', NatMap.Equal S S0 /\
NatMap.Equal S' S0' /\
EvalExpr S0 E G F M e v S0'.
Since this will pollute your definitions, I would not recommend this approach.
Arthur's answer explains the problem very well.
One other way to do it could be to modify your Inductive definition of EvalExpr so that it explicitly uses the equality you care about (NatMap.Equal instead of eq). You will have to say in each rule that it is enough for the two maps to be Equal.
For example:
| E_Num : forall S E G F M z,
EvalExpr S E G F M (Num z) z S
becomes
| E_Num : forall S1 S2 E G F M z,
NatMap.Equal S1 S2 ->
EvalExpr S1 E G F M (Num z) z S2
Then, when you want to prove your lemma and apply the constructor, you will have to provide a proof that S1 and S2 are Equal (you'll have to reason a little, using the fact that NatMap.Equal is an equivalence relation).
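For example, here is a self-contained sketch of this approach with a trimmed-down expression language containing only the E_Num rule (all names other than NatMap are illustrative):
From Coq Require Import FSets.FMapList Structures.OrderedTypeEx ZArith.

Module NatMap := FMapList.Make(OrderedTypeEx.Nat_as_OT).
Definition store : Type := NatMap.t Z.

Inductive expr := Num (z : Z).

(* The E_Num rule, stated up to NatMap.Equal rather than =. *)
Inductive EvalExpr : store -> expr -> Z -> store -> Prop :=
| E_Num : forall S1 S2 z,
    NatMap.Equal S1 S2 ->
    EvalExpr S1 (Num z) z S2.

(* Applying the constructor leaves an Equal side condition; it is discharged
   because NatMap.Equal is reflexive (pointwise equality of find). *)
Goal forall S z, EvalExpr S (Num z) z S.
Proof. intros. apply E_Num. intro k. reflexivity. Qed.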
I've looked but haven't found any mechanism described in the documentation which allows you to describe a section by its signature. For example, in the section below the syntax of def requires the right-hand side (here sorry):
section
variable A : Type
def ident : A → A := sorry
end
Is there anything like a signature which would allow you to forward-declare the contents of a section, such as in the following made-up syntax?
signature
variable A : Type
def ident : A → A
end
The closest I've come using actual syntax is the following, which declares each definition twice; the second declaration exists only so that the right-hand side can stay as short as possible.
section
variables A B : Type
def ident' {A : Type} : A → A := (λ x, x)
def mp' {A B : Type}: (A → B) → A → B := (λ f, λ x, f x)
/- Signature-/
def ident : A → A := ident'
def mp : (A → B) → A → B := mp'
end
No, forward declarations are not allowed in general. Lean, like most other ITPs, relies on the order of declarations for termination checking. Forward declarations would allow you to introduce arbitrary mutual recursion, which Lean 3 only accepts in a clearly delimited context:
mutual def even, odd
with even : nat → bool
| 0 := tt
| (a+1) := odd a
with odd : nat → bool
| 0 := ff
| (a+1) := even a
(from Theorem Proving in Lean)
I have a type, say
Inductive Tt := a | b | c.
What's the easiest and/or best way to define a subtype of it? Suppose I want the subtype to contain only constructors a and b. A way would be to parametrize on a two-element type, e.g. bool:
Definition filt (x : bool) : Tt :=
  match x with
  | true => a
  | false => b
  end.
Check filt true: Tt.
This works but is very awkward if your expression has several (possibly interdependent) subtypes defined this way. Besides, it only works halfway, as no subtype is actually defined. For this I must additionally define, e.g.,
Notation _Tt := ltac:(let T := type of (forall {x:bool}, filt x) in exact T).
Fail Check a: _Tt. (*The term "filt x" has type "Tt" which should be Set, Prop or Type.*)
which in this case also doesn't work. Another way would be to use type classes, e.g.
Class s_Tt: Type := s: Tt.
Instance a':s_Tt := a.
Instance b':s_Tt := b.
Check a: s_Tt.
Check b': Tt.
Check c: s_Tt.
As you see, this doesn't work: c is still in s_Tt (even though type inference should work better with instances). Finally, a coercion
Parameter c0:> bool -> Tt.
Notation a_ := true.
Notation b_ := false.
Notation Tt_ := bool.
Check a_: Tt_.
Check b_: Tt.
Fail Check a: Tt_.
works but, of course, a and b cannot be used as terms of the defined subtype (which would always be convenient and sometimes necessary).
I suppose subset types shouldn't be an answer to this question (a term of a subset type is never that of its (proper) superset). Maybe there's a better way of using type classes for this purpose?
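For concreteness, here is a small illustration of the parenthetical remark, using the Tt above (Tt_ab and a_ab are just names for this example): with a subset (sig) type the carrier changes, so its inhabitants are related to Tt only through proj1_sig.
Definition Tt_ab := { x : Tt | x = a \/ x = b }.
Definition a_ab : Tt_ab := exist _ a (or_introl eq_refl).
Check proj1_sig a_ab : Tt. (* fine, but only via an explicit projection *)
Fail Check a_ab : Tt. (* a_ab itself is not a term of Tt *)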
So we can have explicit arguments, denoted by ().
We can also have implicit arguments, denoted by {}.
So far so good.
However, why do we also need the [] notation for type classes specifically?
What is the difference between the following two:
theorem foo {x : Type} : ∀s : inhabited x, x := sorry
theorem foo' {x : Type} [s : inhabited x] : x := sorry
Implicit arguments are inserted automatically by Lean's elaborator. The {x : Type} that appears in both of your definitions is one example of an implicit argument: if you have s : inhabited nat, then you can write foo s, which will elaborate to a term of type nat, because the x argument can be inferred from s.
Type class arguments are another kind of implicit argument. Rather than being inferred from later arguments, the elaborator runs a procedure called type class resolution that will attempt to generate a term of the designated type. (See chapter 10 of https://leanprover.github.io/theorem_proving_in_lean/theorem_proving_in_lean.pdf.) So, your foo' will actually take no explicit arguments at all. If the expected type x can be inferred from context, Lean will look for an instance of inhabited x and insert it:
def foo' {x : Type} [s : inhabited x] : x := default x
instance inh_nat : inhabited nat := ⟨3⟩
#eval (2 : ℕ) + foo' -- 5
Here, Lean infers that x must be nat, and finds and inserts the instance of inhabited nat, so that foo' alone elaborates to a term of type nat.
Suppose I have code with a lot of modules and sections. In some of them there are polymorphic definitions.
Module MyModule.
Section MyDefs.
(* Implicit. *)
Context {T: Type}.
Inductive myIndType: Type :=
| C : T -> myIndType.
End MyDefs.
End MyModule.
Module AnotherModule.
Section AnotherSection.
Context {T: Type}.
Variable P: Type -> Prop.
(* ↓↓ ↓↓ - It's pretty annoying. *)
Lemma lemma: P (@myIndType T).
End AnotherSection.
End AnotherModule.
Usually Coq can infer the type, but I often still get a typing error. In such cases, you have to specify the implicit type explicitly with @, which spoils the readability.
Cannot infer the implicit parameter _ of _ whose type is "Type".
Is there a way to avoid this? Is it possible to specify something like default parameters, which will be substituted every time Coq cannot guess a type?
You can use a typeclass to implement this notion of default value:
Class Default (A : Type) (a : A) :=
withDefault { value : A }.
Arguments withDefault {_} {_}.
Arguments value {_} {_}.
Instance default (A : Type) (a : A) : Default A a :=
withDefault a.
Definition myNat `{dft : Default nat 3} : nat :=
value dft.
Eval cbv in myNat.
(* = 3 : nat *)
Eval cbv in (@myNat (withDefault 5)).
(* = 5 : nat *)
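As a possible extension of the same idea, the Default class can also supply a fallback type, which is closer to the original question; listD is just an illustrative name whose element type defaults to nat unless an instance is given explicitly.
Definition listD `{dft : Default Type nat} : Type :=
  list (value dft).
Eval cbv in listD.
(* = list nat : Type *)
Eval cbv in (@listD (withDefault bool)).
(* = list bool : Type *)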