Coq: set default implicit parameters

Suppose I have code with a lot of modules and sections. Some of them contain polymorphic definitions.
Module MyModule.
Section MyDefs.
(* Implicit. *)
Context {T: Type}.
Inductive myIndType: Type :=
| C : T -> myIndType.
End MyDefs.
End MyModule.
Module AnotherModule.
Section AnotherSection.
Context {T: Type}.
Variable P: Type -> Prop.
(* ↓↓ ↓↓ - It's pretty annoying. *)
Lemma lemma: P (@myIndType T).
End AnotherSection.
End AnotherModule.
Usually Coq can infer the type, but I still often get a typing error. In such cases you have to specify the implicit type explicitly with @, which spoils the readability:
Cannot infer the implicit parameter _ of _ whose type is "Type".
Is there a way to avoid this? Is it possible to specify something like default parameters, which will be substituted every time Coq cannot guess a type?

You can use a typeclass to implement this notion of default value:
(* A class indexed by a type A and a default value a of that type. *)
Class Default (A : Type) (a : A) :=
  withDefault { value : A }.
Arguments withDefault {_} {_}.
Arguments value {_} {_}.
(* Canonical instance: the stored value is the default a itself. *)
Instance default (A : Type) (a : A) : Default A a :=
  withDefault a.
(* myNat is 3 unless a different Default instance is supplied. *)
Definition myNat `{dft : Default nat 3} : nat :=
  value dft.
Eval cbv in myNat.
(* = 3 : nat *)
Eval cbv in (@myNat (withDefault 5)).
(* = 5 : nat *)

Related

Default Implementation of Coq typeclass methods

Is there a way you can provide default implementations of Coq typeclass methods like you can in Haskell? I saw no mention of this in the Coq typeclass documentation. If there does not exist such a feature, is there a common pattern for emulating this behavior?
A default implementation can be viewed as a function which constructs the default methods given other provided methods. So you can just define a function.
Class C a :=
{ m1 : a
; m2 : a
}.
(* Construct an instance of C from an implementation of only m1. *)
Definition mkC {a} (m1_ : a) := {| m1 := m1_ ; m2 := m1_ |}.
#[global]
Instance C_nat : C nat := mkC 0.
Another idea is to break the class into single-method classes. Then you can first define instances for the explicitly implemented methods, and then again use functions to obtain default implementations for other methods. By breaking down the class this way, you don't need to explicitly apply the default function to the provided methods.
Class N1 a :=
n1 : a.
Class N2 a :=
n2 : a.
(* Default implementation of N2 using N1 *)
Definition defaultN2 {a} {_ : N1 a} : N2 a := n1 (a := a).
#[global]
Instance N1_nat : N1 nat := 0.
#[global]
Instance N2_nat : N2 nat := defaultN2. (* N1 nat is passed implicitly here *)
I wish that you could do this, but it is not supported.

Access definitions from sublocale

In Isabelle, I have the following sketch of a hierarchy of locales:
locale Base =
fixes foo :: "nat set"
begin
definition small :: "nat ⇒ nat set" where
"small n = {m ∈ foo. n * m < 100}"
end
locale Other =
fixes n :: nat
sublocale Other ⊆ Base "{..<n}" .
I now want to access the definition of small as interpreted in Other:
term Base.small (* works but parameters have to be provided *)
context Other begin term small end (* works, but needs context *)
term Other.small (* does not work *)
I would like the last line to work, or something equivalent, so that I can refer to the correct definition of Other.small from the global context. In reality, Other includes parameters too, and I have a theorem that relates different interpretations of Other. For example, I want to write something like this:
lemma card_small_lt: "n < m ==> card (Other.small n) < card (Other.small m)"
sorry
I have figured out a workaround, but it leads to a bit of manual typing in the long run. First I name the sublocale, then I introduce abbreviations in a context:
sublocale Other ⊆ B: Base "{..<n}" .
context Other
begin
abbreviation "small ≡ B.small"
end
term Other.small (* does now work! *)
The question remains whether the above can be automated; I'd rather not write an abbreviation for each definition I want to use from the outside.

Why both the typeclass and implicit argument mechanism?

So we can have explicit arguments, denoted by ().
We can also have implicit arguments, denoted by {}.
So far so good.
However, why do we also need the [] notation for type classes specifically?
What is the difference between the following two:
theorem foo {x : Type} : ∀s : inhabited x, x := sorry
theorem foo' {x : Type} [s : inhabited x] : x := sorry
Implicit arguments are inserted automatically by Lean's elaborator. The {x : Type} that appears in both of your definitions is one example of an implicit argument: if you have s : inhabited nat, then you can write foo s, which will elaborate to a term of type nat, because the x argument can be inferred from s.
Type class arguments are another kind of implicit argument. Rather than being inferred from later arguments, the elaborator runs a procedure called type class resolution that will attempt to generate a term of the designated type. (See chapter 10 of https://leanprover.github.io/theorem_proving_in_lean/theorem_proving_in_lean.pdf.) So, your foo' will actually take no arguments at all. If the expected type x can be inferred from context, Lean will look for an instance of inhabited x and insert it:
def foo' {x : Type} [s : inhabited x] : x := default x
instance inh_nat : inhabited nat := ⟨3⟩
#eval (2 : ℕ) + foo' -- 5
Here, Lean infers that x must be nat, and finds and inserts the instance of inhabited nat, so that foo' alone elaborates to a term of type nat.

Confused about Haskell polymorphic types

I have defined a function:
gen :: a -> b
So I am just trying to provide a simple implementation:
gen 2 = "test"
But it throws an error:
gen.hs:51:9:
Couldn't match expected type ‘b’ with actual type ‘[Char]’
‘b’ is a rigid type variable bound by
the type signature for gen :: a -> b at gen.hs:50:8
Relevant bindings include gen :: a -> b (bound at gen.hs:51:1)
In the expression: "test"
In an equation for ‘gen’: gen 2 = "test"
Failed, modules loaded: none.
So my function is not correct. Why is a not typed as Int and b not typed as String?
This is a very common misunderstanding.
The key thing to understand is that if you have a variable in your type signature, then the caller gets to decide what type that is, not you!
So you cannot say "this function returns type x" and then just return a String; your function actually has to be able to return any possible type that the caller may ask for. If I ask your function to return an Int, it has to return an Int. If I ask it to return a Bool, it has to return a Bool.
Your function claims to be able to return any possible type, but actually it only ever returns String. So it doesn't do what the type signature claims it does. Hence, a compile-time error.
A lot of people apparently misunderstand this. In (say) Java, you can say "this function returns Object", and then your function can return anything it wants. So the function decides what type it returns. In Haskell, the caller gets to decide what type is returned, not the function.
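For a concrete example of "the caller chooses" (my illustration, not part of the original answer), the standard read function, of type Read a => String -> a, lets the caller pick the result type; the Read constraint is what gives it a way to construct a value of whatever type is requested:
-- The caller chooses what a is; read can honour any choice because the
-- Read constraint tells it how to build a value of that type.
n :: Int
n = read "42"        -- the caller asks for an Int
bs :: [Bool]
bs = read "[True]"   -- same function, the caller asks for [Bool]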
Edit: Note that the type you've written, a -> b, is impossible. No function can ever have this type. There's no way a function can construct a value of type b out of thin air. The only way this can work is if some of the inputs also involve type b, or if b belongs to some kind of typeclass which allows value construction.
For example:
head :: [x] -> x
The return type here is x ("any possible type"), but the input type also mentions x, so this function is possible; you just have to return one of the values that was in the original list.
Similarly, gen :: a -> a is a perfectly valid function. But the only thing it can do is return its input unchanged (i.e., what the id function does).
This property of type signatures telling you what a function does is a very useful and powerful property of Haskell.
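To make that last point concrete, here is the one total implementation that the type a -> a admits:
-- The only total function of this type returns its argument unchanged,
-- which is exactly what the standard id function does.
gen :: a -> a
gen x = x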
gen :: a -> b does not mean "for some type a and some type b, gen must be of type a -> b"; it means "for any type a and any type b, gen must be of type a -> b".
To motivate this: if the type checker sees something like let x :: Int = gen "hello", it sees that gen is used as String -> Int here and then looks at gen's type to see whether it can be used that way. The type is a -> b, which can be specialized to String -> Int, so the type checker decides that this is fine and allows the call. That is, since the function is declared to have type a -> b, the type checker allows you to call the function with any argument type you want and to use the result at any type you want.
However, that clearly does not match the definition you gave the function. The function knows how to handle numbers as arguments, nothing else. And likewise it knows how to produce strings as its result, nothing else. So clearly it should not be possible to call the function with a string as its argument or to use the function's result as an Int. Since the type a -> b would allow that, it is clearly the wrong type for this function.
Your type signature gen :: a -> b is stating that your function can work for any type a (and provide any type b the caller of the function demands).
Besides the fact that such a function is hard to come by, the line gen 2 = "test" tries to return a String, which may well not be what the caller demands.
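If the intended behaviour really is "take a number, return a string", the fix is to give the function the monomorphic type it actually has, roughly like this (the catch-all case is my addition, only there to keep the function total):
gen :: Int -> String
gen 2 = "test"
gen _ = "other"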
Excellent answers. Given your profile, however, you seem to know Java, so I think it's valuable to connect this to Java as well.
Java offers two kinds of polymorphism:
Subtype polymorphism: e.g., every type is a subtype of java.lang.Object
Generic polymorphism: e.g., in the List<T> interface.
Haskell's type variables are a version of (2). Haskell doesn't really have a version of (1).
One way to think of generic polymorphism is in terms of templates (which is what C++ people call them): a type that has a type variable parameter is a template that can be specialized into a variety of monomorphic types. So for example, the interface List<T> is a template for constructing monomorphic interfaces like List<String>, List<List<String>> and so on, all of which have the same structure but differ only because the type variable T gets replaced uniformly throughout the signatures with the instantiation type.
The concept that "the caller chooses" that several responders have mentioned here is basically a friendly way of referring to instantiation. In Java, for example, the most common point where the type variable gets "chosen" is when an object is instantiated:
List<String> myList = new ArrayList<String>();
A second common point is that a subtype of a generic type may instantiate the supertype's variables:
class MyFunction implements Function<Integer, String> {
public String apply(Integer i) { ... }
}
A third is generic methods, which allow the caller to instantiate a variable that is not a parameter of the enclosing type:
/**
* Visitor-pattern style interface for a simple arithmetical language
* abstract syntax tree.
*/
interface Expression {
// The caller of `accept` implicitly chooses which type `R` is,
// by supplying a `Visitor<R>` with `R` instantiated to something
// of its choice.
<R> R accept(Expression.Visitor<R> visitor);
static interface Visitor<R> {
R constant(int i);
R add(Expression a, Expression b);
R multiply(Expression a, Expression b);
}
}
In Haskell, instantiation is carried out implicitly by the type inference algorithm. In any expression where you use gen :: a -> b, type inference will infer what types need to be instantiated for a and b, given the context in which gen is used. So basically, "caller chooses" means that any code that uses gen controls the types to which a and b will be instantiated; if I write gen [()], then I'm implicitly instantiating a to [()]. The error here means that your type declaration says that gen [()] is allowed, but your equation gen 2 = "test" implies that it's not.
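If you want to see that instantiation spelled out in the source, GHC's TypeApplications extension lets the caller supply the chosen type explicitly; a small sketch (not from the original answer):
{-# LANGUAGE TypeApplications #-}
-- The caller instantiates a to Int in head :: [a] -> a.
-- Without @Int, type inference would pick the same instantiation silently.
firstOne :: Int
firstOne = head @Int [1, 2, 3]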
In Haskell, type variables are implicitly quantified, but we can make this explicit:
{-# LANGUAGE ScopedTypeVariables #-}
gen :: forall a b . a -> b
gen x = ????
The "forall" is really just a type level version of a lambda, often written Λ. So gen is a function taking three arguments: a type, bound to the name a, another type, bound to the name b, and a value of type a, bound to the name x. When your function is called, it is called with those three arguments. Consider a saner case:
fst :: (a,b) -> a
fst (x1,x2) = x1
This gets translated to
fst :: forall (a::*) (b::*) . (a,b) -> a
fst = /\ (a::*) -> /\ (b::*) -> \ (x::(a,b)) ->
        case x of
          (x1, x2) -> x1
where * is the type (often called a kind) of normal concrete types. If I call fst (3::Int, 'x'), that gets translated into
fst Int Char (3Int, 'x')
where I use 3Int to represent specifically the Int version of 3. We could then calculate it as follows:
fst Int Char (3Int, 'x')
=
(/\ (a::*) -> /\ (b::*) -> \(x::(a,b)) -> case x of (x1,x2) -> x1) Int Char (3Int, 'x')
=
(/\ (b::*) -> \(x::(Int,b)) -> case x of (x1,x2) -> x1) Char (3Int, 'x')
=
(\(x::(Int,Char)) -> case x of (x1,x2) -> x1) (3Int, 'x')
=
case (3Int, 'x') of (x1,x2) -> x1
=
3Int
Whatever types I pass in, as long as the value I pass in matches, the fst function will be able to produce something of the required type. If you try to do this for a->b, you will get stuck.

Why does OCaml sometimes require eta expansion?

If I have the following OCaml function:
let myFun = CCVector.map ((+) 1);;
It works fine in Utop, and Merlin doesn't mark it as a compilation error. When I try to compile it, however, I get the following error:
Error: The type of this expression,
(int, '_a) CCVector.t -> (int, '_b) CCVector.t,
contains type variables that cannot be generalized
If I eta-expand it, however, then it compiles fine:
let myFun foo = CCVector.map ((+) 1) foo;;
So I was wondering why it doesn't compile in eta-reduced form, and also why the eta-reduced form seems to work in the toplevel (Utop) but not when compiling?
Oh, and the documentation for CCVector is here. The '_a part can be either `RO or `RW, depending on whether the vector is read-only or mutable.
What you have run into here is the value polymorphism restriction of the ML language family.
The aim of the restriction is to make let-polymorphism and side effects coexist soundly. For example, in the following definition:
let r = ref None
r cannot have a polymorphic type 'a option ref. Otherwise:
let () =
r := Some 1; (* use r as int option ref *)
match !r with
| Some s -> print_string s (* this time, use r as a different type, string option ref *)
| None -> ()
would be wrongly accepted by the type checker, but it crashes at run time, since the reference cell r is used at two incompatible types.
To fix this issue, much research was done in the '80s, and value polymorphism is one of the results. It restricts polymorphism to let bindings whose defining expression is syntactically "non-expansive". An eta-expanded form is non-expansive, so the eta-expanded version of myFun gets a polymorphic type, but the eta-reduced one does not. (More precisely, OCaml uses a relaxed version of this value polymorphism, but the story is basically the same.)
When the definition of a let binding is expansive, no polymorphism is introduced, and the type variables are left ungeneralized. These types are printed as '_a in the toplevel, and their intuitive meaning is: they must be instantiated to some concrete type later:
# let r = ref None (* expansive *);;
val r : '_a option ref = {contents = None} (* no polymorphism is allowed *)
(* type checker does not reject this,
hoping '_a is instantiated later. *)
We can fix the type '_a after the definition:
# r := Some 1;; (* fixing '_a to int *)
- : unit = ()
# r;;
- : int option ref = {contents = Some 1} (* Now '_a is unified with int *)
Once fixed, you cannot change the type, which prevents the crash above.
This typing delay is permitted until the end of the type-checking of the compilation unit. The toplevel is a unit which never ends, so you can have values with '_a type variables anywhere in the session. But under separate compilation, '_a variables must be instantiated to some type without type variables by the end of the .ml file:
(* test.ml *)
let r = ref None (* r : '_a option ref *)
(* end of test.ml. Typing fails because a non-generalizable type variable remains. *)
This is what happens to your myFun function when it is compiled.
AFAIK, there is no perfect solution to the problem of polymorphism and side effects. Like other solutions, the value polymorphism restriction has its own drawback: if you want a polymorphic value, you must make its definition non-expansive, i.e. you must eta-expand myFun. This is a bit lousy, but it is considered acceptable.
You can read some other answers:
http://caml.inria.fr/pub/old_caml_site/FAQ/FAQ_EXPERT-eng.html#variables_de_types_faibles
What is the difference between 'a and '_l?
or search for something like "value restriction ml"
