Why both the typeclass and the implicit argument mechanism? (Lean)

So we can have explicit arguments, denoted by ().
We can also have implicit arguments, denoted by {}.
So far so good.
However, why do we also need the [] notation for type classes specifically?
What is the difference between the following two:
theorem foo {x : Type} : ∀s : inhabited x, x := sorry
theorem foo' {x : Type} [s : inhabited x] : x := sorry

Implicit arguments are inserted automatically by Lean's elaborator. The {x : Type} that appears in both of your definitions is one example of an implicit argument: if you have s : inhabited nat, then you can write foo s, which will elaborate to a term of type nat, because the x argument can be inferred from s.
Type class arguments are another kind of implicit argument. Rather than being inferred from later arguments, the elaborator runs a procedure called type class resolution that will attempt to generate a term of the designated type. (See chapter 10 of https://leanprover.github.io/theorem_proving_in_lean/theorem_proving_in_lean.pdf.) So, your foo' will actually take no arguments at all. If the expected type x can be inferred from context, Lean will look for an instance of inhabited x and insert it:
def foo' {x : Type} [s : inhabited x] : x := default x
instance inh_nat : inhabited nat := ⟨3⟩
#eval (2 : ℕ) + foo' -- 5
Here, Lean infers that x must be nat, and finds and inserts the instance of inhabited nat, so that foo' alone elaborates to a term of type nat.
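The same distinction exists in Haskell: a class constraint is filled in by instance resolution, while an ordinary argument must be passed by hand. A minimal sketch, where the Inhabited class, defaultVal method, and InhabitedDict wrapper are hypothetical names standing in for Lean's inhabited and its instance:

```haskell
-- Hypothetical analogue of Lean's inhabited typeclass.
class Inhabited a where
  defaultVal :: a

instance Inhabited Int where
  defaultVal = 3

-- Like foo': the Inhabited dictionary is found by instance resolution.
foo' :: Inhabited a => a
foo' = defaultVal

-- Like foo: the witness is an ordinary (explicit) argument.
newtype InhabitedDict a = InhabitedDict { getDefault :: a }

foo :: InhabitedDict a -> a
foo = getDefault

main :: IO ()
main = do
  print ((2 :: Int) + foo')          -- instance resolution picks Inhabited Int
  print (foo (InhabitedDict (7 :: Int)))  -- the dictionary is passed by hand
```

The first print mirrors the Lean #eval above: the elaborator (here, GHC's instance resolution) finds the needed evidence; in the second, we supply it ourselves.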


How can Haskell integer literals be comparable without being in the Eq class?

In Haskell (at least with GHC v8.8.4), being in the Num class does NOT imply being in the Eq class:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> let { myEqualP :: Num a => a -> a -> Bool ; myEqualP x y = x==y ; }
<interactive>:6:60: error:
• Could not deduce (Eq a) arising from a use of ‘==’
from the context: Num a
bound by the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
at <interactive>:6:7-41
Possible fix:
add (Eq a) to the context of
the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
• In the expression: x == y
In an equation for ‘myEqualP’: myEqualP x y = x == y
λ>
It seems this is because a Num instance can be defined even for, say, function types.
Furthermore, if we keep GHCi from prematurely fixing the type of integer literals, they carry just the Num constraint:
λ>
λ> :set -XNoMonomorphismRestriction
λ>
λ> x=42
λ> :type x
x :: Num p => p
λ>
Hence, terms like x or 42 above have no reason to be comparable.
But still, they happen to be:
λ>
λ> y=43
λ> x == y
False
λ>
Can somebody explain this apparent paradox?
Integer literals can't be compared without using Eq. But that's not what is happening, either.
In GHCi, under NoMonomorphismRestriction (which is the default in GHCi nowadays; I'm not sure about GHC 8.8.4) x = 42 results in a variable x of type forall p. Num p => p.1
Then you do y = 43, which similarly results in the variable y having type forall q. Num q => q.2
Then you enter x == y, and GHCi has to evaluate it in order to print True or False. That evaluation cannot be done without picking a concrete type for both p and q (and it has to be the same one). Each type has its own code for the definition of ==, so there's no way to run the code for == without deciding which type's code to use.3
However each of x and y can be used as any type in Num (because they have a definition that works for all of them)4. So we can just use (x :: Int) == y and the compiler will determine that it should use the Int definition for ==, or x == (y :: Double) to use the Double definition. We can even do this repeatedly with different types! None of these uses change the type of x or y; we're just using them each time at one of the (many) types they support.
Without the concept of defaulting, a bare x == y would just produce an Ambiguous type variable error from the compiler. The language designers thought that would be extremely common and extremely annoying with numeric literals in particular (because the literals are polymorphic, but as soon as you do any operation on them you need a concrete type). So they introduced rules that some ambiguous type variables should be defaulted to a concrete type if that allows compilation to continue.5
So what is actually happening when you do x == y is that the compiler is just picking Integer to use for x and y in that particular expression, because you haven't given it enough information to pin down any particular type (and because the defaulting rules apply in this situation). Integer has an Eq instance so it can use that, even though the most general types of x and y don't include the Eq constraint. Without picking something it couldn't possibly even attempt to call == (and of course the "something" it picks has to be in Eq or it still won't work).
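The whole situation can be reproduced in a compiled module. A minimal sketch (the explicit signatures just make the polymorphism from the GHCi session visible):

```haskell
-- Polymorphic "literals", as in the GHCi session above.
x :: Num p => p
x = 42

y :: Num q => q
y = 43

main :: IO ()
main = do
  print (x == y)              -- ambiguous type; defaulting picks Integer
  print ((x :: Int) == y)     -- the same x and y, used at Int
  print (x == (y :: Double))  -- and used again at Double
```

All three comparisons print False; none of them changes the type of x or y, each just instantiates them at some concrete type with an Eq instance.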
If you turn on -Wtype-defaults (which is included in -Wall), the compiler will print a warning whenever it applies defaulting6, which makes the process more visible.
1 The forall p part is implicit in standard Haskell, because all type variables are automatically introduced with forall at the beginning of the type expression in which they appear. You have to turn on extensions to even write the forall manually; either ExplicitForAll just for the ability to write forall, or any one of the many extensions that actually add functionality that makes forall useful to write explicitly.
2 GHCi will probably pick p again for the type variable, rather than q. I'm just using a different one to emphasise that they're different variables.
3 Technically it's not each type that necessarily has a different ==, but each Eq instance. Some of those instances are polymorphic, so they apply to multiple types, but that only really comes up with types that have some structure (like Maybe a, etc). Basic types like Int, Integer, Double, Char, Bool, each have their own instance, and each of those instances has its own code for ==.
4 In the underlying system, a type like forall p. Num p => p is in fact much like a function; one that takes a Num instance for a concrete type as a parameter. To get a concrete value you have to first "apply the function" to a type's Num instance, and only then do you get an actual value that could be printed, compared with other things, etc. In standard Haskell these instance parameters are always invisibly passed around by the compiler; some extensions allow you to manipulate this process a little more directly.
This is the root of what's confusing about why x == y works when x and y are polymorphic variables. If you had to explicitly pass around the type/instance arguments it would be obvious what's going on here, because you would have to manually apply both x and y to something and compare the results.
5 The gist of the defaulting rules is that if the constraints on an ambiguous type variable are:
- all built-in classes, and
- at least one of them is a numeric class (Num, Floating, etc.),
then GHC will try Integer to see if that type checks and allows all other constraints to be resolved. If that doesn't work it will try Double, and if that doesn't work either it reports an error.
You can set the types it will try with a default declaration (the "default default" being default (Integer, Double)), but you can't customise the conditions under which it will try to default things, so changing the default types is of limited use in my experience.
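A default declaration can be seen in action in a small module. A hypothetical example: putting Double first in the default list makes GHC try it before Integer:

```haskell
-- Reorder the default list so Double is tried before Integer.
default (Double, Integer)

main :: IO ()
main = print (2 ^ 70)
-- The base of (^) is an ambiguous Num variable, so it defaults to Double.
-- The exponent needs Integral, which Double fails, so it defaults to Integer.
```

With the "default default" of default (Integer, Double), the same print would instead show the exact integer 1180591620717411303424.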
GHCi however comes with extended default rules that are a bit more useful in an interpreter (because it has to do type inference line-by-line instead of on the whole module at once). You can turn those on in compiled code with the ExtendedDefaultRules extension (or turn them off in GHCi with NoExtendedDefaultRules), but again, neither of those options is particularly useful in my experience. It's annoying that the interpreter and the compiler behave differently, but the fundamental difference between module-at-a-time compilation and line-at-a-time interpretation means that switching either's default rules to work consistently with the other is even more annoying. (This is also why NoMonomorphismRestriction is in effect by default in the interpreter now; the monomorphism restriction does a decent job at achieving its goals in compiled code but is almost always wrong in interpreter sessions.)
6 You can also use a typed hole in combination with the asTypeOf helper to get GHC to tell you what type it's inferring for a sub-expression like this:
λ :t x
x :: Num p => p
λ :t y
y :: Num p => p
λ (x `asTypeOf` _) == y
<interactive>:19:15: error:
• Found hole: _ :: Integer
• In the second argument of ‘asTypeOf’, namely ‘_’
In the first argument of ‘(==)’, namely ‘(x `asTypeOf` _)’
In the expression: (x `asTypeOf` _) == y
• Relevant bindings include
it :: Bool (bound at <interactive>:19:1)
Valid hole fits include
x :: forall p. Num p => p
with x
(defined at <interactive>:1:1)
it :: forall p. Num p => p
with it
(defined at <interactive>:10:1)
y :: forall p. Num p => p
with y
(defined at <interactive>:12:1)
You can see it tells us nice and simply Found hole: _ :: Integer, before proceeding with all the extra information it likes to give us about errors.
A typed hole (in its simplest form) just means writing _ in place of an expression. The compiler errors out on such an expression, but it tries to give you information about what you could use to "fill in the blank" in order to get it to compile; most helpfully, it tells you the type of something that would be valid in that position.
foo `asTypeOf` bar is an old pattern for adding a bit of type information. It returns foo but it restricts (this particular usage of) it to be the same type as bar (the actual value of bar is totally unused). So if you already have a variable d with type Double, x `asTypeOf` d will be the value of x as a Double.
Here I'm using asTypeOf "backwards": instead of using the thing on the right to constrain the type of the thing on the left, I'm putting a hole on the right (which could have any type), and asTypeOf conveniently makes sure it's the same type as x without otherwise changing how x is used in the overall expression. That means the same type inference still applies, including defaulting, which isn't always the case if you lift a small part of a larger expression out to ask GHCi for its type with :t; in particular, :t x won't tell us Integer, but Num p => p.
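For completeness, the "forward" use of asTypeOf looks like this (a minimal sketch; asTypeOf is in the Prelude):

```haskell
main :: IO ()
main = do
  let d = 0 :: Double
  -- asTypeOf returns its first argument but forces it to the type of the
  -- second, so the literal 42 is used here at type Double.
  print (42 `asTypeOf` d)
```

This prints 42.0 rather than 42, because the literal was pinned to Double instead of being defaulted to Integer.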

What is the difference between AppTy and FunTy in Haskell Core type definitions?

I'm reading about Core Haskell's Type and would really appreciate it if someone could explain the difference between FunTy and AppTy with examples. I don't know what the norms are, but I'm including the relevant portion from the page (let me know if this is discouraged):
type TyVar = Var
data Type = TyVarTy TyVar -- Type variable
| AppTy Type Type -- Application
| TyConApp TyCon [Type] -- Type constructor application
| FunTy Type Type -- Arrow type
| ForAllTy Var Type -- Polymorphic type
| LitTy TyLit -- Type literals
data TyLit = NumTyLit Integer -- A number
| StrTyLit FastString -- A string
A Type is an AST for a core type-level expression, like Int, or Maybe Int, or a -> Maybe (a,b) or whatever.
All of AppTy, TyConApp, and FunTy represent the application of one type-level expression to another type-level expression.
The most general is AppTy Type Type, which represents the application of any Type (i.e., type-level expression) to any other Type (i.e., type-level expression).
A FunTy Type Type is a special case of type application, representing the application of a specific constructor, namely the (->) constructor to its two Type arguments. The type-level expression FunTy x y is semantically equivalent to AppTy (AppTy arrowType x) y where arrowType is some Type representing the function type constructor (->). (In the actual GHC code, the FunTy constructor is more complicated than the simplified version presented in the Wiki, but you can ignore that for now.)
A TyConApp is another special case, representing the application of any constructor (type TyCon) to zero or more arguments. A TyConApp need not be saturated, so the following Types are semantically equivalent:
TyConApp pairTyCon [x, y]
AppTy (TyConApp pairTyCon [x]) y
AppTy (AppTy (TyConApp pairTyCon []) x) y
In particular, if maybeTyCon :: TyCon, pairTyCon :: TyCon, and arrowTyCon :: TyCon represent the type constructors for Maybe, (,), and (->) respectively, and aVar, bVar :: Var represent the type variables a and b respectively then the type expression a -> Maybe (a,b) would most likely be represented as:
FunTy (TyVarTy aVar) (TyConApp maybeTyCon [TyVarTy aVar, TyVarTy bVar])
but could also be represented as:
AppTy (TyConApp arrowTyCon [TyVarTy aVar])
(TyConApp maybeTyCon [AppTy (AppTy (TyConApp pairTyCon []) (TyVarTy aVar)) (TyVarTy bVar)])
Which representation a given application gets, and how long it stays in that form, depends on how it was formed and on various internal compiler rules/invariants. It may be helpful to look at the actual compiler source (.../GHC/Core/TyCo/Rep.hs). For example, a comment in the source indicates that AppTy's first field isn't allowed to be a TyConApp, which rules out most of the "equivalent" representations described above; note that they are disallowed by internal compiler invariants, not by the Type representation itself.
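That invariant can be illustrated with a toy version of the AST. This is a hypothetical miniature (simplified TyCon, no FunTy-specific logic), with a smart constructor in the spirit of GHC's mkAppTy: whenever the head of an application is a TyConApp, the argument is absorbed into it instead of building an AppTy:

```haskell
-- A hypothetical miniature of the Type AST from the question.
newtype TyCon = TyCon String deriving (Eq, Show)

data Type
  = TyVarTy String
  | AppTy Type Type
  | TyConApp TyCon [Type]
  | FunTy Type Type
  deriving (Eq, Show)

-- Smart constructor maintaining the invariant that AppTy's first field
-- is never a TyConApp.
mkAppTy :: Type -> Type -> Type
mkAppTy (TyConApp tc args) arg = TyConApp tc (args ++ [arg])
mkAppTy fun arg = AppTy fun arg

main :: IO ()
main = do
  let pair = TyConApp (TyCon "(,)") []
  -- Repeated application stays one flat TyConApp rather than nested AppTys.
  print (mkAppTy (mkAppTy pair (TyVarTy "a")) (TyVarTy "b"))
```

Building (a, b) this way yields TyConApp (TyCon "(,)") [TyVarTy "a",TyVarTy "b"], never the nested AppTy forms listed above.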

Coq: set default implicit parameters

Suppose I have a code with a lot of modules and sections. In some of them there are polymorphic definitions.
Module MyModule.
Section MyDefs.
(* Implicit. *)
Context {T: Type}.
Inductive myIndType: Type :=
| C : T -> myIndType.
End MyDefs.
End MyModule.
Module AnotherModule.
Section AnotherSection.
Context {T: Type}.
Variable P: Type -> Prop.
(* ↓↓ ↓↓ - It's pretty annoying. *)
Lemma lemma: P (@myIndType T).
End AnotherSection.
End AnotherModule.
Usually Coq can infer the type, but often I still get a typing error. In such cases you have to specify the implicit type explicitly with @, which spoils the readability.
Cannot infer the implicit parameter _ of _ whose type is "Type".
Is there a way to avoid this? Is it possible to specify something like default parameters, which will be substituted every time Coq cannot guess a type?
You can use a typeclass to implement this notion of default value:
Class Default (A : Type) (a : A) :=
withDefault { value : A }.
Arguments withDefault {_} {_}.
Arguments value {_} {_}.
Instance default (A : Type) (a : A) : Default A a :=
withDefault a.
Definition myNat `{dft : Default nat 3} : nat :=
value dft.
Eval cbv in myNat.
(* = 3 : nat *)
Eval cbv in (@myNat (withDefault 5)).
(* = 5 : nat *)

Wrapping / Unwrapping Universally Quantified Types

I have imported a data type, X, defined as
data X a = X a
Locally, I have defined a universally quantified data type, Y
type Y = forall a. X a
Now I need to define two functions, toY and fromY. For fromY, this definition works fine:
fromY :: Y -> X a
fromY a = a
but if I try the same thing for toY, I get an error
Couldn't match type 'a' with 'a0'
'a' is a rigid type variable bound by the type signature for 'toY :: X a -> y'
'a0' is a rigid type variable bound by the type signature for 'toY :: X a -> X a0'
Expected type: X a0
Actual type: X a
If I understand correctly, the type signature for toY is expanding to forall a. X a -> (forall a0. X a0) because Y is defined as a synonym, rather than a newtype, and so the two as in the definitions don't match up.
But if this is the case, then why does fromY type-check successfully? And is there any way around this problem other than using unsafeCoerce?
You claim to define an existential type, but you do not.
type Y = forall a. X a
defines a universally quantified type. For something to have type Y, it must have type X a for every a. To make an existentially quantified type, you always need to use data, and I find the GADT syntax easier to understand than the traditional existential one.
data Y where
Y :: forall a . X a -> Y
The forall there is actually optional, but I think clarifies things.
I'm too sleepy right now to work out your other questions, but I'll try again tomorrow if no one else does.
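With that GADT definition, toY is just the constructor. A minimal sketch (describeY is a hypothetical helper showing that we can still pattern match, even though the wrapped type stays hidden):

```haskell
{-# LANGUAGE GADTs #-}

data X a = X a

data Y where
  Y :: X a -> Y

-- toY is simply the Y constructor.
toY :: X a -> Y
toY = Y

-- We can match on Y, but we learn nothing about the hidden type a.
describeY :: Y -> String
describeY (Y (X _)) = "an X of some hidden type"

main :: IO ()
main = putStrLn (describeY (toY (X (42 :: Int))))
```
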
Remark:
This is more like a comment but I could not really put it there as it would have been unreadable; please forgive me this one time.
Aside from what dfeuer already told you, you might see (when you use his answer) that toY is now really easy to do but you might have trouble defining fromY – because you basically lose the type-information, so this will not work:
{-# LANGUAGE GADTs #-}
module ExTypes where
data X a = X a
data Y where
Y :: X a -> Y
fromY :: Y -> X a
fromY (Y a) = a
as here you have two different as: one from the constructor Y and one from the result type X a. Indeed, if you strip the signature and try to compile just fromY (Y a) = a, the compiler will tell you that the type a escapes:
Couldn't match expected type `t' with actual type `X a'
because type variable `a' would escape its scope
This (rigid, skolem) type variable is bound by
a pattern with constructor
Y :: forall a. X a -> Y,
in an equation for `fromY'
I think the only thing you will have left now will be something like this:
useY :: (forall a . X a -> b) -> Y -> b
useY f (Y x) = f x
but this might prove not to be too useful.
The thing is that you normally should constrain the forall a there a bit more (with type-classes) to get any meaningful behavior – but of course I cannot help here.
This wiki article might be interesting for you on the details.
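As hinted above, constraining the forall with a type class makes the wrapped value usable again. A hypothetical sketch using Show (showY is an illustrative helper, not from the question):

```haskell
{-# LANGUAGE GADTs #-}

data X a = X a

-- The Show dictionary is packed into the existential alongside the value,
-- so it travels with the hidden type.
data Y where
  Y :: Show a => X a -> Y

showY :: Y -> String
showY (Y (X a)) = show a

main :: IO ()
main = putStrLn (showY (Y (X (42 :: Int))))
```

Pattern matching on Y brings the Show a dictionary back into scope, which is why show a type-checks even though a itself is unknown.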

What justification is there for the type of f x = f x in Haskell?

Haskell gives f x = f x the type t1 -> t, but could someone explain why?
And, is it possible for any other, nonequivalent function to have this same type?
Okay, starting from the function definition f x = f x, let's step through and see what we can deduce about the type of f.
Start with a completely unspecified type variable, a. Can we deduce more than that? Yes, we observe that f is a function taking one argument, so we can change a into a function between two unknown type variables, which we'll call b -> c. Whatever type b stands for is the type of the argument x, and whatever type c stands for must be the type of the right-hand side of the definition.
What can we figure out about the right-hand side? Well, we have f, which is a recursive reference to the function we're defining, so its type is still b -> c, where both type variables are the same as for the definition of f. We also have x, which is a variable bound within the definition of f and has type b. Applying f to x type checks, because they're sharing the same unknown type b, and the result is c.
At this point everything fits together and with no other restrictions, we can make the type variables "official", resulting in a final type of b -> c where both variables are the usual, implicitly universally quantified type variables in Haskell.
In other words, f is a function that takes an argument of any type and returns a value of any type. How can it return any possible type? It can't, and we can observe that evaluating it produces only an infinite recursion.
For the same reason, any function with the same type will be "equivalent" in the sense of never returning when evaluated.
An even more direct version is to remove the argument entirely:
foo :: a
foo = foo
...which is also universally quantified and represents a value of any type. This is pretty much equivalent to undefined.
f x = undefined
has the (alpha) equivalent type f :: t -> a.
If you're curious, Haskell's type system is derived from Hindley–Milner. Informally, the typechecker starts off with the most permissive types for everything, and unifies the various constraints until what remains is consistent (or not). In this case, the most general type is f :: t1 -> t, and there are no additional constraints.
Compare to
f x = f (f x)
which has inferred type f :: t -> t, due to unifying the types of the argument of f on the LHS and the argument to the outer f on the RHS.
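A quick way to check both claims is to write the inferred most-general types as explicit signatures; if they really are the inferred types, the definitions still compile (a minimal sketch; neither function terminates, so main only reports that type checking succeeded):

```haskell
-- f x = f x: argument and result types are completely unconstrained.
f :: t1 -> t
f x = f x

-- f x = f (f x): the outer call forces argument and result to unify.
g :: t -> t
g x = g (g x)

main :: IO ()
main = putStrLn "both signatures accepted"
```

Tightening f's signature further (say to t -> t) would also compile, since t -> t is an instance of t1 -> t; but t1 -> t would be rejected for g, because unification genuinely constrains its two type variables to be equal.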

Resources