What is a "System FC2 grammar for Kinds"? - haskell

I'm trying to wrap my head around this blog post about the ConstraintKinds extension.
There was a post in the comment section which I totally did not understand. Here it is:
Adam M says: 14 September 2011 19:53 UTC
Wow, this sounds great. Is it scheduled to be part of the official GHC 7.4?
Also, does this mean that you've introduced a third production in the System FC2 grammar for Kinds? Currently it has * and k~>k as the only alternatives where k1~>k2 is (basically) the kind of (forall a::k1 . (t::k2)). It sounds like this would add k1==>k2 which is the kind of (a::k1 => (t::k2)). Or are the two kinds actually the same?
Could someone, please analyze this step-by-step or at least provide some links which would help me wrap my head around this myself. Some key moments I should pinpoint:
What is a "System FC2 grammar for Kinds"? (Probably the main and the most general one, whose answer would embed the two other ones.)
I tried explaining why "k1~>k2 is (basically) the kind of (forall a::k1 . (t::k2))". As far as I understand, ~> is some special notation for -> in kinds, since * and k1 -> k2 are the only inhabitants of standard Haskell's kind system (which fits their description: "Currently it has * and k~>k as the only alternatives"). Thus, the formula (forall a::k1 . (t::k2)) means that if we take an inhabited kind k1, it can be mapped onto another kind k2 iff that one is inhabited (due to Curry-Howard working for kinds the same way it works for types). Is that right? (P.S.: I see how this intuition fails if I do not understand the notion of inhabitance for kinds; do kinds correspond to true provable formulae (see comments) when they have an inhabited type as an inhabitant, or an arbitrary type? The intuition fails in the second case.)
What does the => mean in the formula for k1==>k2, namely (a::k1 => (t::k2))?
The response this comment got:
Max says:
14 September 2011 21:11 UTC
Adam: it's not that complicated! It just adds the base kind Constraint to the grammar of kinds. This is a kind of types inhabited by values, just like the existing kinds * and #.
So the author claims that Adam M overcomplicated the extension. Their response is quite easy to understand. Anyway, even if Adam M's comment is not true, I think it is totally worth attention as it introduced some unfamiliar concepts to me.

"System FC2" is a term coined by Weirich et al in their 2010 paper "Generative type abstraction and type-level computation" (link). It refers to the addition of "roles" to System FC and formed the basis for the implementation in GHC described in the 2016 paper "Safe Zero-cost Coercions for Haskell. System FC, in turn, is the system originally described in this paper (or actually an earlier paper of which this is post-publication extended version), which extended the usual polymorphic lambda calculus of System F with type equalities.
However, I think Adam M was probably using the term "System FC2" less formally to refer to whatever type system GHC was implementing at the time the comment was written. So, the meaning of the phrase:
introduced a third production in the System FC2 grammar for Kinds
is really:
added a third production rule to the grammar of kinds, as kinds are currently implemented in GHC
His claim was that the grammar for kinds currently had two production rules:
* is a kind
If k1 and k2 are kinds, then k1 ~> k2 is a kind.
and he was asking if this extension gave a third production rule:
If k1 and k2 are kinds, then k1 ==> k2 is a kind.
As you've guessed, he introduced the operator ~> to differentiate the kind-level arrow from the type-level arrow. (In GHC, both the kind-level and type-level arrow operators are written the same way ->.) He gave a definition of ~> as:
where k1~>k2 is (basically) the kind of (forall a::k1 . (t::k2)).
which is interpretable, but very imprecise. He was trying to use forall here as a sort of type-level lambda. It's not, but you can imagine that if you had a type forall a. t, you could instantiate it at a specific type a, and if for all a :: k1 you get t :: k2, then this polymorphic type sort of represents an implicit type function of kind k1 ~> k2. But the polymorphism / universal quantification is irrelevant here. What's important is how a appears in the expression t, and the extent to which you can express the type-level expression t as, say, a type-level function:
type Whatever a = t
or if Haskell had type-level lambdas, a type-level lambda with a as an argument and t as its body:
Lambda a. t
You won't get anywhere by trying to seriously consider forall a. t as having kind k1 -> k2.
Based on this loose interpretation of ~>, he tried to ask if there was a new, kind-level operator ==> such that the relationship between the kind-level operator ~> and the type-level expression forall a. b was the same as the relationship between a new hypothetical kind-level operator ==> and the type-level expression a => b. I think the only reasonable way to interpret this question is to imagine that he wanted to consider the type expression a => b as being parameterized by a, the same way he was imagining forall a. b as being parameterized by a, so he wanted to consider a type-level function of the form:
type Something a = a => b
and consider the kind of Something. Here, the kind of Something is Constraint ~> *. So, I guess the answer to his final question is, "the two kinds are actually the same", and no other kind-level operator besides ~> is needed.
Max's reply explained that the extension didn't add any new kind-level operator, but merely added a new primitive kind, Constraint, at the same grammatical level as the kinds * and #. The kind-level ~> operator has the same relationship to type-level application f a whether the primitive kinds involved are *, # or Constraint. So, for example, given:
{-# LANGUAGE ConstraintKinds, RankNTypes #-}
type Whatever a = Maybe [a]
type Something a = a => Int
the kinds of Whatever and Something are both expressed in terms of the kind operator ~> (in GHC, written simply ->):
λ> :kind Whatever
Whatever :: * -> *
λ> :kind Something
Something :: Constraint -> *

Related

What does the star (*) in instance documentation mean? [duplicate]

Browsing the haddocks of various packages I often come along instance documentations that look like this (Control.Category):
Category k (Coercion k)
Category * (->)
or this (Control.Monad.Trans.Identity):
MonadTrans (IdentityT *)
What exactly does the kind signature mean here? It doesn't show up in the source, but I have noticed that it seems to occur in modules that use the PolyKinds extension. I suspect it is something like a TypeApplication, but with a kind, so that e.g. the last example means that IdentityT is a monad transformer if its first argument has kind *.
So my questions are:
Is my interpretation correct and what exactly does the kind signature refer to?
In the first Category instance, how am I supposed to know that k is a kind and not a type? Or do I just have to know the arity of Category?
What is the source code analog to this syntax?
I am not asking for an explanation of kinds.
To quote Richard Eisenberg’s recent post on the haskell-cafe mailing list:
Haddock struggles sometimes to render types with -XPolyKinds enabled. The problem is that GHC generally does not require kind arguments to be written and it does not print them out (unless you say -fprint-explicit-kinds). But Haddock, I believe, prints out kinds whenever -XPolyKinds is on. So the two different definitions are really the same: it's just that one module has -XPolyKinds and the other doesn't.
The * is the kind of ordinary types. So Int has kind * (we write Int :: *) while Maybe has kind * -> *. Typeable actually has kind forall k. k -> Constraint, meaning that it's polykinded. In the first snippet below, the * argument to Typeable instantiates k with *, because type variable a has kind *.
So yes, as you guessed, it has to do with PolyKinds. Haddock renders these poly-kinded types with a sort of “explicit kind application”. It just so happens that Category is poly-kinded, having the kind forall k. (k -> k -> *) -> Constraint, so Haddock renders the kind application alongside each instance.
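You can reproduce something like Haddock's rendering yourself in GHCi via the -fprint-explicit-kinds flag (a sketch; the exact output shape varies by GHC version, and newer GHCs print the kind argument as @* instead):
λ> import Control.Category
λ> :set -fprint-explicit-kinds
λ> :info Category
...
instance Category * (->) -- Defined in 'Control.Category'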
In my opinion, this is a bug or misfeature of Haddock, since there is no equivalent source code analog as far as I know. It is confusing, and I don’t know of a better way to understand it than to recognize the way it usually manifests and visually infer what’s going on from the context.

What is predicativity?

I have pretty decent intuition about types Haskell prohibits as "impredicative": namely ones where a forall appears in an argument to a type constructor other than ->. But just what is predicativity? What makes it important? How does it relate to the word "predicate"?
The central question of these type systems is: "Can you substitute a polymorphic type in for a type variable?". Predicative type systems are the no-nonsense schoolmarm answering, "ABSOLUTELY NOT", while impredicative type systems are your carefree buddy who thinks that sounds like a fun idea and what could possibly go wrong?
Now, Haskell muddies the discussion a bit because it believes polymorphism should be useful but invisible. So for the remainder of this post, I will be writing in a dialect of Haskell where uses of forall are not just allowed but required. This way we can distinguish between the type a, which is a monomorphic type which draws its value from a typing environment that we can define later, and the type forall a. a, which is one of the harder polymorphic types to inhabit. We'll also allow forall to go pretty much anywhere in a type -- as we'll see, GHC restricts its type syntax as a "fail-fast" mechanism rather than as a technical requirement.
Suppose we have told the compiler id :: forall a. a -> a. Can we later ask to use id as if it had type (forall b. b) -> (forall b. b)? Impredicative type systems are okay with this, because we can instantiate the quantifier in id's type to forall b. b, and substitute forall b. b for a everywhere in the result. Predicative type systems are a bit more wary of that: only monomorphic types are allowed in. (So if we had a particular b, we could write id :: b -> b.)
There's a similar story about [] :: forall a. [a] and (:) :: forall a. a -> [a] -> [a]. While your carefree buddy may be okay with [] :: [forall b. b] and (:) :: (forall b. b) -> [forall b. b] -> [forall b. b], the predicative schoolmarm isn't, so much. In fact, as you can see from the only two constructors of lists, there is no way to produce lists containing polymorphic values without instantiating the type variable in their constructors to a polymorphic value. So although the type [forall b. b] is allowed in our dialect of Haskell, it isn't really sensible -- there's no (terminating) terms of that type. This motivates GHC's decision to complain if you even think about such a type -- it's the compiler's way of telling you "don't bother".*
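(For what it's worth, modern GHC's ImpredicativeTypes extension, put on a solid footing by the Quick Look algorithm in GHC 9.2, does let you side with the carefree buddy; a minimal sketch, using the inhabited type forall a. a -> a rather than the uninhabited forall b. b:)
{-# LANGUAGE ImpredicativeTypes #-}
-- the list type constructor is applied to a polymorphic type:
-- each element must itself be polymorphic
ids :: [forall a. a -> a]
ids = [id, \x -> x]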
Well, what makes the schoolmarm so strict? As usual, the answer is about keeping type-checking and type-inference doable. Type inference for impredicative types is right out. Type checking seems like it might be possible, but it's bloody complicated and nobody wants to maintain that.
On the other hand, some might object that GHC is perfectly happy with some types that appear to require impredicativity:
> :set -XRank2Types
> :t id :: (forall b. b) -> (forall b. b)
{- no complaint, but very chatty -}
It turns out that some slightly-restricted versions of impredicativity are not too bad: specifically, type-checking higher-rank types (which allow type variables to be substituted by polymorphic types when they are only arguments to (->)) is relatively simple. You do lose type inference above rank-2, and principal types above rank-1, but sometimes higher rank types are just what the doctor ordered.
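For example, here is a small rank-2 program that GHC accepts without complaint (a sketch; the function name is mine):
{-# LANGUAGE RankNTypes #-}
-- the argument f must itself be polymorphic: this is a rank-2 type
applyToBoth :: (forall a. a -> a) -> (Int, Bool)
applyToBoth f = (f 0, f True)

main :: IO ()
main = print (applyToBoth id) -- prints (0,True)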
I don't know about the etymology of the word, though.
* You might wonder whether you can do something like this:
data FooTy a where
  FooTm :: FooTy (forall a. a)
Then you would get a term (FooTm) whose type had something polymorphic as an argument to something other than (->) (namely, FooTy), you don't have to cross the schoolmarm to do it, and so the belief "applying non-(->) stuff to polymorphic types isn't useful because you can't make them" would be invalidated. GHC doesn't let you write FooTy, and I will admit I'm not sure whether there's a principled reason for the restriction or not.
(Quick update some years later: there is a good, principled reason that FooTm is still not okay. Namely, the way that GADTs are implemented in GHC is via type equalities, so the expanded type of FooTm is actually FooTm :: forall a. (a ~ forall b. b) => FooTy a. Hence to actually use FooTm, one would indeed need to instantiate a type variable with a polymorphic type. Thanks to Stephanie Weirich for pointing this out to me.)
Let me just add a point regarding the "etymology" issue, since the other answer by @DanielWagner covers much of the technical ground.
A predicate on something like a is a -> Bool. Now a predicate logic is one that can in some sense reason about predicates -- so if we have some predicate P and we can talk about, for a given a, P(a), now in a "predicate logic" (such as first-order logic) we can also say ∀a. P(a). So we can quantify over variables and discuss the behavior of predicates over such things.
Now, in turn, we say a statement is predicative if all of the things a predicate is applied to are introduced prior to it. So statements are "predicated on" things that already exist. In turn, a statement is impredicative if it can in some sense refer to itself by its "bootstraps".
So in the case of e.g. the id example above, we find that we can give a type to id such that it takes something of the type of id to something else of the type of id. So now we can give a function a type where a quantified variable (introduced by forall a.) can "expand" to be the same type as that of the entire function itself!
Hence impredicativity introduces the possibility of a certain "self reference". But wait, you might say, wouldn't such a thing lead to contradiction? The answer is: "well, sometimes." In particular, System F, which is the polymorphic lambda calculus and the essential basis of GHC's Core language, allows a form of impredicativity that nonetheless has two levels -- the value level, and the type level, which is allowed to quantify over itself. In this two-level stratification, we can have impredicativity and not contradiction/paradox.
Although note that this neat trick is very delicate and easy to screw up by the addition of more features, as this collection of articles by Oleg indicates: http://okmij.org/ftp/Haskell/impredicativity-bites.html
I'd like to make a comment on the etymology issue, since @sclv's answer isn't quite right (etymologically, not conceptually).
Go back in time, to the days of Russell when everything is set theory— including logic. One of the logical notions of particular import is the "principle of comprehension"; that is, given some logical predicate φ:A→2 we would like to have some principle to determine the set of all elements satisfying that predicate, written as "{x | φ(x) }" or some variation thereon. The key point to bear in mind is that "sets" and "predicates" are viewed as being fundamentally different things: predicates are mappings from objects to truth values, and sets are objects. Thus, for example, we may allow quantifying over sets but not quantifying over predicates.
Now, Russell was rather concerned by his eponymous paradox, and sought some way to get rid of it. There are numerous fixes, but the one of interest here is to restrict the principle of comprehension. But first, the formal definition of the principle: ∃S.∀x.S x ↔︎ φ(x); that is, for our particular φ there exists some object (i.e., set) S such that for every object (also a set, but thought of as an element) x, we have that S x (you can think of this as meaning "x∈S", though logicians of the time gave "∈" a different meaning than mere juxtaposition) is true just in case φ(x) is true. If we take the principle exactly as written then we end up with an impredicative theory. However, we can place restrictions on which φ we're allowed to take the comprehension of. (For example, if we say that φ must not contain any second-order quantifiers.) Thus, for any restriction R, if a set S is determined (i.e., generated via comprehension) by some R-predicate, then we say that S is "R-predicative". If every set in our language is R-predicative then we say that our language is "R-predicative". And then, as is often the case with hyphenated prefix things, the prefix gets dropped off and left implicit, whence "predicative" languages. And, naturally, languages which are not predicative are "impredicative".
That's the old school etymology. Since those days the terms have gone off and gotten lives of their own. The ways we use "predicative" and "impredicative" today are quite different, because the things we're concerned about have changed. So it can sometimes be a bit hard to see how the heck our modern usage ties back to this stuff. Honestly, I don't think knowing the etymology really helps any in terms of figuring out what the words are really about (these days).

Are typeclasses essential?

I once asked a question on haskell beginners, whether to use data/newtype or a typeclass. In my particular case it turned out that no typeclass was required. Additionally Tom Ellis gave me a brilliant advice, what to do when in doubt:
The simplest way of answering this which is mostly correct is:
use data
I know that typeclasses can make a few things a bit prettier, but not much AFAIK. It also strikes me that typeclasses are mostly used for brain-stem stuff, whereas in newer code, new typeclasses hardly ever get introduced and everything is done with data/newtype.
Now I wonder if there are cases where typeclasses are absolutely required and things could not be expressed with data/newtype?
Answering a similar question on Stack Overflow, Gabriel Gonzalez said:
Use type classes if:
There is only one correct behavior per given type
The type class has associated equations (i.e. "laws") that all instances must satisfy
Hmm ..
Or are typeclasses and data/newtype somewhat competing concepts which coexist for historical reasons?
I would argue that typeclasses are an essential part of Haskell.
They are the part of Haskell that makes it the easiest language I know of to refactor, and they are a great asset to your being able to reason about the correctness of code.
So, let's talk about dictionary passing.
Now, any sort of dictionary passing is a big improvement on the state of affairs in traditional object-oriented languages. We know how to do OOP with vtables in C++. However, the vtable is 'part of the object' in OOP languages. Fusing the vtable with the object forces your code into a form with a rigid discipline about who can extend the core types with new features; it's really only the original author of the class who has to incorporate all the things others want to bake into their type. This leads to "lava flow code" and all sorts of other design antipatterns.
Languages like C# give you the ability to hack in extension methods to fake new stuff, and "traits" in languages like Scala and multiple inheritance in other languages let you delegate some of the work as well, but they are partial solutions.
When you split the vtable from the objects they manipulate you get a heady rush of power. You can now pass them around wherever you want, but then of course you need to name them and talk about them. The ML discipline around modules / functors and the explicit dictionary passing style take this approach.
Typeclasses take a slightly different tack. We rely on the uniqueness of a typeclass instance for a given type, and it is in large part this choice that permits us to get away with such simple core data types.
Why?
Because we can move the use of the dictionaries to the use sites rather than carrying them around with the data types, and we can rely upon the fact that when we do so, nothing has changed about the behavior of the code.
Mechanical translation of the code to more complex manually passed dictionaries loses the uniqueness of such a dictionary at a given type. Passing the dictionaries in at different points in your program now leads to programs with greatly differing behavior. You may or may not have to remember the dictionaries your data type was constructed with, and woe betide you if you want to have conditional behavior based on what your arguments are.
For simple examples like Set you can get away with a manual dictionary translation. The price doesn't seem so high. You have to bake in the dictionary for, say, how you want to sort the Set when you make the object, and then insert/lookup would just preserve your choice. This might be a cost you can bear. When you union two Sets, now, of course, it's up in the air which ordering you get. Maybe you take the smaller and insert it into the larger, but then the ordering would change willy-nilly, so instead you have to take, say, the left and always insert it into the right, or document this haphazard behavior. You're now being forced into suboptimally performing solutions in the interest of 'flexibility'.
But Set is a trivial example. There you might bake an index into the type about which instance you are using; there is only one class involved. What happens when you want more complex behavior? One of the things we do with Haskell is work with monad transformers. Now you have lots of instances floating around -- and you don't have a good place to store them all. MonadReader, MonadWriter, MonadState, etc. may all apply, conditionally, based on the underlying monad. What happens when you hoist and swap it out, and now different things may or may not apply?
Carrying around explicit dictionaries for this is a lot of work; there isn't a good place to store them, and you are asking users to adopt a global program transformation to follow this practice.
These are the things that typeclasses make effortless.
Do I believe you should use them for everything?
Not by a long shot.
But I can't agree with the other replies here that they are inessential to Haskell.
Haskell is the only language that supplies them and they are critical to at least my ability to think in this language, and are a huge part of why I consider Haskell home.
I do agree with a few things here, use typeclasses when there are laws and when the choice is unambiguous.
I'd challenge, however, that if you don't have laws or the choice is ambiguous, you may not know enough about how to model the problem domain, and should be looking for a way to fit it into the typeclass mold, possibly even into existing abstractions -- and when you finally find that solution, you'll find you can easily reuse it.
Typeclasses are, in most cases, inessential. Any typeclass code can be mechanically converted into dictionary-passing style. They mainly provide convenience, sometimes an essential amount of convenience (cf. kmett's answer).
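To make the mechanical translation concrete, here is a minimal sketch (the names EqDict, elemBy and intEq are mine, not a standard API):
-- the class becomes a record of functions, the "dictionary"
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- a constraint (Eq a => ...) becomes an explicit parameter
elemBy :: EqDict a -> a -> [a] -> Bool
elemBy d x = any (eq d x)

-- each instance becomes a concrete dictionary value
intEq :: EqDict Int
intEq = EqDict (==)

-- usage: elemBy intEq 3 [1,2,3]  ==>  True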
Sometimes the single-instance property of typeclasses is used to enforce invariants. For example, you could not convert Data.Set into dictionary-passing style safely, because if you inserted twice with two different Ord dictionaries, you could break the data structure invariant. Of course you could still convert any working code to working code in dictionary-passing style, but you would not be able to outlaw as much broken code.
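A sketch of how that breakage would look, with a hypothetical dictionary-passing API (OrdDict and everything around it is made up for illustration):
data OrdDict a = OrdDict { cmp :: a -> a -> Ordering }

ascending, descending :: OrdDict Int
ascending  = OrdDict compare
descending = OrdDict (flip compare)

-- If a search tree is built with `ascending` but later searched or
-- inserted into with `descending`, the binary search walks the wrong
-- branches and the ordering invariant is silently broken. The Ord
-- constraint on Data.Set's operations rules this out by guaranteeing
-- a single dictionary per element type.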
Laws are another important cultural aspect of typeclasses. The compiler does not enforce laws, but Haskell programmers expect typeclasses to come with laws that all the instances satisfy. This can be leveraged to provide stronger guarantees about some functions. This advantage comes only from the conventions of the community, and is not a formal property of the language.
To answer that part of the question:
"typeclasses and data/newtype somewhat competing concepts"
No. Typeclasses are an extension to the type system, that allows you to make constraints on polymorphic arguments. Like most things in programming, they are, of course, syntactic sugar [so they aren't essential in the sense that their use can't be replaced by anything else]. That doesn't mean they're superfluous. It just means you could express similar things using other language facilities, but you'd lose some clarity while you're at it. Dictionary passing can be used for mostly the same things, but it's ultimately less strict in the type system because it allows changing behavior at runtime (which is also an excellent example of where you'd use dictionary passing instead of type classes).
Data and newtype still mean exactly the same thing whether you have typeclasses or not: they introduce a new type, in the case of data as a new kind of data structure, and in the case of newtype as a typesafe variant of type.
To expand slightly on my comment I would suggest always starting by using data and dictionary passing. If the boilerplate and manual instance plumbing becomes too much to bear then consider introducing a typeclass. I suspect this approach generally leads to a cleaner design.
I just want to make a really mundane point about syntax.
People tend to underestimate the convenience afforded by type classes, probably because they have never tried Haskell without using any. This is a "the grass is greener on the other side of the fence" sort of phenomenon.
-- hypothetical dictionary-passing style: here `Monad m` and `Floating a`
-- are ordinary values (dictionaries) passed as the first argument
while :: Monad m -> m Bool -> m a -> m ()
while m p body = (>>=) m p $ \x ->
  if x
    then (>>) m body (while m p body)
    else return m ()

average :: Floating a -> a -> a -> a -> a
average f a b c = (/) f ((+) (floatingToNum f) a ((+) (floatingToNum f) b c))
                        (fromInteger (floatingToNum f) 3)
This is the historical motivation for type classes and it remains valid today. If we didn't have type classes, we'd certainly need some kind of replacement for it to avoid writing monstrosities like these. (Maybe something like record puns or Agda's "open".)
I know that typeclasses can make a few things a bit prettier, but not much AFIK.
Bit prettier?? No! Way prettier! (as others have already noted)
However, the answer to this really depends very much on where the question comes from.
If Haskell is your tool of choice for serious software engineering, typeclasses are powerful and essential.
If you are a beginner using Haskell to learn (functional) programming, the complexity and difficulty of typeclasses can outweigh the advantages – certainly at the beginning of your studies.
Here are a couple of examples comparing GHC with Gofer (the predecessor of Hugs, itself a predecessor of modern Haskell):
Gofer
? 1 ++ [2,3,4]
ERROR: Type error in application
*** expression :: 1 ++ [2,3,4]
*** term :: 1
*** type :: Int
*** does not match :: [Int]
Now compare with GHC:
Prelude> 1 ++ [2,3,4]
<interactive>:2:1:
No instance for (Num [a0]) arising from the literal `1'
Possible fix: add an instance declaration for (Num [a0])
In the first argument of `(++)', namely `1'
In the expression: 1 ++ [2, 3, 4]
In an equation for `it': it = 1 ++ [2, 3, 4]
<interactive>:2:7:
No instance for (Num a0) arising from the literal `2'
The type variable `a0' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there are several potential instances:
instance Num Double -- Defined in `GHC.Float'
instance Num Float -- Defined in `GHC.Float'
instance Integral a => Num (GHC.Real.Ratio a)
-- Defined in `GHC.Real'
...plus three others
In the expression: 2
In the second argument of `(++)', namely `[2, 3, 4]'
In the expression: 1 ++ [2, 3, 4]
This should suggest that error-message-wise, not only are typeclasses not prettier, they can be uglier!
One can go all the way (in Gofer) and use the 'simple prelude', which uses no typeclasses at all. This makes it quite unrealistic for serious programming, but really neat for wrapping your head round Hindley-Milner:
Standard Prelude
? :t (==)
(==) :: Eq a => a -> a -> Bool
? :t (+)
(+) :: Num a => a -> a -> a
Simple Prelude
? :t (==)
(==) :: a -> a -> Bool
? :t (+)
(+) :: Int -> Int -> Int

Are there type signatures which Haskell can't verify?

This paper establishes that type inference (called "typability" in the paper) in System F is undecidable. What I've never heard mentioned elsewhere is the second result of the paper, namely that "type checking" in F is also undecidable. Here the "type checking" question means: given a term t, type T and typing environment A, is the judgment A ⊢ t : T derivable? That this question is undecidable (and that it's equivalent to the question of typability) is surprising to me, because it seems intuitively like it should be an easier question to answer.
But in any case, given that Haskell is based on System F (or F-omega, even), the result about type checking would seem to suggest that there is a Haskell term t and type T such that the compiler would be unable to decide whether t :: T is valid. If that's the case, I'm curious what such a term and type are... if it's not the case, what am I misunderstanding?
Presumably comprehending the paper would lead to a constructive answer, but I'm a little out of my depth :)
Type checking can be made decidable by enriching the syntax appropriately. For example, in the paper, we have lambdas written as \x -> e; to type-check this, you must guess the type of x. However, with a suitably enriched syntax, this can be written as \x :: t -> e instead, which takes the guess-work out of the process. Similarly, in the paper, they allow type-level lambdas to be implicit; that is, if e :: t, then also e :: forall a. t. To do typechecking, you have to guess when and how many foralls to add, and when to eliminate them. As before, you can make this more deterministic by adding syntax: we add two new expression forms /\a. e and e [t], and two new typing rules that say if e :: t, then /\a. e :: forall a. t, and if e :: forall a. t, then e [t'] :: t [t' / a] (where the brackets in t [t' / a] are substitution brackets). Then the syntax tells us when and how many foralls to add, and when to eliminate them as well.
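GHC's TypeApplications extension (added long after this answer was written) gives a source-level flavor of the e [t] form; a small sketch:
{-# LANGUAGE TypeApplications #-}
-- id :: forall a. a -> a; the @ syntax plays the role of e [t],
-- explicitly instantiating the forall instead of making GHC guess
n :: Int
n = id @Int 5

s :: String
s = id @String "hello"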
So the question is: can we go from Haskell to sufficiently-annotated System F terms? And the answer is yes, thanks to a few critical restrictions placed by the Haskell type system. The most critical is that all types are rank one*. Without going into too much detail, "rank" is related to how many times you have to go to the left of an -> constructor to find a forall.
Int -> Bool -- rank 0?
forall a. (a -> a) -- rank 1
(forall a. a -> a) -> (forall a. a -> a) -- rank 2
In particular, this restricts polymorphism a bit. We can't type something like this with rank one types:
foo :: (forall a. a -> a) -> (String, Bool) -- a rank-2 type
foo polymorphicId = (polymorphicId "hey", polymorphicId True)
The next most critical restriction is that type variables can only be replaced by monomorphic types. (This includes other type variables, like a, but not polymorphic types like forall a. a.) This ensures in part that type substitution preserves rank-one-ness.
It turns out that if you make these two restrictions, then not only is type inference decidable, but you also get principal (most general) types.
If we turn from Haskell to GHC, then we can talk not only about what is typable, but how the inference algorithm looks. In particular, in GHC, there are extensions that relax the above two restrictions; how does GHC do inference in that setting? Well, the answer is that it simply doesn't even try. If you want to write terms using those features, then you must add the typing annotations we talked about all the way back in paragraph one: you must explicitly annotate where foralls get introduced and eliminated. So, can we write a term that GHC's type-checker rejects? Yes, it's easy: simply use un-annotated rank-two (or higher) types or impredicativity. For example, the following doesn't type-check, even though it has an explicit type annotation and is typable with rank-two types:
{-# LANGUAGE Rank2Types #-}
foo :: (String, Bool)
foo = (\f -> (f "hey", f True)) id
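For contrast, here is one way to supply the annotation GHC needs, by naming the rank-2 type in an explicit signature (a sketch):
{-# LANGUAGE Rank2Types #-}
foo :: (String, Bool)
foo = g id
  where
    -- the signature tells GHC exactly where the forall lives
    g :: (forall a. a -> a) -> (String, Bool)
    g f = (f "hey", f True)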
* Actually, restricting to rank two is enough to make it decidable, but the algorithm for rank one types can be more efficient. Rank three types already give the programmer enough rope to make the inference problem undecidable. I'm not sure whether these facts were known at the time that the committee chose to restrict Haskell to rank-one types.
Here is an example for a type level implementation of the SKI calculus in Scala: http://michid.wordpress.com/2010/01/29/scala-type-level-encoding-of-the-ski-calculus/
The last example shows an unbounded iteration. If you do the same in Haskell (and I'm pretty sure you can), you have an example for an "untypeable expression".

Lambda for type expressions in Haskell?

Does Haskell, or a specific compiler, have anything like type-level lambdas (if that's even a term)?
To elaborate, say I have a parametrized type Foo a b and want Foo _ b to be an instance of, say, Functor. Is there any mechanism that would let me do something akin to
instance Functor (\a -> Foo a b) where
...
?
While sclv answered your direct question, I'll add as an aside that there's more than one possible meaning for "type-level lambda". Haskell has a variety of type operators but none really behave as proper lambdas:
Type constructors: Abstract type operators that introduce new types. Given a type A and a type constructor F, the function application F A is also a type but carries no further (type level) information than "this is F applied to A".
Polymorphic types: A type like a -> b -> a implicitly means forall a b. a -> b -> a. The forall binds the type variables within its scope, thus behaving somewhat like a lambda. If memory serves me this is roughly the "capital lambda" in System F.
Type synonyms: A limited form of type operators that must be fully applied, and can produce only base types and type constructors.
Type classes: Essentially functions from types/type constructors to values, with the ability to inspect the type argument (i.e., by pattern matching on type constructors in roughly the same way that regular functions pattern match on data constructors) and serving to define a membership predicate on types. These behave more like a regular function in some ways, but are very limited: type classes aren't first-class entities that can be manipulated, and they operate on types only as input (not output) and values only as output (definitely not input).
Functional dependencies: Along with some other extensions, these allow type classes to implicitly produce types as results as well, which can then be used as the parameters to other type classes. Still very limited, e.g. by being unable to take other type classes as arguments.
Type families: An alternate approach to what functional dependencies do; they allow functions on types to be defined in a manner that looks much closer to regular value-level functions. The usual restrictions still apply, however.
Other extensions relax some of the restrictions mentioned, or provide partial workarounds (see also: Oleg's type hackery). However, pretty much the one thing you can't do anywhere in any way is exactly what you were asking about, namely introduce new a binding scope with an anonymous function abstraction.
From TypeCompose:
newtype Flip (~>) b a = Flip { unFlip :: a ~> b }
http://hackage.haskell.org/packages/archive/TypeCompose/0.6.3/doc/html/Control-Compose.html#t:Flip
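For instance, Flip can recover a Functor in a type constructor's first argument; a minimal sketch (I use a named type variable p instead of the symbolic (~>) above, since modern GHC no longer accepts symbolic type variables, and the instance is mine, not necessarily what TypeCompose ships):
newtype Flip p b a = Flip { unFlip :: p a b }

-- Flip (,) b a wraps a pair (a, b); fmap maps over the first component
instance Functor (Flip (,) b) where
  fmap f (Flip (a, b)) = Flip (f a, b)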
Also, if something is a Functor in two arguments, you can make it a bifunctor:
http://hackage.haskell.org/packages/archive/category-extras/0.44.4/doc/html/Control-Bifunctor.html
(or, in a later category-extras, a more general version: http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Functor.html#t:Bifunctor)
I don't like the idea of answering my own question, but apparently, according to several people on #haskell on Freenode, Haskell doesn't have type-level lambdas.
EHC (and perhaps also its successor, UHC) has type-level lambdas, but they are undocumented and not as powerful as in a dependently-typed language. I recommend you use a dependently-typed language such as Agda (similar to Haskell) or Coq (different, but still pure functional at its core, and can be interpreted and compiled either lazily or strictly!) But I'm biased towards such languages, and this is probably 100x overkill for what you are asking for here!
The closest I know of to get a type lambda is by defining a type synonym. In your example,
data Foo a b = Foo a b
type FooR a b = Foo b a
instance Functor (FooR Int) where
...
But even with -XTypeSynonymInstances -XFlexibleInstances this doesn't work; GHC expects the type synonym to be fully applied in the instance head. There may be some way to arrange it with type families.
Yeah, what Gabe said, which is somewhat answered by type families:
http://www.haskell.org/haskellwiki/GHC/Type_families
Depending on the situation, you could replace your original type definition with a "flipped" version, and then make a type synonym for the "correct" version.
From
data X a b = Y a b
instance Functor (\a -> X a b) where ...
to
data XFlip b a = Y a b -- Use me for instance declarations
type X a b = XFlip b a -- Use me for everything else
instance Functor (XFlip b) where ...
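Spelled out as a complete, compilable sketch (the fmap body is my guess at the intended behavior):
data XFlip b a = Y a b

type X a b = XFlip b a

-- partially applying XFlip to b leaves X's first parameter free,
-- which is what the pseudo-syntax (\a -> X a b) was asking for
instance Functor (XFlip b) where
  fmap f (Y a b) = Y (f a) b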

Resources