Why do we need Control.Lens.Reified?

Why do we need Control.Lens.Reified? Is there some reason I can't place a Lens directly into a container? What does reify mean anyway?

We need reified lenses because Haskell's type system is predicative. I don't know the technical details of exactly what that means, but it prohibits types like
[Lens s t a b]
For some purposes, it's acceptable to use
Functor f => [(a -> f b) -> s -> f t]
instead, but when you reach into that, you don't get a Lens; you get a LensLike specialized to some functor or another. The ReifiedBlah newtypes let you hang on to the full polymorphism.
Operationally, [ReifiedLens s t a b] is a list of functions each of which takes a Functor f dictionary, while forall f . Functor f => [LensLike f s t a b] is a function that takes a Functor f dictionary and returns a list.
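For example (a sketch assuming a recent lens release, where the ReifiedLens constructor is spelled Lens and its field runLens):
{-# LANGUAGE RankNTypes #-}
import Control.Lens

-- A list of reified lenses into a pair; the unwrapped type
-- [Lens' (Int, Int) Int] would be rejected as impredicative.
pairLenses :: [ReifiedLens' (Int, Int) Int]
pairLenses = [Lens _1, Lens _2]

-- After runLens, each element is a genuine, fully polymorphic lens again.
sumTargets :: (Int, Int) -> Int
sumTargets s = sum [ s ^. runLens l | l <- pairLenses ]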
As for what "reify" means, well, the dictionary will say something, and that seems to translate into a rather stunning variety of specific meanings in Haskell. So no comment on that.

The problem is that, in Haskell, type abstraction and application are completely implicit; the compiler is supposed to insert them where needed. Various attempts at designing 'impredicative' extensions, where the compiler would make clever guesses about where to put them, have failed; so the safest thing is to rely on the Haskell 98 rules:
Type abstractions occur only at the top level of a function definition.
Type applications occur immediately whenever a variable with a polymorphic type is used in an expression.
So if I define a simple lens:[1]
lensHead :: Applicative f => (a -> f a) -> [a] -> f [a]
lensHead f [] = pure []
lensHead f (x:xn) = (:xn) <$> f x
and use it in an expression:
[lensHead]
lensHead gets automatically applied to some set of type parameters; at which point it's no longer a lens, because it's not polymorphic in the functor anymore. The take-away is: an expression always has some monomorphic type; so it's not a lens. (You'll note that the lens functions take arguments of type Getter and Setter, which are monomorphic types, for similar reasons to this. But a [Getter s a] isn't a list of lenses, because they've been specialized to only getters.)
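To see that specialization concretely, put two copies of lensHead into one list; the best type it can get is the Applicative analogue of the Functor f => [...] type from above, where the whole list shares a single functor f (twoHeads is just an illustrative name):
-- Instantiating f at Const a turns every element into a getter;
-- instantiating f at Identity turns every element into a setter.
twoHeads :: Applicative f => [(a -> f a) -> [a] -> f [a]]
twoHeads = [lensHead, lensHead]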
What does reify mean? The dictionary definition is 'make real'. 'Reifying' is used in philosophy to refer to the act of regarding or treating something as real (rather than ideal or abstract). In programming, it tends to refer to taking something that normally can't be treated as a data structure and representing it as one. For example, in really old Lisps, functions weren't first-class values; instead, you passed 'functions' around as S-expressions and evaled them when you needed to call one. The S-expressions represented the functions in a way you could manipulate within the program, which is referred to as reification.
In Haskell, we don't typically need such elaborate reification strategies as Lisp S-Expressions, partly because the language is designed to avoid needing them; but since
newtype ReifiedLens s t a b = ReifiedLens (Lens s t a b)
has the same effect of taking a polymorphic value and turning it into a true first-class value, it's referred to as reification.
Why does this work, if expressions always have monomorphic types? Well, because the Rank2Types extension adds a third rule:
Type abstractions occur at the top-level of the arguments to certain functions, with so-called rank 2 types.
ReifiedLens is such a rank-2 function; so when you say
ReifiedLens l
you get a type lambda around the argument to ReifiedLens, and then l is applied immediately to the lambda-bound type argument. So l is effectively just eta-expanded. (Compilers are free to eta-reduce this and just use l directly.)
Then, when you say
f (ReifiedLens l) = ...
on the right-hand side, l is a variable with polymorphic type, so every use of l is immediately and implicitly applied to whatever type arguments are needed for the expression to type-check. So everything works the way you expect.
The other way to think about it is that, if you say
newtype ReifiedLens s t a b = ReifiedLens { unReify :: Lens s t a b }
the two functions ReifiedLens and unReify act like explicit type abstraction and application operators; this allows the compiler to identify where you want the abstractions and applications to take place well enough that the issues with impredicative type systems don't come up.
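A short sketch of this in action (getAndSet is an illustrative name; Lens, view and set come from the lens library). Pattern-matching on ReifiedLens gives the right-hand side a fully polymorphic l, which view and set then instantiate at two different functors:
{-# LANGUAGE RankNTypes #-}
import Control.Lens (Lens, view, set)

newtype ReifiedLens s t a b = ReifiedLens { unReify :: Lens s t a b }

-- One stored lens, used both as a getter and as a setter.
getAndSet :: ReifiedLens s s a a -> s -> (a, a -> s)
getAndSet (ReifiedLens l) s = (view l s, \x -> set l x s)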
[1] In lens terminology, this is apparently called something other than a 'lens' (the Applicative constraint marks it as a Traversal rather than a true Lens); my entire knowledge of lenses comes from SPJ's presentation on them, so I have no way to verify that. The point remains, since the polymorphism is still necessary to make it work as both a getter and a setter.

Related

Isomorphism in newtype

I'm trying to understand State newtype and I'm struggling with this explanation of the isomorphism in a book:
Newtypes must have the same underlying representation as the type they wrap, as the newtype wrapper disappears at compile time. So the function contained in the newtype must be isomorphic to the type it wraps. That is, there must be a way to go from the newtype to the thing it wraps and back again without losing information.
What does it mean applied to State newtype?
newtype State s a = State { runState :: s -> (a, s) }
That explanation "there must be a way to go from the newtype to the thing it wraps and back again" isn't clear.
Also, can you please say where there is an isomorphism in these examples, where there is not, and why.
type Iso a b = (a -> b, b -> a)
newtype Sum a = Sum { getSum :: a }
sumIsIsomorphicWithItsContents :: Iso a (Sum a)
sumIsIsomorphicWithItsContents = (Sum, getSum)
(a -> Maybe b, b -> Maybe a)
([a] -> a, a -> [a])
The statement you quote makes no mention of State specifically. It is purely a statement about newtypes. It is a little misleading in referring to "the function contained in the newtype" because there is no requirement for the type wrapped by a newtype to be a function type - although this is the case for State and many other commonly used types defined by newtype.
The key thing for a newtype in general is exactly as it says: it has to simply wrap another type in a way that makes it trivial to go from the wrapped type to the wrapping one, and vice versa, with no loss of information - this is what it means for two types to be isomorphic, and also what makes it completely safe for the two types to have identical runtime representations.
It's easy to demonstrate typical data declarations that could not possibly fulfil this. For example, take any type with 2 constructors, such as Either:
data Either a b = Left a | Right b
It's obvious that this is not isomorphic to either of its constituent types. For example, the Left constructor embeds a inside Either a b, but you can't get any of the Right values this way.
And even with a single constructor, if it takes more than one argument - such as the tuple constructor (,) - then again, you can embed either of the constituent types (given an arbitrary value of the other type) but you can't possibly get every value.
This is why the newtype keyword is only allowed for types with a single constructor which takes a single argument. This always provides an isomorphism, because given newtype Foo a = Foo a, the Foo constructor and the function \(Foo a) -> a are trivially inverses of each other. And this works the same for more complicated examples where the type constructor takes more type arguments, and/or where the wrapped type is more complex.
Such is exactly the case with State:
newtype State s a = State {runState :: s -> (a, s)}
The functions State and runState respectively wrap and unwrap the underlying type (which in this case is a function), and clearly are inverse to each other - therefore they provide an isomorphism.
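Spelled out with the Iso type from the question, the isomorphism is exactly that pair of functions:
type Iso a b = (a -> b, b -> a)

newtype State s a = State { runState :: s -> (a, s) }

-- State . runState = id and runState . State = id,
-- so no information is lost in either direction.
stateIsIsomorphicWithItsContents :: Iso (s -> (a, s)) (State s a)
stateIsIsomorphicWithItsContents = (State, runState)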
Note finally that there is nothing special here about the use of record syntax in the definition - although it's very common in such cases in order to have an already-named "unwrapping" function. Other than this small convenience there is no difference from a newtype defined without record syntax.
To step back a little: newtype declarations are very similar to data declarations with a single constructor taking a single argument - the difference is mainly in performance, as the keyword tells the compiler that the two types are equivalent, so there is no runtime overhead of conversion between them, which there otherwise would be. (There is also a difference with regard to laziness, which I mention only for completeness.) As for why to do this rather than just use the underlying type: it provides extra type safety (the compiler sees 2 different types even though they're the same at runtime), and it allows typeclass instances to be specified without attaching them to the underlying type. Sum and Product are great examples here, as they provide Monoid instances for numeric types, based on addition and multiplication respectively, without giving either the undeserved distinction of being "the" Monoid instance for the underlying type.
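In miniature, those instances look like this (a sketch mirroring the shape of the definitions in Data.Monoid):
newtype Sum a = Sum { getSum :: a }
newtype Product a = Product { getProduct :: a }

-- The same underlying numeric type, two different lawful Monoids.
instance Num a => Semigroup (Sum a) where
  Sum x <> Sum y = Sum (x + y)
instance Num a => Monoid (Sum a) where
  mempty = Sum 0

instance Num a => Semigroup (Product a) where
  Product x <> Product y = Product (x * y)
instance Num a => Monoid (Product a) where
  mempty = Product 1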
And something similar is at work with State - when we use this type we signal explicitly that we're using it to represent state manipulation, which wouldn't be the case if we were just working with ordinary functions that happen to return a pair.

Can a Functor / Applicative be tied to one specific type or structure?

I’m trying to understand the applicative typeclass, and in particular the <*> function. Now I see its type signature is f (a -> b) -> f a -> f b and I take it that f is a functor, which I think of as some kind of structure wrapping some data. It appears to me that f must handle generic types, specifically it must be possible for f to have the parameterized types a, b, and in fact it must also support a -> b.
If my understanding is correct, then what we are doing is working with a type constructor f that is initially intended to wrap some data, like say a list of strings or a tree containing file buffers or whatever random thing we might want. But f must not be committed to any one type, because not only must it handle such a type, it is also required to handle functions from the data to other data. Thus if we had an example implementation of <*> which contained
Just2 f <*> j = fmap f j
the way to read this is that j is some kind of data inside of a Just2. f is a function which maps data to data. And f is wrapped in a Just2.
Is all of that right? Fundamentally my question is: Must anything applicative be so generic that it can always simultaneously handle arbitrary data and also functions from data to data? Or is there some way that you could have an applicative such that the only data it allows inside is, say, lists?
Yes, your understanding is largely correct. In particular, any specific Applicative, say one named Foo, has an associated specialization of the function pure with type signature:
pure :: a -> Foo a
that must work for any type a selected by the caller, such as:
> pure 10 :: Foo Int
> pure length :: Foo (String -> Int)
So, whatever Foo is, it has to be able to "handle" any provided type, because pure can technically be applied to any type without limitations.
One cautionary note, though. The idea that a functor f "wraps" data, so that f Int is somehow a "container" of Int values, can be a helpful intuition and is often literally correct (e.g., lists, trees, etc.), but it's not always strictly true. (Some counterexamples include the functors IO, (->) r, and Const b, which "contain" values in a very different sense than real containers.)
For "regular" Functors and Applicatives, you're right; they need to be able to handle values of any type. This is known as parametric polymorphism. If you have a type that you think is almost a Functor except that it can't do that, then consider the MonoFunctor typeclass from the mono-traversable package. It's the same idea as Functor, except with a single valid element type baked in. I'm not aware of any packages that have a monomorphic equivalent to Applicative. I think this is because <*> uses values of 3 different types inside the same container, so it doesn't have a good monomorphic analogue.
From your question, it sounds like there could be many different Applicative typeclasses. But there is only one; therefore it has to be generic.
The Applicative typeclass is defined by its functions (<*> and pure) and by the laws these functions need to adhere to. In any Haskell codebase there can be only one definition of the typeclass itself, but there can be many types that have an instance of it. You can define your own instances with an instance declaration, by defining the required functions. The compiler will not check that your definitions abide by the Applicative laws, though - this is up to you to ensure.
Typeclasses like Functor, Applicative and Monad do not specify a data type; they don't say that you need a "list-like" type, or a "box-like" type, even though lists and other containers like Either do have instances of these typeclasses. Any type that you can equip with the required functions, in such a way that the Applicative laws hold, becomes an Applicative.
It is often helpful to think of containers or boxes. But you need to stretch that intuition once you use Applicative instances of types like functions; e.g., a -> r.
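For example, with the function instance ((->) r), both "wrapped" values are functions reading the same argument:
> (+) <$> (*2) <*> (+10) $ 5   -- computes (5*2) + (5+10)
25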
Compared with interfaces in OO-languages, typeclasses are more powerful because you can define some data type to be an instance of a typeclass even if you do not have access to the source code of the type itself.

What is predicativity?

I have pretty decent intuition about types Haskell prohibits as "impredicative": namely ones where a forall appears in an argument to a type constructor other than ->. But just what is predicativity? What makes it important? How does it relate to the word "predicate"?
The central question of these type systems is: "Can you substitute a polymorphic type in for a type variable?". Predicative type systems are the no-nonsense schoolmarm answering, "ABSOLUTELY NOT", while impredicative type systems are your carefree buddy who thinks that sounds like a fun idea and what could possibly go wrong?
Now, Haskell muddies the discussion a bit because it believes polymorphism should be useful but invisible. So for the remainder of this post, I will be writing in a dialect of Haskell where uses of forall are not just allowed but required. This way we can distinguish between the type a, which is a monomorphic type which draws its value from a typing environment that we can define later, and the type forall a. a, which is one of the harder polymorphic types to inhabit. We'll also allow forall to go pretty much anywhere in a type -- as we'll see, GHC restricts its type syntax as a "fail-fast" mechanism rather than as a technical requirement.
Suppose we have told the compiler id :: forall a. a -> a. Can we later ask to use id as if it had type (forall b. b) -> (forall b. b)? Impredicative type systems are okay with this, because we can instantiate the quantifier in id's type to forall b. b, and substitute forall b. b for a everywhere in the result. Predicative type systems are a bit more wary of that: only monomorphic types are allowed in. (So if we had a particular b, we could write id :: b -> b.)
There's a similar story about [] :: forall a. [a] and (:) :: forall a. a -> [a] -> [a]. While your carefree buddy may be okay with [] :: [forall b. b] and (:) :: (forall b. b) -> [forall b. b] -> [forall b. b], the predicative schoolmarm isn't, so much. In fact, as you can see from the only two constructors of lists, there is no way to produce lists containing polymorphic values without instantiating the type variable in their constructors to a polymorphic value. So although the type [forall b. b] is allowed in our dialect of Haskell, it isn't really sensible -- there are no (terminating) terms of that type. This motivates GHC's decision to complain if you even think about such a type -- it's the compiler's way of telling you "don't bother".*
Well, what makes the schoolmarm so strict? As usual, the answer is about keeping type-checking and type-inference doable. Type inference for impredicative types is right out. Type checking seems like it might be possible, but it's bloody complicated and nobody wants to maintain that.
On the other hand, some might object that GHC is perfectly happy with some types that appear to require impredicativity:
> :set -XRank2Types
> :t id :: (forall b. b) -> (forall b. b)
{- no complaint, but very chatty -}
It turns out that some slightly-restricted versions of impredicativity are not too bad: specifically, type-checking higher-rank types (which allow type variables to be substituted by polymorphic types when they are only arguments to (->)) is relatively simple. You do lose type inference above rank-2, and principal types above rank-1, but sometimes higher rank types are just what the doctor ordered.
I don't know about the etymology of the word, though.
* You might wonder whether you can do something like this:
data FooTy a where
  FooTm :: FooTy (forall a. a)
Then you would get a term (FooTm) whose type had something polymorphic as an argument to something other than (->) (namely, FooTy), you don't have to cross the schoolmarm to do it, and so the belief "applying non-(->) stuff to polymorphic types isn't useful because you can't make them" would be invalidated. GHC doesn't let you write FooTy, and I will admit I'm not sure whether there's a principled reason for the restriction or not.
(Quick update some years later: there is a good, principled reason that FooTm is still not okay. Namely, the way that GADTs are implemented in GHC is via type equalities, so the expanded type of FooTm is actually FooTm :: forall a. (a ~ forall b. b) => FooTy a. Hence to actually use FooTm, one would indeed need to instantiate a type variable with a polymorphic type. Thanks to Stephanie Weirich for pointing this out to me.)
Let me just add a point regarding the "etymology" issue, since the other answer by @DanielWagner covers much of the technical ground.
A predicate on something like a is a -> Bool. Now a predicate logic is one that can in some sense reason about predicates -- so if we have some predicate P and we can talk about, for a given a, P(a), now in a "predicate logic" (such as first-order logic) we can also say ∀a. P(a). So we can quantify over variables and discuss the behavior of predicates over such things.
Now, in turn, we say a statement is predicative if all of the things a predicate is applied to are introduced prior to it. So statements are "predicated on" things that already exist. In turn, a statement is impredicative if it can in some sense refer to itself by its "bootstraps".
So in the case of, e.g., the id example above, we find that we can give a type to id such that it takes something of the type of id to something else of the type of id. So now we can give a function a type where a quantified variable (introduced by forall a.) can "expand" to be the same type as that of the entire function itself!
Hence impredicativity introduces a possibility of a certain "self reference". But wait, you might say, wouldn't such a thing lead to contradiction? The answer is: "well, sometimes." In particular, System F, which is the polymorphic lambda calculus and the essential core of GHC's Core language, allows a form of impredicativity that nonetheless has two levels -- the value level, and the type level, which is allowed to quantify over itself. In this two-level stratification, we can have impredicativity and not contradiction/paradox.
Although note that this neat trick is very delicate and easy to screw up by the addition of more features, as this collection of articles by Oleg indicates: http://okmij.org/ftp/Haskell/impredicativity-bites.html
I'd like to make a comment on the etymology issue, since @sclv's answer isn't quite right (etymologically, not conceptually).
Go back in time, to the days of Russell when everything is set theory -- including logic. One of the logical notions of particular import is the "principle of comprehension"; that is, given some logical predicate φ:A→2 we would like to have some principle to determine the set of all elements satisfying that predicate, written as "{x | φ(x) }" or some variation thereon. The key point to bear in mind is that "sets" and "predicates" are viewed as being fundamentally different things: predicates are mappings from objects to truth values, and sets are objects. Thus, for example, we may allow quantifying over sets but not quantifying over predicates.
Now, Russell was rather concerned by his eponymous paradox, and sought some way to get rid of it. There are numerous fixes, but the one of interest here is to restrict the principle of comprehension. But first, the formal definition of the principle: ∃S.∀x.S x ↔︎ φ(x); that is, for our particular φ there exists some object (i.e., set) S such that for every object (also a set, but thought of as an element) x, we have that S x (you can think of this as meaning "x∈S", though logicians of the time gave "∈" a different meaning than mere juxtaposition) is true just in case φ(x) is true. If we take the principle exactly as written then we end up with an impredicative theory. However, we can place restrictions on which φ we're allowed to take the comprehension of. (For example, if we say that φ must not contain any second-order quantifiers.) Thus, for any restriction R, if a set S is determined (i.e., generated via comprehension) by some R-predicate, then we say that S is "R-predicative". If every set in our language is R-predicative then we say that our language is "R-predicative". And then, as is often the case with hyphenated prefix things, the prefix gets dropped off and left implicit, whence "predicative" languages. And, naturally, languages which are not predicative are "impredicative".
That's the old school etymology. Since those days the terms have gone off and gotten lives of their own. The ways we use "predicative" and "impredicative" today are quite different, because the things we're concerned about have changed. So it can sometimes be a bit hard to see how the heck our modern usage ties back to this stuff. Honestly, I don't think knowing the etymology really helps any in terms of figuring out what the words are really about (these days).

Given a Haskell type signature, is it possible to generate the code automatically?

What it says in the title. If I write a type signature, is it possible to algorithmically generate an expression which has that type signature?
It seems plausible that it might be possible to do this. We already know that if the type is a special-case of a library function's type signature, Hoogle can find that function algorithmically. On the other hand, many simple problems relating to general expressions are actually unsolvable (e.g., it is impossible to know if two functions do the same thing), so it's hardly implausible that this is one of them.
It's probably bad form to ask several questions all at once, but I'd like to know:
Can it be done?
If so, how?
If not, are there any restricted situations where it becomes possible?
It's quite possible for two distinct expressions to have the same type signature. Can you compute all of them? Or even some of them?
Does anybody have working code which does this stuff for real?
Djinn does this for a restricted subset of Haskell types, corresponding to a first-order logic. It can't manage recursive types or types that require recursion to implement, though; so, for instance, it can't write a term of type (a -> a) -> a (the type of fix), which corresponds to the proposition "if a implies a, then a". That proposition is clearly false as an axiom: if you had it, you could use it to prove anything. Indeed, this is why fix gives rise to ⊥.
If you do allow fix, then writing a program to give a term of any type is trivial; the program would simply print fix id for every type.
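Concretely, with fix from Data.Function:
import Data.Function (fix)

-- With general recursion every type is "inhabited" - by a term
-- that never terminates. Denotationally, this is bottom.
anything :: a
anything = fix id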
Djinn is mostly a toy, but it can do some fun things, like deriving the correct Monad instances for Reader and Cont given the types of return and (>>=). You can try it out by installing the djinn package, or using lambdabot, which integrates it as the @djinn command.
Oleg at okmij.org has an implementation of this. There is a short introduction here but the literate Haskell source contains the details and the description of the process. (I'm not sure how this corresponds to Djinn in power, but it is another example.)
There are cases where there is no unique function:
fst', snd' :: (a, a) -> a
fst' (a,_) = a
snd' (_,b) = b
Not only this; there are cases where there are an infinite number of functions:
list0, list1, list2 :: [a] -> a
list0 l = l !! 0
list1 l = l !! 1
list2 l = l !! 2
-- etc.
-- Or
mkList0, mkList1, mkList2 :: a -> [a]
mkList0 _ = []
mkList1 a = [a]
mkList2 a = [a,a]
-- etc.
(If you only want total functions, then consider [a] as restricted to infinite lists for list0, list1 etc, i.e. data List a = Cons a (List a))
In fact, if you have recursive types, any types involving these correspond to an infinite number of functions. However, at least in the case above, there is a countable number of functions, so it is possible to create an (infinite) list containing all of them. But, I think the type [a] -> [a] corresponds to an uncountably infinite number of functions (again restrict [a] to infinite lists) so you can't even enumerate them all!
(Summary: there are types that correspond to a finite, countably infinite and uncountably infinite number of functions.)
This is impossible in general (especially for languages like Haskell, which does not even have the strong normalization property), and only possible in some (very) special cases (and for more restricted languages), such as when the codomain type has only one constructor (for example, a function f :: forall a. a -> () can be determined uniquely). In order to reduce the set of possible definitions for a given signature to a singleton set with just one definition, one needs to impose more restrictions (in the form of additional properties; for example, it is still difficult to imagine how this could be helpful without an example of use).
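For that example, totality pins the implementation down completely:
-- The only total function of type forall a. a -> ():
-- it can do nothing with its argument but ignore it.
f :: a -> ()
f _ = ()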
From the (n-)categorical point of view, types correspond to objects, terms correspond to arrows (constructors also correspond to arrows), and function definitions correspond to 2-arrows. The question is analogous to asking whether one can construct a 2-category with the required properties by specifying only a set of objects. It's impossible, since you need either an explicit construction for the arrows and 2-arrows (i.e., writing terms and definitions), or a deductive system which allows the necessary structure to be deduced from a certain set of properties (which still need to be defined explicitly).
There is also an interesting question: given an ADT (i.e., a subcategory of Hask), is it possible to automatically derive instances for Typeable, Data (yes, using SYB), Traversable, Foldable, Functor, Pointed, Applicative, Monad, etc.? In this case, we have the necessary signatures as well as additional properties (for example, the monad laws; these properties cannot be expressed in Haskell, but they can be expressed in a language with dependent types). There is an interesting construction here:
http://ulissesaraujo.wordpress.com/2007/12/19/catamorphisms-in-haskell
which shows what can be done for the list ADT.
The question is actually rather deep and I'm not sure of the answer, if you're asking about the full glory of Haskell types including type families, GADT's, etc.
What you're asking is whether a program can automatically prove that an arbitrary type is inhabited (contains a value) by exhibiting such a value. A principle called the Curry-Howard Correspondence says that types can be interpreted as mathematical propositions, and the type is inhabited if the proposition is constructively provable. So you're asking if there is a program that can prove a certain class of propositions to be theorems. In a language like Agda, the type system is powerful enough to express arbitrary mathematical propositions, and proving arbitrary ones is undecidable by Gödel's incompleteness theorem. On the other hand, if you drop down to (say) pure Hindley-Milner, you get a much weaker and (I think) decidable system. With Haskell 98, I'm not sure, because type classes are supposed to be equivalent in power to GADTs.
With GADT's, I don't know if it's decidable or not, though maybe some more knowledgeable folks here would know right away. For example it might be possible to encode the halting problem for a given Turing machine as a GADT, so there is a value of that type iff the machine halts. In that case, inhabitability is clearly undecidable. But, maybe such an encoding isn't quite possible, even with type families. I'm not currently fluent enough in this subject for it to be obvious to me either way, though as I said, maybe someone else here knows the answer.
(Update:) Oh a much simpler interpretation of your question occurs to me: you may be asking if every Haskell type is inhabited. The answer is obviously not. Consider the polymorphic type
a -> b
There is no function with that signature (not counting something like unsafeCoerce, which makes the type system inconsistent).

Lambda for type expressions in Haskell?

Does Haskell, or a specific compiler, have anything like type-level lambdas (if that's even a term)?
To elaborate, say I have a parametrized type Foo a b and want Foo _ b to be an instance of, say, Functor. Is there any mechanism that would let me do something akin to
instance Functor (\a -> Foo a b) where
...
?
While sclv answered your direct question, I'll add as an aside that there's more than one possible meaning for "type-level lambda". Haskell has a variety of type operators but none really behave as proper lambdas:
Type constructors: Abstract type operators that introduce new types. Given a type A and a type constructor F, the function application F A is also a type but carries no further (type level) information than "this is F applied to A".
Polymorphic types: A type like a -> b -> a implicitly means forall a b. a -> b -> a. The forall binds the type variables within its scope, thus behaving somewhat like a lambda. If memory serves me this is roughly the "capital lambda" in System F.
Type synonyms: A limited form of type operators that must be fully applied, and can produce only base types and type constructors.
Type classes: Essentially functions from types/type constructors to values, with the ability to inspect the type argument (i.e., by pattern matching on type constructors in roughly the same way that regular functions pattern match on data constructors) and serving to define a membership predicate on types. These behave more like a regular function in some ways, but are very limited: type classes aren't first-class entities that can be manipulated, and they operate on types only as input (not output) and values only as output (definitely not input).
Functional dependencies: Along with some other extensions, these allow type classes to implicitly produce types as results as well, which can then be used as the parameters to other type classes. Still very limited, e.g. by being unable to take other type classes as arguments.
Type families: An alternate approach to what functional dependencies do; they allow functions on types to be defined in a manner that looks much closer to regular value-level functions. The usual restrictions still apply, however.
Other extensions relax some of the restrictions mentioned, or provide partial workarounds (see also: Oleg's type hackery). However, pretty much the one thing you can't do anywhere in any way is exactly what you were asking about, namely introduce new a binding scope with an anonymous function abstraction.
From TypeCompose:
newtype Flip (~>) b a = Flip { unFlip :: a ~> b }
http://hackage.haskell.org/packages/archive/TypeCompose/0.6.3/doc/html/Control-Compose.html#t:Flip
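For instance, here is a sketch of using a Flip-style newtype to make Either functorial in its first argument (the instance is mine, and I've written the newtype with an ordinary type variable f in place of the (~>) operator):
newtype Flip f b a = Flip { unFlip :: f a b }

-- Either a b, made a Functor over its first parameter.
instance Functor (Flip Either b) where
  fmap g (Flip (Left a))  = Flip (Left (g a))
  fmap g (Flip (Right b)) = Flip (Right b)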
Also, if something is a Functor in two arguments, you can make it a bifunctor:
http://hackage.haskell.org/packages/archive/category-extras/0.44.4/doc/html/Control-Bifunctor.html
(or, in a later category-extras, a more general version: http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Functor.html#t:Bifunctor)
I don't like the idea of answering my own question, but apparently, according to several people on #haskell on Freenode, Haskell doesn't have type-level lambdas.
EHC (and perhaps also its successor, UHC) has type-level lambdas, but they are undocumented and not as powerful as in a dependently-typed language. I recommend you use a dependently-typed language such as Agda (similar to Haskell) or Coq (different, but still pure functional at its core, and can be interpreted and compiled either lazily or strictly!) But I'm biased towards such languages, and this is probably 100x overkill for what you are asking for here!
The closest I know of to get a type lambda is by defining a type synonym. In your example,
data Foo a b = Foo a b
type FooR a b = Foo b a
instance Functor (FooR Int) where
...
But even with -XTypeSynonymInstances -XFlexibleInstances this doesn't work; GHC expects the type synonym to be fully applied in the instance head. There may be some way to arrange it with type families.
Yeah, what Gabe said, which is somewhat answered by type families:
http://www.haskell.org/haskellwiki/GHC/Type_families
Depending on the situation, you could replace your original type definition with a "flipped" version, and then make a type synonym for the "correct" version.
From
data X a b = Y a b
instance Functor (\a -> X a b) where ...
to
data XFlip b a = Y a b -- Use me for instance declarations
type X a b = XFlip b a -- Use me for everything else
instance Functor (XFlip b) where
  fmap f (Y a b) = Y (f a) b
